.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd April 27, 2018
.Dt ZPOOL 8 SMM
.Os Linux
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl ?
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Nm
.Cm checkpoint
.Op Fl d, -discard
.Ar pool
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Nm
.Cm detach
.Ar pool device
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm offline
.Op Fl f
.Op Fl t
.Ar pool Ar device Ns ...
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Nm
.Cm reguid
.Ar pool
.Nm
.Cm reopen
.Op Fl n
.Ar pool
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Nm
.Cm remove
.Fl s
.Ar pool
.Nm
.Cm replace
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool Ar device Op Ar new_device
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Nm
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Oo Ar device Oc Ns ...
.Nm
.Cm status
.Oo Fl c Ar SCRIPT Oc
.Op Fl gLPvxD
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm sync
.Oo Ar pool Oc Ns ...
.Nm
.Cm upgrade
.Nm
.Cm upgrade
.Fl v
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width Ds
.It Sy disk
A block device, typically located under
.Pa /dev .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
A disk can be specified by a full path, or it can be a shorthand name
.Po the relative portion of the path under
.Pa /dev
.Pc .
A whole disk can be specified by omitting the slice or partition designation.
For example,
.Pa sda
is equivalent to
.Pa /dev/sda .
When given a whole disk, ZFS automatically labels the disk, if necessary.
.It Sy file
A regular file.
The use of files as a backing store is strongly discouraged.
It is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part.
A file must be specified by a full path.
.It Sy mirror
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
failing before data integrity is compromised.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Qq write hole
.Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group.
.Pp
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data.
The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group.
The
.Sy raidz
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
compromised.
The minimum number of devices in a raidz group is one more than the number of
parity disks.
The recommended number is between 3 and 9 to help increase performance.
.It Sy spare
A special pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
devices.
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
.Sx Intent Log
section.
.It Sy dedup
A device dedicated solely for allocating dedup data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one dedup device is specified, then allocations are load-balanced
between devices.
.It Sy special
A device dedicated solely for allocating various kinds of internal metadata,
and optionally small file data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one special device is specified, then allocations are
load-balanced between devices.
.Pp
For more information on special allocations, see the
.Sx Special Allocation Class
section.
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
The keywords
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror sda sdb mirror sdc sdd
.Ed
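.Pp
Similarly, the following creates a pool with a single raidz2 root vdev spanning
four disks
.Pq the device names are illustrative :
.Bd -literal
# zpool create mypool raidz2 sda sdb sdc sdd
.Ed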
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of a top-level vdev, such as a mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs are in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices are in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs are in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices are in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported while a device is unavailable, the device is identified
by a unique identifier instead of its path, since the path was never correct
in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror sda sdb spare sdc sdd
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
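.Pp
For example, a spare can be added to and later removed from an existing pool
.Pq again with illustrative device names :
.Bd -literal
# zpool add pool spare sde
# zpool remove pool sde
.Ed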
.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported, since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
.Pp
Spares cannot replace log devices.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable
storage devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 2
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool sda sdb log sdc
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
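.Pp
For instance, the following creates a pool with two data disks and a mirrored
log device:
.Bd -literal
# zpool create pool sda sdb log mirror sdc sdd
.Ed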
.Pp
Log devices can be added, replaced, attached, detached and removed.
In addition, log devices are imported and exported as part of the pool
that contains them.
Mirrored devices can be removed by specifying the top-level mirror vdev.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read-workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool sda sdb cache sdc sdd
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
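.Pp
Cache devices can also be added to and removed from an existing pool:
.Bd -literal
# zpool add pool cache sde
# zpool remove pool sde
.Ed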
.Ss Pool checkpoint
Before starting critical procedures that include destructive actions (e.g.
.Nm zfs Cm destroy
), an administrator can checkpoint the pool's state and, in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
Otherwise, the checkpoint can be discarded when the procedure has completed
successfully.
.Pp
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
with care as it contains every part of the pool's state, from properties to vdev
configuration.
Thus, while a pool has a checkpoint, certain operations are not allowed.
Specifically, vdev removal/attach/detach, mirror splitting, and changing the
pool's guid are prohibited.
Adding a new vdev is supported, but in the case of a rewind it will have to be
added again.
Finally, users of this feature should keep in mind that scrubs in a pool that
has a checkpoint do not repair checkpointed data.
.Pp
To create a checkpoint for a pool:
.Bd -literal
# zpool checkpoint pool
.Ed
.Pp
To later rewind to its checkpointed state, you need to first export the pool
and then rewind it during import:
.Bd -literal
# zpool export pool
# zpool import --rewind-to-checkpoint pool
.Ed
.Pp
To discard the checkpoint from a pool:
.Bd -literal
# zpool checkpoint -d pool
.Ed
.Pp
Dataset reservations (controlled by the
.Sy reservation
or
.Sy refreservation
zfs properties) may be unenforceable while a checkpoint exists, because the
checkpoint is allowed to consume the dataset's reservation.
Finally, data that is part of the checkpoint but has been freed in the
current state of the pool won't be scanned during a scrub.
.Ss Special Allocation Class
The allocations in the special class are dedicated to specific block types.
By default this includes all metadata, the indirect blocks of user data, and
any dedup data.
The class can also be provisioned to accept a limited percentage of small file
data blocks.
.Pp
A pool must always have at least one general (non-special) vdev before
other devices can be assigned to the special class.
If the special class becomes full, then allocations intended for it will spill
back into the normal class.
.Pp
Dedup data can be excluded from the special class by setting the
.Sy zfs_ddt_data_is_special
zfs module parameter to false (0).
.Pp
Inclusion of small file blocks in the special class is opt-in.
Each dataset can control the size of small file blocks allowed in the special
class by setting the
.Sy special_small_blocks
dataset property.
It defaults to zero, so you must opt in by setting it to a non-zero value.
See
.Xr zfs 8
for more info on setting this property.
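.Pp
As an illustrative example, the following creates a pool with a mirrored
special vdev and then opts a dataset into storing small file blocks there
.Pq the pool and dataset names are hypothetical :
.Bd -literal
# zpool create pool mirror sda sdb special mirror sdc sdd
# zfs create pool/fs
# zfs set special_small_blocks=32K pool/fs
.Ed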
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Sy allocated
Amount of storage used within the pool.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy load_guid
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time the pool is loaded (i.e. it
does not persist across imports/exports) and never changes while the pool is
loaded (even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
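.Pp
For example, the read-only space properties can be inspected with
.Nm zpool Cm get
.Pq the pool name is illustrative :
.Bd -literal
# zpool get size,allocated,free,fragmentation,capacity pool
.Ed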
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
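.Pp
For example, to import a pool read-only
.Pq pool name illustrative :
.Bd -literal
# zpool import -o readonly=on pool
.Ed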
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent: the sector size is
.Sy 2
raised to the power of this value (internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the special
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk) write size will be set to the specified size,
so this represents a space vs. performance trade-off.
For optimal performance, the pool sector size should be greater than or equal
to the sector size of the underlying disks.
The typical case for setting this property is when performance is important
and the underlying disks use 4KiB sectors but report 512B sectors to the OS
(for compatibility reasons); in that case, set
.Sy ashift=12
(which is 1<<12 = 4096).
When set, this property is used as the default hint value in subsequent vdev
operations (add, attach and replace).
Changing this value will not modify any existing vdev, not even on disk
replacement; however it can be used, for instance, to replace a dying 512B
sectors disk with a newer 4KiB sectors device: this will probably result in
bad performance but at the same time could prevent loss of data.
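.Pp
For example, to create a pool aligned for 4KiB-sector disks:
.Bd -literal
# zpool create -o ashift=12 pool mirror sda sdb
.Ed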
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device mapper) provided
that you use the /dev/disk/by-vdev paths set up by vdev_id.conf.
See the
.Xr vdev_id 8
man page for more details.
Autoreplace and autoonline require that the ZFS Event Daemon be configured and
running.
See the
.Xr zed 8
man page for more details.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade
programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location,
from which the pool can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option.
This property is intended to be used in failover configurations where multiple
hosts have access to a pool on shared storage.
When this property is on, periodic writes to storage occur to show the pool is
in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 5
man page.
In order to enable this property, each host must set a unique hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl-module-parameters 5
for additional details.
The default value is
.Sy off .
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed, are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links.
This can be used to look up the current block device name regardless of the
/dev/disk/ path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual addition can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display real paths for
.Ar vdev Ns s
instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
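.Pp
For example, the following adds a third mirrored vdev to an existing pool
.Pq device names illustrative :
.Bd -literal
# zpool add pool mirror sde sdf
.Ed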
.It Xo
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
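.Pp
For example, to convert a single-disk pool into a two-way mirror:
.Bd -literal
# zpool attach pool sda sdb
.Ed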
.It Xo
.Nm
.Cm checkpoint
.Op Fl d, -discard
.Ar pool
.Xc
Checkpoints the current state of
.Ar pool ,
which can later be restored by
.Nm zpool Cm import --rewind-to-checkpoint .
The existence of a checkpoint in a pool prohibits the following
.Nm zpool
commands:
.Cm remove ,
.Cm attach ,
.Cm detach ,
.Cm split ,
and
.Cm reguid .
In addition, it may break reservation boundaries if the pool lacks free
space.
The
.Nm zpool Cm status
command indicates the existence of a checkpoint or the progress of discarding a
checkpoint from a pool.
The
.Nm zpool Cm list
command reports how much space the checkpoint takes from the pool.
.Bl -tag -width Ds
.It Fl d, -discard
Discards an existing checkpoint from
.Ar pool .
.El
.It Xo
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices is specified, only those errors associated with the
specified device or devices are cleared.
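.Pp
For example, to clear errors associated with a single device:
.Bd -literal
# zpool clear pool sda
.Ed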
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy \&- ,
colon
.Pq Qq Sy \&: ,
space
.Pq Qq Sy \&\ ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with
.Sy mirror ,
.Sy raidz ,
.Sy spare ,
and the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command verifies that each device specified is accessible and not currently
in use by another subsystem.
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 5
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature.
See
.Xr zpool-features 5
for a list of valid features that can be set.
The value can be either
.Sy disabled
or
.Sy enabled .
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root .
.It Fl t Ar tname
Sets the in-core pool name to
.Sy tname
while the on-disk name will be the name specified as the pool name
.Sy pool .
This will set the default cachefile property to none.
This is intended to handle name space collisions when creating pools for other
systems, such as virtual machines or physical machines whose pools live on
network block devices.
.El
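.Pp
For example, the following creates a mirrored pool mounted at a non-default
location
.Pq names and paths illustrative :
.Bd -literal
# zpool create -m /export/tank tank mirror sda sdb
.Ed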
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
If the device may be re-added to the pool later, consider using the
.Nm zpool Cm offline
command instead.
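.Pp
For example, to detach one side of the mirror created in the
.Nm zpool Cm attach
example above:
.Bd -literal
# zpool detach pool sdb
.Ed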
.It Xo
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Xc
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare.
For more information about the subclasses and event payloads that can be
generated, see the
.Xr zfs-events 5
man page.
.Bl -tag -width Ds
.It Fl c
Clear all previous events.
.It Fl f
Follow mode.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl v
Print the entire payload for each event.
.El
.It Xo
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just partitions, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl a
Exports all pools imported on the system.
.It Fl f
Forcefully unmount all datasets, using the
.Nm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
name          Name of storage pool
property      Property name
value         Property value
source        Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
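.Pp
For example, to retrieve a single property value in a script-friendly form:
.Bd -literal
# zpool get -Hp -o value size pool
.Ed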
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Xc
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, and the
vdev layout and current health of each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
.It Xo
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Xc
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag encrypted datasets will be left unavailable until the keys are
loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl -rewind-to-checkpoint
Rewinds pool to the checkpointed state.
Once the pool is imported with this flag there is no way to undo the rewind.
All changes and data that were written after the checkpoint are lost!
The only exception is when the
.Sy readonly
mounting option is enabled.
In this case, the checkpointed state of the pool is opened and an
administrator can see what the pool would look like if they were
to fully rewind.
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted.
A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option.
Determines whether extreme measures to find a valid txg should take place.
This allows the pool to be rolled back to a txg which is no longer guaranteed
to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable checksum
errors.
For more details about pool recovery mode, see the
.Fl F
option, above.
WARNING: This option can be extremely hazardous to the health of your pool
and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback.
Implies
.Fl FX .
For more details about pool recovery mode, see the
.Fl X
option, above.
WARNING: This option can be extremely hazardous to the health of your pool
and should only be used as a last resort.
.El
.It Xo
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Xc
Imports a specific pool.
A pool can be identified by its name or the numeric identifier.
If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active.
It cannot be determined if this was a failed export, or whether the device is
really in use from another host.
To import a pool in this state, the
.Fl f
option is required.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports a destroyed pool.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag encrypted datasets will be left unavailable until the keys are
loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted.
A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option.
Determines whether extreme measures to find a valid txg should take place.
This allows the pool to be rolled back to a txg which is no longer guaranteed
to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable checksum
errors.
For more details about pool recovery mode, see the
.Fl F
option, above.
WARNING: This option can be extremely hazardous to the health of your pool
and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback.
Implies
.Fl FX .
For more details about pool recovery mode, see the
.Fl X
option, above.
WARNING: This option can be extremely hazardous to the health of your pool
and should only be used as a last resort.
.It Fl t
Used with
.Ar newpool .
Specifies that
.Ar newpool
is temporary.
Temporary pool names last until export.
Ensures that the original pool name will be used in all label updates and
therefore is retained upon export.
Will also set
.Fl o Sy cachefile Ns = Ns Sy none
when not explicitly specified.
.El
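.Pp
For example, to import an exported pool under a new name
.Pq names illustrative :
.Bd -literal
# zpool import tank newtank
.Ed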
1595 .It Xo
1596 .Nm
1597 .Cm iostat
1598 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
1599 .Op Fl T Sy u Ns | Ns Sy d
1600 .Op Fl ghHLpPvy
1601 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
1602 .Op Ar interval Op Ar count
1603 .Xc
1604 Displays I/O statistics for the given pools/vdevs. You can pass in a
1605 list of pools, a pool and list of vdevs in that pool, or a list of any
1606 vdevs from any pool. If no items are specified, statistics for every
1607 pool in the system are shown.
1608 When given an
1609 .Ar interval ,
1610 the statistics are printed every
1611 .Ar interval
1612 seconds until ^C is pressed. If count is specified, the command exits
1613 after count reports are printed. The first report printed is always
1614 the statistics since boot regardless of whether
1615 .Ar interval
1616 and
1617 .Ar count
1618 are passed. However, this behavior can be suppressed with the
1619 .Fl y
1620 flag. Also note that the units of
1621 .Sy K ,
1622 .Sy M ,
1623 .Sy G ...
1624 that are printed in the report are in base 1024. To get the raw
1625 values, use the
1626 .Fl p
1627 flag.
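.Pp
For example, the following prints per-vdev statistics for the
illustratively-named pool
.Em tank
every 5 seconds, exiting after 3 reports:
.Bd -literal
# zpool iostat -v tank 5 3
.Ed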
1628 .Bl -tag -width Ds
1629 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1630 Run a script (or scripts) on each vdev and include the output as a new column
1631 in the
1632 .Nm zpool Cm iostat
1633 output. Users can run any script found in their
1634 .Pa ~/.zpool.d
1635 directory or from the system
1636 .Pa /etc/zfs/zpool.d
1637 directory. Script names containing the slash (/) character are not allowed.
1638 The default search path can be overridden by setting the
1639 ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
1640 .Fl c
1641 if they have the ZPOOL_SCRIPTS_AS_ROOT
1642 environment variable set. If a script requires the use of a privileged
1643 command, like
1644 .Xr smartctl 8 ,
then it is recommended that you allow the user access to it in
1646 .Pa /etc/sudoers
1647 or add the user to the
1648 .Pa /etc/sudoers.d/zfs
1649 file.
1650 .Pp
1651 If
1652 .Fl c
1653 is passed without a script name, it prints a list of all scripts.
1654 .Fl c
1655 also sets verbose mode
1656 .No \&( Ns Fl v Ns No \&).
1657 .Pp
1658 Script output should be in the form of "name=value". The column name is
1659 set to "name" and the value is set to "value". Multiple lines can be
1660 used to output multiple columns. The first line of output not in the
1661 "name=value" format is displayed without a column title, and no more
1662 output after that is displayed. This can be useful for printing error
1663 messages. Blank or NULL values are printed as a '-' to make output
1664 awk-able.
1665 .Pp
1666 The following environment variables are set before running each script:
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_PATH
Full path to the vdev.
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*).
For use with device mapper, multipath, or partitioned vdevs.
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
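.Pp
A minimal sketch of such a script, with a hypothetical name and column, could
be saved as
.Pa ~/.zpool.d/upath :
.Bd -literal
#!/bin/sh
# Print the underlying device path as a column named "upath".
echo "upath=$VDEV_UPATH"
.Ed
.Pp
Running
.Nm zpool Cm iostat Fl c Ar upath
would then append the script's output as a new column.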
1680 .It Fl T Sy u Ns | Ns Sy d
1681 Display a time stamp.
1682 Specify
1683 .Sy u
1684 for a printed representation of the internal representation of time.
1685 See
1686 .Xr time 2 .
1687 Specify
1688 .Sy d
1689 for standard date format.
1690 See
1691 .Xr date 1 .
1692 .It Fl g
1693 Display vdev GUIDs instead of the normal device names. These GUIDs
1694 can be used in place of device names for the zpool
1695 detach/offline/remove/replace commands.
1696 .It Fl H
1697 Scripted mode. Do not display headers, and separate fields by a
1698 single tab instead of arbitrary space.
1699 .It Fl L
1700 Display real paths for vdevs resolving all symbolic links. This can
1701 be used to look up the current block device name regardless of the
1702 .Pa /dev/disk/
1703 path used to open it.
1704 .It Fl p
1705 Display numbers in parsable (exact) values. Time values are in
1706 nanoseconds.
1707 .It Fl P
1708 Display full paths for vdevs instead of only the last component of
1709 the path. This can be used in conjunction with the
1710 .Fl L
1711 flag.
1712 .It Fl r
Print request size histograms for the leaf ZIOs.
This includes histograms of individual ZIOs
.Pq Ar ind
and aggregate ZIOs
.Pq Ar agg .
1718 These stats can be useful for seeing how well the ZFS IO aggregator is
working.
Do not confuse these request size stats with the block layer requests; ZIOs
may be broken up before being sent to the block device.
1722 .It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1725 .It Fl y
1726 Omit statistics since boot.
1727 Normally the first line of output reports the statistics since boot.
1728 This option suppresses that first line of output.
1729 .It Fl w
1730 Display latency histograms:
1731 .Pp
1732 .Ar total_wait :
1733 Total IO time (queuing + disk IO time).
1734 .Ar disk_wait :
1735 Disk IO time (time reading/writing the disk).
1736 .Ar syncq_wait :
1737 Amount of time IO spent in synchronous priority queues. Does not include
1738 disk time.
1739 .Ar asyncq_wait :
1740 Amount of time IO spent in asynchronous priority queues. Does not include
1741 disk time.
1742 .Ar scrub :
1743 Amount of time IO spent in scrub queue. Does not include disk time.
1744 .It Fl l
1745 Include average latency statistics:
1746 .Pp
1747 .Ar total_wait :
1748 Average total IO time (queuing + disk IO time).
1749 .Ar disk_wait :
1750 Average disk IO time (time reading/writing the disk).
1751 .Ar syncq_wait :
1752 Average amount of time IO spent in synchronous priority queues. Does
1753 not include disk time.
1754 .Ar asyncq_wait :
1755 Average amount of time IO spent in asynchronous priority queues.
1756 Does not include disk time.
1757 .Ar scrub :
1758 Average queuing time in scrub queue. Does not include disk time.
1759 .It Fl q
Include active queue statistics.
Each priority queue has both pending
.Pq Ar pend
and active
.Pq Ar activ
IOs.
Pending IOs are waiting to
1766 be issued to the disk, and active IOs have been issued to disk and are
1767 waiting for completion. These stats are broken out by priority queue:
1768 .Pp
1769 .Ar syncq_read/write :
1770 Current number of entries in synchronous priority
1771 queues.
1772 .Ar asyncq_read/write :
1773 Current number of entries in asynchronous priority queues.
1774 .Ar scrubq_read :
1775 Current number of entries in scrub queue.
1776 .Pp
1777 All queue statistics are instantaneous measurements of the number of
1778 entries in the queues. If you specify an interval, the measurements
1779 will be sampled from the end of the interval.
1780 .El
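.Pp
For example, average latencies and queue occupancy might be watched together
on an illustratively-named pool with:
.Bd -literal
# zpool iostat -lqv tank 5
.Ed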
1781 .It Xo
1782 .Nm
1783 .Cm labelclear
1784 .Op Fl f
1785 .Ar device
1786 .Xc
1787 Removes ZFS label information from the specified
1788 .Ar device .
1789 The
1790 .Ar device
1791 must not be part of an active pool configuration.
1792 .Bl -tag -width Ds
1793 .It Fl f
1794 Treat exported or foreign devices as inactive.
1795 .El
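.Pp
For example, a stale label left on a device from an exported pool (the device
name is illustrative) might be cleared with:
.Bd -literal
# zpool labelclear -f /dev/sdc
.Ed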
1796 .It Xo
1797 .Nm
1798 .Cm list
1799 .Op Fl HgLpPv
1800 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1801 .Op Fl T Sy u Ns | Ns Sy d
1802 .Oo Ar pool Oc Ns ...
1803 .Op Ar interval Op Ar count
1804 .Xc
1805 Lists the given pools along with a health status and space usage.
1806 If no
1807 .Ar pool Ns s
1808 are specified, all pools in the system are listed.
1809 When given an
1810 .Ar interval ,
1811 the information is printed every
1812 .Ar interval
1813 seconds until ^C is pressed.
1814 If
1815 .Ar count
1816 is specified, the command exits after
1817 .Ar count
1818 reports are printed.
1819 .Bl -tag -width Ds
1820 .It Fl g
1821 Display vdev GUIDs instead of the normal device names. These GUIDs
1822 can be used in place of device names for the zpool
1823 detach/offline/remove/replace commands.
1824 .It Fl H
1825 Scripted mode.
1826 Do not display headers, and separate fields by a single tab instead of arbitrary
1827 space.
1828 .It Fl o Ar property
1829 Comma-separated list of properties to display.
1830 See the
1831 .Sx Properties
1832 section for a list of valid properties.
1833 The default list is
.Cm name , size , allocated , free , checkpoint , expandsize , fragmentation ,
1835 .Cm capacity , dedupratio , health , altroot .
1836 .It Fl L
1837 Display real paths for vdevs resolving all symbolic links. This can
1838 be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
1840 .It Fl p
1841 Display numbers in parsable
1842 .Pq exact
1843 values.
1844 .It Fl P
1845 Display full paths for vdevs instead of only the last component of
1846 the path. This can be used in conjunction with the
1847 .Fl L
1848 flag.
1849 .It Fl T Sy u Ns | Ns Sy d
1850 Display a time stamp.
1851 Specify
.Sy u
1853 for a printed representation of the internal representation of time.
1854 See
1855 .Xr time 2 .
1856 Specify
.Sy d
1858 for standard date format.
1859 See
1860 .Xr date 1 .
1861 .It Fl v
1862 Verbose statistics.
1863 Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1865 .El
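.Pp
For example, a script-friendly listing of just pool names and health might be
produced with:
.Bd -literal
# zpool list -H -o name,health
.Ed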
1866 .It Xo
1867 .Nm
1868 .Cm offline
1869 .Op Fl f
1870 .Op Fl t
1871 .Ar pool Ar device Ns ...
1872 .Xc
1873 Takes the specified physical device offline.
1874 While the
1875 .Ar device
1876 is offline, no attempt is made to read or write to the device.
1877 This command is not applicable to spares.
1878 .Bl -tag -width Ds
1879 .It Fl f
1880 Force fault. Instead of offlining the disk, put it into a faulted
1881 state. The fault will persist across imports unless the
1882 .Fl t
1883 flag was specified.
1884 .It Fl t
1885 Temporary.
1886 Upon reboot, the specified physical device reverts to its previous state.
1887 .El
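.Pp
For example, a disk could be taken offline only until the next reboot (pool
and device names are illustrative) with:
.Bd -literal
# zpool offline -t tank sdb
.Ed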
1888 .It Xo
1889 .Nm
1890 .Cm online
1891 .Op Fl e
1892 .Ar pool Ar device Ns ...
1893 .Xc
1894 Brings the specified physical device online.
1895 This command is not applicable to spares.
1896 .Bl -tag -width Ds
1897 .It Fl e
1898 Expand the device to use all available space.
1899 If the device is part of a mirror or raidz then all devices must be expanded
1900 before the new space will become available to the pool.
1901 .El
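.Pp
For example, after every disk in a mirror has been swapped for a larger one,
the new space might be claimed with (names illustrative):
.Bd -literal
# zpool online -e tank sda sdb
.Ed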
1902 .It Xo
1903 .Nm
1904 .Cm reguid
1905 .Ar pool
1906 .Xc
1907 Generates a new unique identifier for the pool.
1908 You must ensure that all devices in this pool are online and healthy before
1909 performing this action.
1910 .It Xo
1911 .Nm
1912 .Cm reopen
1913 .Op Fl n
1914 .Ar pool
1915 .Xc
1916 Reopen all the vdevs associated with the pool.
1917 .Bl -tag -width Ds
1918 .It Fl n
1919 Do not restart an in-progress scrub operation. This is not recommended and can
1920 result in partially resilvered devices unless a second scrub is performed.
1921 .El
1922 .It Xo
1923 .Nm
1924 .Cm remove
1925 .Op Fl np
1926 .Ar pool Ar device Ns ...
1927 .Xc
1928 Removes the specified device from the pool.
1929 This command supports removing hot spare, cache, log, and both mirrored and
1930 non-redundant primary top-level vdevs, including dedup and special vdevs.
When the primary pool storage includes a top-level raidz vdev, only hot
spare, cache, and log devices can be removed.
.Pp
1934 Removing a top-level vdev reduces the total amount of space in the storage pool.
1935 The specified device will be evacuated by copying all allocated space from it to
1936 the other devices in the pool.
1937 In this case, the
1938 .Nm zpool Cm remove
1939 command initiates the removal and returns, while the evacuation continues in
1940 the background.
1941 The removal progress can be monitored with
.Nm zpool Cm status .
1943 The
1944 .Sy device_removal
1945 feature flag must be enabled to remove a top-level vdev, see
1946 .Xr zpool-features 5 .
1947 .Pp
A mirrored top-level device (log or data) can be removed by specifying the
top-level mirror itself.
Non-log devices and data devices that are part of a mirrored configuration
can be removed using the
.Nm zpool Cm detach
command.
1954 .Bl -tag -width Ds
1955 .It Fl n
1956 Do not actually perform the removal ("no-op").
1957 Instead, print the estimated amount of memory that will be used by the
1958 mapping table after the removal completes.
1959 This is nonzero only for top-level vdevs.
1962 .It Fl p
1963 Used in conjunction with the
1964 .Fl n
1965 flag, displays numbers as parsable (exact) values.
1966 .El
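.Pp
For example, the memory cost of removing a top-level mirror (the vdev name is
illustrative) might be estimated before committing to the removal:
.Bd -literal
# zpool remove -np tank mirror-1
.Ed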
1967 .It Xo
1968 .Nm
1969 .Cm remove
1970 .Fl s
1971 .Ar pool
1972 .Xc
1973 Stops and cancels an in-progress removal of a top-level vdev.
1974 .It Xo
1975 .Nm
1976 .Cm replace
1977 .Op Fl f
1978 .Op Fl o Ar property Ns = Ns Ar value
.Ar pool Ar old_device Op Ar new_device
1980 .Xc
1981 Replaces
1982 .Ar old_device
1983 with
1984 .Ar new_device .
1985 This is equivalent to attaching
1986 .Ar new_device ,
1987 waiting for it to resilver, and then detaching
1988 .Ar old_device .
1989 .Pp
1990 The size of
1991 .Ar new_device
1992 must be greater than or equal to the minimum size of all the devices in a mirror
1993 or raidz configuration.
1994 .Pp
1995 .Ar new_device
1996 is required if the pool is not redundant.
1997 If
1998 .Ar new_device
1999 is not specified, it defaults to
2000 .Ar old_device .
2001 This form of replacement is useful after an existing disk has failed and has
2002 been physically replaced.
2003 In this case, the new disk may have the same
2004 .Pa /dev
2005 path as the old device, even though it is actually a different disk.
2006 ZFS recognizes this.
2007 .Bl -tag -width Ds
2008 .It Fl f
2009 Forces use of
2010 .Ar new_device ,
even if it appears to be in use.
2012 Not all devices can be overridden in this manner.
2013 .It Fl o Ar property Ns = Ns Ar value
2014 Sets the given pool properties. See the
2015 .Sx Properties
2016 section for a list of valid properties that can be set.
2017 The only property supported at the moment is
2018 .Sy ashift .
2019 .El
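.Pp
For example, after physically swapping a failed disk that reappears under the
same
.Pa /dev
path, it might be replaced in place with (names illustrative):
.Bd -literal
# zpool replace tank sda
.Ed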
2020 .It Xo
2021 .Nm
2022 .Cm scrub
2023 .Op Fl s | Fl p
2024 .Ar pool Ns ...
2025 .Xc
2026 Begins a scrub or resumes a paused scrub.
2027 The scrub examines all data in the specified pools to verify that it checksums
2028 correctly.
2029 For replicated
2030 .Pq mirror or raidz
2031 devices, ZFS automatically repairs any damage discovered during the scrub.
2032 The
2033 .Nm zpool Cm status
2034 command reports the progress of the scrub and summarizes the results of the
2035 scrub upon completion.
2036 .Pp
2037 Scrubbing and resilvering are very similar operations.
2038 The difference is that resilvering only examines data that ZFS knows to be out
2039 of date
2040 .Po
2041 for example, when attaching a new device to a mirror or replacing an existing
2042 device
2043 .Pc ,
2044 whereas scrubbing examines all data to discover silent errors due to hardware
2045 faults or disk failure.
2046 .Pp
2047 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
2048 one at a time.
2049 If a scrub is paused, the
2050 .Nm zpool Cm scrub
command resumes it.
2052 If a resilver is in progress, ZFS does not allow a scrub to be started until the
2053 resilver completes.
2054 .Bl -tag -width Ds
2055 .It Fl s
2056 Stop scrubbing.
2059 .It Fl p
2060 Pause scrubbing.
Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub,
the scrub remains paused, even after import, until it is resumed.
Once resumed, the scrub will pick up from the place where it was last
checkpointed to disk.
To resume a paused scrub, issue
.Nm zpool Cm scrub
again.
2069 .El
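.Pp
For example, a scrub might be started, paused, and later resumed on an
illustratively-named pool as follows:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed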
2070 .It Xo
2071 .Nm
2072 .Cm set
2073 .Ar property Ns = Ns Ar value
2074 .Ar pool
2075 .Xc
2076 Sets the given property on the specified pool.
2077 See the
2078 .Sx Properties
2079 section for more information on what properties can be set and acceptable
2080 values.
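.Pp
For example, a descriptive comment might be recorded on an
illustratively-named pool with:
.Bd -literal
# zpool set comment="main production pool" tank
.Ed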
2081 .It Xo
2082 .Nm
2083 .Cm split
2084 .Op Fl gLlnP
2085 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
2086 .Op Fl R Ar root
2087 .Ar pool newpool
2088 .Op Ar device ...
2089 .Xc
2090 Splits devices off
.Ar pool ,
creating
2093 .Ar newpool .
2094 All vdevs in
2095 .Ar pool
2096 must be mirrors and the pool must not be in the process of resilvering.
2097 At the time of the split,
2098 .Ar newpool
2099 will be a replica of
2100 .Ar pool .
2101 By default, the
2102 last device in each mirror is split from
2103 .Ar pool
2104 to create
2105 .Ar newpool .
2106 .Pp
The optional device specification causes the specified device(s) to be
included in
.Ar newpool .
Should any devices remain unspecified, the last device in each mirror is
used, as it would be by default.
2112 .Bl -tag -width Ds
2113 .It Fl g
2114 Display vdev GUIDs instead of the normal device names. These GUIDs
2115 can be used in place of device names for the zpool
2116 detach/offline/remove/replace commands.
2117 .It Fl L
2118 Display real paths for vdevs resolving all symbolic links. This can
2119 be used to look up the current block device name regardless of the
2120 .Pa /dev/disk/
2121 path used to open it.
2122 .It Fl l
2123 Indicates that this command will request encryption keys for all encrypted
2124 datasets it attempts to mount as it is bringing the new pool online. Note that
2125 if any datasets have a
2126 .Sy keylocation
2127 of
2128 .Sy prompt
2129 this command will block waiting for the keys to be entered. Without this flag
2130 encrypted datasets will be left unavailable until the keys are loaded.
2131 .It Fl n
Do a dry run; do not actually perform the split.
2133 Print out the expected configuration of
2134 .Ar newpool .
2135 .It Fl P
2136 Display full paths for vdevs instead of only the last component of
2137 the path. This can be used in conjunction with the
2138 .Fl L
2139 flag.
2140 .It Fl o Ar property Ns = Ns Ar value
2141 Sets the specified property for
2142 .Ar newpool .
2143 See the
2144 .Sx Properties
2145 section for more information on the available pool properties.
2146 .It Fl R Ar root
2147 Set
2148 .Sy altroot
2149 for
2150 .Ar newpool
2151 to
2152 .Ar root
2153 and automatically import it.
2154 .El
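.Pp
For example, the following sketch splits off one half of each mirror into a
new pool and imports it under an alternate root (all names illustrative):
.Bd -literal
# zpool split -R /backup tank tank2
.Ed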
2155 .It Xo
2156 .Nm
2157 .Cm status
2158 .Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2159 .Op Fl gLPvxD
2160 .Op Fl T Sy u Ns | Ns Sy d
2161 .Oo Ar pool Oc Ns ...
2162 .Op Ar interval Op Ar count
2163 .Xc
2164 Displays the detailed health status for the given pools.
2165 If no
2166 .Ar pool
2167 is specified, then the status of each pool in the system is displayed.
2168 For more information on pool and device health, see the
2169 .Sx Device Failure and Recovery
2170 section.
2171 .Pp
2172 If a scrub or resilver is in progress, this command reports the percentage done
2173 and the estimated time to completion.
2174 Both of these are only approximate, because the amount of data in the pool and
2175 the other workloads on the system can change.
2176 .Bl -tag -width Ds
2177 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2178 Run a script (or scripts) on each vdev and include the output as a new column
2179 in the
2180 .Nm zpool Cm status
2181 output. See the
2182 .Fl c
2183 option of
2184 .Nm zpool Cm iostat
2185 for complete details.
2186 .It Fl g
2187 Display vdev GUIDs instead of the normal device names. These GUIDs
2188 can be used in place of device names for the zpool
2189 detach/offline/remove/replace commands.
2190 .It Fl L
2191 Display real paths for vdevs resolving all symbolic links. This can
2192 be used to look up the current block device name regardless of the
2193 .Pa /dev/disk/
2194 path used to open it.
2195 .It Fl P
2196 Display full paths for vdevs instead of only the last component of
2197 the path. This can be used in conjunction with the
2198 .Fl L
2199 flag.
2200 .It Fl D
2201 Display a histogram of deduplication statistics, showing the allocated
2202 .Pq physically present on disk
2203 and referenced
2204 .Pq logically referenced in the pool
2205 block counts and sizes by reference count.
2206 .It Fl T Sy u Ns | Ns Sy d
2207 Display a time stamp.
2208 Specify
.Sy u
2210 for a printed representation of the internal representation of time.
2211 See
2212 .Xr time 2 .
2213 Specify
.Sy d
2215 for standard date format.
2216 See
2217 .Xr date 1 .
2218 .It Fl v
2219 Displays verbose data error information, printing out a complete list of all
2220 data errors since the last complete pool scrub.
2221 .It Fl x
2222 Only display status for pools that are exhibiting errors or are otherwise
2223 unavailable.
2224 Warnings about pools not using the latest on-disk format will not be included.
2225 .El
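.Pp
For example, a quick health check might be run as follows; the output shown
is the summary expected when no pool has problems:
.Bd -literal
# zpool status -x
all pools are healthy
.Ed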
2226 .It Xo
2227 .Nm
2228 .Cm sync
2229 .Op Ar pool ...
2230 .Xc
2231 This command forces all in-core dirty data to be written to the primary
2232 pool storage and not the ZIL. It will also update administrative
2233 information including quota reporting. Without arguments,
.Nm zpool Cm sync
2235 will sync all pools on the system. Otherwise, it will sync only the
2236 specified pool(s).
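.Pp
For example, dirty data for a single, illustratively-named pool might be
forced to stable storage with:
.Bd -literal
# zpool sync tank
.Ed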
2237 .It Xo
2238 .Nm
2239 .Cm upgrade
2240 .Xc
2241 Displays pools which do not have all supported features enabled and pools
2242 formatted using a legacy ZFS version number.
2243 These pools can continue to be used, but some features may not be available.
2244 Use
2245 .Nm zpool Cm upgrade Fl a
2246 to enable all features on all pools.
2247 .It Xo
2248 .Nm
2249 .Cm upgrade
2250 .Fl v
2251 .Xc
2252 Displays legacy ZFS versions supported by the current software.
2253 See
2254 .Xr zpool-features 5
for a description of the feature flags supported by the current software.
2256 .It Xo
2257 .Nm
2258 .Cm upgrade
2259 .Op Fl V Ar version
2260 .Fl a Ns | Ns Ar pool Ns ...
2261 .Xc
2262 Enables all supported features on the given pool.
2263 Once this is done, the pool will no longer be accessible on systems that do not
2264 support feature flags.
2265 See
.Xr zpool-features 5
2267 for details on compatibility with systems that support feature flags, but do not
2268 support all features enabled on the pool.
2269 .Bl -tag -width Ds
2270 .It Fl a
2271 Enables all supported features on all pools.
2272 .It Fl V Ar version
2273 Upgrade to the specified legacy version.
2274 If the
2275 .Fl V
2276 flag is specified, no features will be enabled on the pool.
2277 This option can only be used to increase the version number up to the last
2278 supported legacy version number.
2279 .El
2280 .El
2281 .Sh EXIT STATUS
2282 The following exit values are returned:
2283 .Bl -tag -width Ds
2284 .It Sy 0
2285 Successful completion.
2286 .It Sy 1
2287 An error occurred.
2288 .It Sy 2
2289 Invalid command line options were specified.
2290 .El
2291 .Sh EXAMPLES
2292 .Bl -tag -width Ds
2293 .It Sy Example 1 No Creating a RAID-Z Storage Pool
2294 The following command creates a pool with a single raidz root vdev that
2295 consists of six disks.
2296 .Bd -literal
2297 # zpool create tank raidz sda sdb sdc sdd sde sdf
2298 .Ed
2299 .It Sy Example 2 No Creating a Mirrored Storage Pool
2300 The following command creates a pool with two mirrors, where each mirror
2301 contains two disks.
2302 .Bd -literal
2303 # zpool create tank mirror sda sdb mirror sdc sdd
2304 .Ed
2305 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
2306 The following command creates an unmirrored pool using two disk partitions.
2307 .Bd -literal
2308 # zpool create tank sda1 sdb2
2309 .Ed
2310 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2311 The following command creates an unmirrored pool using files.
2312 While not recommended, a pool based on files can be useful for experimental
2313 purposes.
2314 .Bd -literal
2315 # zpool create tank /path/to/file/a /path/to/file/b
2316 .Ed
2317 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2318 The following command adds two mirrored disks to the pool
2319 .Em tank ,
2320 assuming the pool is already made up of two-way mirrors.
2321 The additional space is immediately available to any datasets within the pool.
2322 .Bd -literal
2323 # zpool add tank mirror sda sdb
2324 .Ed
2325 .It Sy Example 6 No Listing Available ZFS Storage Pools
2326 The following command lists all available pools on the system.
2327 In this case, the pool
2328 .Em zion
2329 is faulted due to a missing device.
2330 The results from this command are similar to the following:
2331 .Bd -literal
2332 # zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH   ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE   -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE   -
zion       -      -      -         -      -      -      -  FAULTED  -
2337 .Ed
2338 .It Sy Example 7 No Destroying a ZFS Storage Pool
2339 The following command destroys the pool
2340 .Em tank
2341 and any datasets contained within.
2342 .Bd -literal
2343 # zpool destroy -f tank
2344 .Ed
2345 .It Sy Example 8 No Exporting a ZFS Storage Pool
2346 The following command exports the devices in pool
2347 .Em tank
2348 so that they can be relocated or later imported.
2349 .Bd -literal
2350 # zpool export tank
2351 .Ed
2352 .It Sy Example 9 No Importing a ZFS Storage Pool
2353 The following command displays available pools, and then imports the pool
2354 .Em tank
2355 for use on the system.
2356 The results from this command are similar to the following:
2357 .Bd -literal
2358 # zpool import
   pool: tank
     id: 15451357997522795478
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

# zpool import tank
2371 .Ed
2372 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
2373 The following command upgrades all ZFS Storage pools to the current version of
2374 the software.
2375 .Bd -literal
2376 # zpool upgrade -a
2377 This system is currently running ZFS version 2.
2378 .Ed
2379 .It Sy Example 11 No Managing Hot Spares
2380 The following command creates a new pool with an available hot spare:
2381 .Bd -literal
2382 # zpool create tank mirror sda sdb spare sdc
2383 .Ed
2384 .Pp
2385 If one of the disks were to fail, the pool would be reduced to the degraded
2386 state.
2387 The failed device can be replaced using the following command:
2388 .Bd -literal
2389 # zpool replace tank sda sdd
2390 .Ed
2391 .Pp
2392 Once the data has been resilvered, the spare is automatically removed and is
2393 made available for use should another device fail.
2394 The hot spare can be permanently removed from the pool using the following
2395 command:
2396 .Bd -literal
2397 # zpool remove tank sdc
2398 .Ed
2399 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
2400 The following command creates a ZFS storage pool consisting of two, two-way
2401 mirrors and mirrored log devices:
2402 .Bd -literal
2403 # zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
      sde sdf
2405 .Ed
2406 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2407 The following command adds two disks for use as cache devices to a ZFS storage
2408 pool:
2409 .Bd -literal
2410 # zpool add pool cache sdc sdd
2411 .Ed
2412 .Pp
2413 Once added, the cache devices gradually fill with content from main memory.
2414 Depending on the size of your cache devices, it could take over an hour for
2415 them to fill.
2416 Capacity and reads can be monitored using the
2417 .Cm iostat
2418 option as follows:
2419 .Bd -literal
2420 # zpool iostat -v pool 5
2421 .Ed
2422 .It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
2423 The following commands remove the mirrored log device
2424 .Sy mirror-2
2425 and mirrored top-level data device
2426 .Sy mirror-1 .
2427 .Pp
2428 Given this configuration:
2429 .Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
2447 .Ed
2448 .Pp
2449 The command to remove the mirrored log
2450 .Sy mirror-2
2451 is:
2452 .Bd -literal
2453 # zpool remove tank mirror-2
2454 .Ed
2455 .Pp
2456 The command to remove the mirrored data
2457 .Sy mirror-1
2458 is:
2459 .Bd -literal
2460 # zpool remove tank mirror-1
2461 .Ed
2462 .It Sy Example 15 No Displaying expanded space on a device
2463 The following command displays the detailed information for the pool
2464 .Em data .
This pool is composed of a single raidz vdev where one of its devices
2466 increased its capacity by 10GB.
2467 In this example, the pool will not be able to utilize this extra capacity until
2468 all the devices under the raidz vdev have been expanded.
2469 .Bd -literal
2470 # zpool list -v data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
2477 .Ed
2478 .It Sy Example 16 No Adding output columns
2479 Additional columns can be added to the
2480 .Nm zpool Cm status
2481 and
2482 .Nm zpool Cm iostat
output with the
2484 .Fl c
2485 option.
2486 .Bd -literal
2487 # zpool status -c vendor,model,size
NAME        STATE   READ WRITE CKSUM  vendor   model         size
tank        ONLINE     0     0     0
  mirror-0  ONLINE     0     0     0
    U1      ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
    U10     ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
    U11     ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
    U12     ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
    U13     ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T
    U14     ONLINE     0     0     0  SEAGATE  ST8000NM0075  7.3T

# zpool iostat -vc slaves
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  slaves
----------  -----  -----  -----  -----  -----  -----  ---------
tank        20.4G  7.23T     26    152  20.7M  21.6M
  mirror    20.4G  7.23T     26    152  20.7M  21.6M
    U1          -      -      0     31  1.46K  20.6M  sdb sdff
    U10         -      -      0      1  3.77K  13.3K  sdas sdgw
    U11         -      -      0      1   288K  13.3K  sdat sdgx
    U12         -      -      0      1  78.4K  13.3K  sdau sdgy
    U13         -      -      0      1   128K  13.3K  sdav sdgz
    U14         -      -      0      1  63.2K  13.3K  sdfk sdg
2510 .Ed
2511 .El
2512 .Sh ENVIRONMENT VARIABLES
2513 .Bl -tag -width "ZFS_ABORT"
2514 .It Ev ZFS_ABORT
2515 Cause
2516 .Nm zpool
2517 to dump core on exit for the purposes of running
2518 .Sy ::findleaks .
2519 .El
2520 .Bl -tag -width "ZPOOL_IMPORT_PATH"
2521 .It Ev ZPOOL_IMPORT_PATH
2522 The search path for devices or files to use with the pool. This is a colon-separated list of directories in which
2523 .Nm zpool
2524 looks for device nodes and files.
2525 Similar to the
2526 .Fl d
2527 option in
2528 .Nm zpool import .
2529 .El
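.Pp
For example, the search might be restricted to stable by-id names (the paths
are illustrative) with:
.Bd -literal
# ZPOOL_IMPORT_PATH=/dev/disk/by-id:/dev zpool import
.Ed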
2530 .Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2531 .It Ev ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
This behavior is identical to the
.Nm zpool status -g
command line option.
2537 .El
2538 .Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2539 .It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2540 Cause
2541 .Nm zpool
2542 subcommands to follow links for vdev names by default. This behavior is identical to the
2543 .Nm zpool status -L
2544 command line option.
2545 .El
2546 .Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2547 .It Ev ZPOOL_VDEV_NAME_PATH
2548 Cause
2549 .Nm zpool
2550 subcommands to output full vdev path names by default. This
2551 behavior is identical to the
.Nm zpool status -P
2553 command line option.
2554 .El
2555 .Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2556 .It Ev ZFS_VDEV_DEVID_OPT_OUT
Older ZFS on Linux implementations had issues when attempting to display pool
config VDEV names if a
.Sy devid
NVP value was present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a
devid value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux-based pools.
2567 .Pp
2568 A pool can be stripped of any
2569 .Sy devid
2570 values on import or prevented from adding
2571 them on
2572 .Nm zpool create
2573 or
2574 .Nm zpool add
2575 by setting
2576 .Sy ZFS_VDEV_DEVID_OPT_OUT .
2577 .El
2578 .Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2579 .It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
2581 .Nm zpool status/iostat
2582 with the
2583 .Fl c
2584 option. Normally, only unprivileged users are allowed to run
2585 .Fl c .
2586 .El
2587 .Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2588 .It Ev ZPOOL_SCRIPTS_PATH
2589 The search path for scripts when running
2590 .Nm zpool status/iostat
2591 with the
2592 .Fl c
2593 option. This is a colon-separated list of directories and overrides the default
2594 .Pa ~/.zpool.d
2595 and
2596 .Pa /etc/zfs/zpool.d
2597 search paths.
2598 .El
2599 .Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2600 .It Ev ZPOOL_SCRIPTS_ENABLED
2601 Allow a user to run
2602 .Nm zpool status/iostat
2603 with the
2604 .Fl c
2605 option. If
2606 .Sy ZPOOL_SCRIPTS_ENABLED
2607 is not set, it is assumed that the user is allowed to run
2608 .Nm zpool status/iostat -c .
2609 .El
2610 .Sh INTERFACE STABILITY
2611 .Sy Evolving
2612 .Sh SEE ALSO
2613 .Xr zfs-events 5 ,
2614 .Xr zfs-module-parameters 5 ,
2615 .Xr zpool-features 5 ,
2616 .Xr zed 8 ,
2617 .Xr zfs 8