1 .\"
2 .\" CDDL HEADER START
3 .\"
4 .\" The contents of this file are subject to the terms of the
5 .\" Common Development and Distribution License (the "License").
6 .\" You may not use this file except in compliance with the License.
7 .\"
8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 .\" or http://www.opensolaris.org/os/licensing.
10 .\" See the License for the specific language governing permissions
11 .\" and limitations under the License.
12 .\"
13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 .\" If applicable, add the following below this CDDL HEADER, with the
16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
18 .\"
19 .\" CDDL HEADER END
20 .\"
21 .\"
22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23 .\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
24 .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
25 .\" Copyright (c) 2017 Datto Inc.
26 .\" Copyright (c) 2018 George Melikov. All Rights Reserved.
27 .\" Copyright 2017 Nexenta Systems, Inc.
28 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
29 .\"
30 .Dd November 29, 2018
31 .Dt ZPOOL 8 SMM
32 .Os Linux
33 .Sh NAME
34 .Nm zpool
35 .Nd configure ZFS storage pools
36 .Sh SYNOPSIS
37 .Nm
38 .Fl ?
39 .Nm
40 .Cm add
41 .Op Fl fgLnP
42 .Oo Fl o Ar property Ns = Ns Ar value Oc
43 .Ar pool vdev Ns ...
44 .Nm
45 .Cm attach
46 .Op Fl f
47 .Oo Fl o Ar property Ns = Ns Ar value Oc
48 .Ar pool device new_device
49 .Nm
50 .Cm checkpoint
51 .Op Fl d, -discard
52 .Ar pool
53 .Nm
54 .Cm clear
55 .Ar pool
56 .Op Ar device
57 .Nm
58 .Cm create
59 .Op Fl dfn
60 .Op Fl m Ar mountpoint
61 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
62 .Oo Fl o Ar feature@feature Ns = Ns Ar value Oc
63 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Nm
.Cm detach
.Ar pool device
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Oo Ar pool Oc Ns ...
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Nm
.Cm initialize
.Op Fl c | Fl s
.Ar pool
.Op Ar device Ns ...
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLnpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm offline
.Op Fl f
.Op Fl t
.Ar pool Ar device Ns ...
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Nm
.Cm reguid
.Ar pool
.Nm
.Cm reopen
.Op Fl n
.Ar pool
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Nm
.Cm remove
.Fl s
.Ar pool
.Nm
.Cm replace
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool Ar device Op Ar new_device
.Nm
.Cm resilver
.Ar pool Ns ...
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Nm
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Oo Ar device Oc Ns ...
.Nm
.Cm status
.Oo Fl c Ar SCRIPT Oc
.Op Fl DigLpPsvx
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm sync
.Oo Ar pool Oc Ns ...
.Nm
.Cm upgrade
.Nm
.Cm upgrade
.Fl v
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width Ds
.It Sy disk
A block device, typically located under
.Pa /dev .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
A disk can be specified by a full path, or it can be a shorthand name
.Po the relative portion of the path under
.Pa /dev
.Pc .
A whole disk can be specified by omitting the slice or partition designation.
For example,
.Pa sda
is equivalent to
.Pa /dev/sda .
When given a whole disk, ZFS automatically labels the disk, if necessary.
.It Sy file
A regular file.
The use of files as a backing store is strongly discouraged.
It is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part.
A file must be specified by a full path.
.It Sy mirror
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
failing before data integrity is compromised.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Qq write hole
.Pq in which data and parity become inconsistent after a power loss .
Data and parity is striped across all disks within a raidz group.
.Pp
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data.
The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group.
The
.Sy raidz
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
compromised.
The minimum number of devices in a raidz group is one more than the number of
parity disks.
The recommended number is between 3 and 9 to help increase performance.
.It Sy spare
A special pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
devices.
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
.Sx Intent Log
section.
.It Sy dedup
A device dedicated solely for allocating dedup data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool. If more than one dedup device is specified, then
allocations are load-balanced between devices.
.It Sy special
A device dedicated solely for allocating various kinds of internal metadata,
and optionally small file data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool. If more than one special device is specified, then
allocations are load-balanced between devices.
.Pp
For more information on special allocations, see the
.Sx Special Allocation Class
section.
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
The keywords
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror sda sdb mirror sdc sdd
.Ed
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported when a device was unavailable, then the device will be
identified by a unique identifier instead of its path since the path was never
correct in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror sda sdb spare sdc sdd
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
.Pp
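For example, assuming an otherwise unused disk
.Pa sde ,
a spare could be added to and later removed from a pool with:
.Bd -literal
# zpool add pool spare sde
# zpool remove pool sde
.Ed
.Pp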
If a pool has a shared spare that is currently being used, the pool cannot be
exported since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
.Pp
Spares cannot replace log devices.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 2
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool sda sdb log sdc
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
.Pp
Log devices can be added, replaced, attached, detached and removed. In
addition, log devices are imported and exported as part of the pool
that contains them.
Mirrored devices can be removed by specifying the top-level mirror vdev.
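.Pp
For example, a mirrored log could later be added to an existing pool (the
disk names here are illustrative):
.Bd -literal
# zpool add pool log mirror sdc sdd
.Ed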
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low-latency media.
Using cache devices provides the greatest performance improvement for random
read workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool sda sdb cache sdc sdd
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
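.Pp
Cache devices can likewise be added to and removed from a pool after
creation, for example (assuming an otherwise unused disk sde):
.Bd -literal
# zpool add pool cache sde
# zpool remove pool sde
.Ed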
.Ss Pool checkpoint
Before starting critical procedures that include destructive actions (e.g.
.Nm zfs Cm destroy
), an administrator can checkpoint the pool's state and in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
Otherwise, the checkpoint can be discarded when the procedure has completed
successfully.
.Pp
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
with care as it contains every part of the pool's state, from properties to vdev
configuration.
Thus, while a pool has a checkpoint certain operations are not allowed;
specifically, vdev removal/attach/detach, mirror splitting, and
changing the pool's guid.
Adding a new vdev is supported but in the case of a rewind it will have to be
added again.
Finally, users of this feature should keep in mind that scrubs in a pool that
has a checkpoint do not repair checkpointed data.
.Pp
To create a checkpoint for a pool:
.Bd -literal
# zpool checkpoint pool
.Ed
.Pp
To later rewind to its checkpointed state, you need to first export it and
then rewind it during import:
.Bd -literal
# zpool export pool
# zpool import --rewind-to-checkpoint pool
.Ed
.Pp
To discard the checkpoint from a pool:
.Bd -literal
# zpool checkpoint -d pool
.Ed
.Pp
Dataset reservations (controlled by the
.Nm reservation
or
.Nm refreservation
zfs properties) may be unenforceable while a checkpoint exists, because the
checkpoint is allowed to consume the dataset's reservation.
Finally, data that is part of the checkpoint but has been freed in the
current state of the pool won't be scanned during a scrub.
.Ss Special Allocation Class
The allocations in the special class are dedicated to specific block types.
By default this includes all metadata, the indirect blocks of user data, and
any dedup data. The class can also be provisioned to accept a limited
percentage of small file data blocks.
.Pp
A pool must always have at least one general (non-specified) vdev before
other devices can be assigned to the special class. If the special class
becomes full, then allocations intended for it will spill back into the
normal class.
.Pp
Dedup data can be excluded from the special class by setting the
.Sy zfs_ddt_data_is_special
zfs module parameter to false (0).
.Pp
Inclusion of small file blocks in the special class is opt-in. Each dataset
can control the size of small file blocks allowed in the special class by
setting the
.Sy special_small_blocks
dataset property. It defaults to zero so you must opt-in by setting it to a
non-zero value. See
.Xr zfs 8
for more info on setting this property.
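.Pp
As an illustrative sketch (the disk names and the dataset pool/fs are
hypothetical), a pool could be created with a mirrored special vdev and a
dataset opted in to storing small file blocks in it as follows:
.Bd -literal
# zpool create pool mirror sda sdb special mirror sdc sdd
# zfs set special_small_blocks=32K pool/fs
.Ed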
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Cm allocated
Amount of storage used within the pool.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy load_guid
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time the pool is loaded (i.e. it
does not persist across imports/exports) and never changes while the pool is loaded
(even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the special
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list. I/O operations will be aligned
to the specified size boundaries. Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space vs. performance trade-off. For optimal performance, the pool
sector size should be greater than or equal to the sector size of the
underlying disks. The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift=12
(which is 1<<12 = 4096). When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace). Changing this value will not modify any existing
vdev, not even on disk replacement; however it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in bad performance but at the
same time could prevent loss of data.
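.Pp
For example, a pool backed by disks that use 4KiB sectors but report 512B
sectors could be created with (disk names are illustrative):
.Bd -literal
# zpool create -o ashift=12 pool mirror sda sdb
.Ed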
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf. See the
.Xr vdev_id 8
man page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running. See the
.Xr zed 8
man page for more details.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool. This property is
expected to be set mainly by the installation and upgrade programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
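.Pp
For example, a comment could be set with:
.Bd -literal
# zpool set comment="Production pool" pool
.Ed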
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option. This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage. When this
property is on, periodic writes to storage occur to show the pool is in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 5
man page. In order to enable this property each host must set a unique hostid.
See
.Xr genhostid 1
.Xr zgenhostid 8
.Xr spl-module-parameters 5
for additional details. The default value is
.Sy off .
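.Pp
For example, on each host that can access the shared storage, one might run
(assuming no hostid has been set yet):
.Bd -literal
# zgenhostid
# zpool set multihost=on pool
.Ed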
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names. These GUIDs can be used in place of
device names for the zpool detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links. This can be used to look up the current block
device name regardless of the /dev/disk/ path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display real paths for
.Ar vdev Ns s
instead of only the last component of the path. This can be used in
conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is ashift.
.El
.It Xo
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
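.Pp
For example, a pool consisting of the single disk sda could be converted
into a two-way mirror by attaching a second disk (disk names are
illustrative):
.Bd -literal
# zpool attach pool sda sdb
.Ed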
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set. The only property
supported at the moment is ashift.
.El
.It Xo
.Nm
.Cm checkpoint
.Op Fl d, -discard
.Ar pool
.Xc
Checkpoints the current state of
.Ar pool
, which can be later restored by
.Nm zpool Cm import --rewind-to-checkpoint .
The existence of a checkpoint in a pool prohibits the following
.Nm zpool
commands:
.Cm remove ,
.Cm attach ,
.Cm detach ,
.Cm split ,
and
.Cm reguid .
In addition, it may break reservation boundaries if the pool lacks free
space.
The
.Nm zpool Cm status
command indicates the existence of a checkpoint or the progress of discarding a
checkpoint from a pool.
The
.Nm zpool Cm list
command reports how much space the checkpoint takes from the pool.
.Bl -tag -width Ds
.It Fl d, -discard
Discards an existing checkpoint from
.Ar pool .
.El
.It Xo
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices is specified, only those errors associated with the
specified device or devices are cleared.
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tname
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy \&- ,
colon
.Pq Qq Sy \&: ,
space
.Pq Qq Sy \&\ ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with
.Sy mirror ,
.Sy raidz ,
.Sy spare ,
and the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command verifies that each device specified is accessible and not currently
in use by another subsystem.
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 5
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature.
See
.Xr zpool-features 5
for a list of valid features that can be set.
Value can be either disabled or enabled.
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tname
Sets the in-core pool name to
.Sy tname
while the on-disk name will be the name specified as the pool name
.Sy pool .
This will set the default cachefile property to none. This is intended
to handle name space collisions when creating pools for other systems,
such as virtual machines or physical machines whose pools live on network
block devices.
.El
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
If the device may be re-added to the pool later on, consider the
.Nm zpool Cm offline
command instead.
.It Xo
.Nm
.Cm events
.Op Fl vHf Oo Ar pool Oc | Fl c
.Xc
Lists all recent events generated by the ZFS kernel modules. These events
are consumed by the
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare. For more information about the subclasses and event payloads
that can be generated see the
.Xr zfs-events 5
man page.
.Bl -tag -width Ds
.It Fl c
Clear all previous events.
.It Fl f
Follow mode.
.It Fl H
Scripted mode. Do not display headers, and separate fields by a
single tab instead of arbitrary space.
.It Fl v
Print the entire payload for each event.
.El
.It Xo
.Nm
.Cm export
.Op Fl a
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just partitions, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl a
Exports all pools imported on the system.
.It Fl f
Forcefully unmount all datasets, using the
.Nm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Oo Ar pool Oc Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
name        Name of storage pool
property    Property name
value       Property value
source      Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
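.Pp
For example, a single property could be retrieved in a script-friendly form
with:
.Bd -literal
# zpool get -H -o value health pool
.Ed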
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir Ns | Ns device
.Xc
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, as well as
the vdev layout and current health of the device for each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
.It Xo
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Xc
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online. Note that if
any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered. Without this flag
encrypted datasets will be left unavailable until the keys are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl -rewind-to-checkpoint
Rewinds pool to the checkpointed state.
Once the pool is imported with this flag there is no way to undo the rewind.
All changes and data that were written after the checkpoint are lost!
The only exception is when the
.Sy readonly
mounting option is enabled.
In this case, the checkpointed state of the pool is opened and an
administrator can see what the pool would look like if they were
to fully rewind.
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback. Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.El
.It Xo
.Nm
.Cm import
.Op Fl Dflm
.Op Fl F Oo Fl n Oc Oo Fl t Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl s
.Ar pool Ns | Ns Ar id
.Op Ar newpool
.Xc
Imports a specific pool.
A pool can be identified by its name or the numeric identifier.
If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active.
It cannot be determined if this was a failed export, or whether the device is
really in use from another host.
To import a pool in this state, the
.Fl f
option is required.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pool.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online. Note that if
any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered. Without this flag
encrypted datasets will be left unavailable until the keys are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted. A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option. Determines whether extreme
measures to find a valid txg should take place. This allows the pool to
be rolled back to a txg which is no longer guaranteed to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable
checksum errors. For more details about pool recovery mode, see the
.Fl F
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback. Implies
.Fl FX .
For more details
about pool recovery mode, see the
.Fl X
option, above. WARNING: This option can be extremely hazardous to the
health of your pool and should only be used as a last resort.
.It Fl t
Used with
.Sy newpool .
Specifies that
.Sy newpool
is temporary.
Temporary pool names last until export.
This ensures that the original pool name will be used in all label updates and
therefore is retained upon export.
This option will also set
.Fl o Sy cachefile Ns = Ns Sy none
when not explicitly specified.
.El
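.Pp
For example, an exported pool could be imported under the new (illustrative)
name tank with:
.Bd -literal
# zpool import pool tank
.Ed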
.It Xo
.Nm
.Cm initialize
.Op Fl c | Fl s
.Ar pool
.Op Ar device Ns ...
.Xc
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
Only leaf data or log devices may be initialized.
.Bl -tag -width Ds
.It Fl c, -cancel
Cancel initializing on the specified devices, or all eligible devices if none
are specified.
If one or more target devices are invalid or are not currently being
initialized, the command will fail and no cancellation will occur on any device.
.It Fl s, -suspend
1621 Suspend initializing on the specified devices, or all eligible devices if none
1622 are specified.
1623 If one or more target devices are invalid or are not currently being
1624 initialized, the command will fail and no suspension will occur on any device.
1625 Initializing can then be resumed by running
1626 .Nm zpool Cm initialize
1627 with no flags on the relevant target devices.
1628 .El
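.Pp
For example, the following starts initializing a single device in pool
.Em tank ,
suspends it, and later resumes it (pool and device names are illustrative):
.Bd -literal
# zpool initialize tank sda
# zpool initialize -s tank sda
# zpool initialize tank sda
.Ed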
1629 .It Xo
1630 .Nm
1631 .Cm iostat
1632 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
1633 .Op Fl T Sy u Ns | Ns Sy d
1634 .Op Fl ghHLnpPvy
1635 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
1636 .Op Ar interval Op Ar count
1637 .Xc
1638 Displays I/O statistics for the given pools/vdevs. You can pass in a
1639 list of pools, a pool and list of vdevs in that pool, or a list of any
1640 vdevs from any pool. If no items are specified, statistics for every
1641 pool in the system are shown.
1642 When given an
1643 .Ar interval ,
1644 the statistics are printed every
1645 .Ar interval
seconds until ^C is pressed.
If the
.Fl n
flag is specified, the headers are displayed only once; otherwise they are
displayed periodically.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always
1651 the statistics since boot regardless of whether
1652 .Ar interval
1653 and
1654 .Ar count
1655 are passed. However, this behavior can be suppressed with the
1656 .Fl y
1657 flag. Also note that the units of
1658 .Sy K ,
1659 .Sy M ,
1660 .Sy G ...
1661 that are printed in the report are in base 1024. To get the raw
1662 values, use the
1663 .Fl p
1664 flag.
1665 .Bl -tag -width Ds
1666 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1667 Run a script (or scripts) on each vdev and include the output as a new column
1668 in the
1669 .Nm zpool Cm iostat
1670 output. Users can run any script found in their
1671 .Pa ~/.zpool.d
1672 directory or from the system
1673 .Pa /etc/zfs/zpool.d
1674 directory. Script names containing the slash (/) character are not allowed.
1675 The default search path can be overridden by setting the
1676 ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
1677 .Fl c
1678 if they have the ZPOOL_SCRIPTS_AS_ROOT
1679 environment variable set. If a script requires the use of a privileged
1680 command, like
1681 .Xr smartctl 8 ,
1682 then it's recommended you allow the user access to it in
1683 .Pa /etc/sudoers
1684 or add the user to the
1685 .Pa /etc/sudoers.d/zfs
1686 file.
1687 .Pp
1688 If
1689 .Fl c
1690 is passed without a script name, it prints a list of all scripts.
1691 .Fl c
1692 also sets verbose mode
1693 .No \&( Ns Fl v Ns No \&).
1694 .Pp
1695 Script output should be in the form of "name=value". The column name is
1696 set to "name" and the value is set to "value". Multiple lines can be
1697 used to output multiple columns. The first line of output not in the
1698 "name=value" format is displayed without a column title, and no more
1699 output after that is displayed. This can be useful for printing error
1700 messages. Blank or NULL values are printed as a '-' to make output
1701 awk-able.
1702 .Pp
1703 The following environment variables are set before running each script:
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_PATH
Full path to the vdev.
.It Sy VDEV_UPATH
Underlying path to the vdev
.Pq Pa /dev/sd* .
For use with device mapper, multipath, or partitioned vdevs.
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
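.Pp
A minimal example script, assuming it is installed as an executable file
named
.Pa upath
in one of the search directories (the script name is illustrative):
.Bd -literal
#!/bin/sh
# Print one custom column using the documented VDEV_UPATH variable.
echo "upath=$VDEV_UPATH"
.Ed
.Pp
Running
.Nm zpool Cm iostat Fl c Ar upath
would then add a column named "upath" to the output.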
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for the internal representation of time, printed as seconds since the Epoch.
See
.Xr time 2 .
Specify
.Sy d
for the standard date format.
See
.Xr date 1 .
1729 .It Fl g
1730 Display vdev GUIDs instead of the normal device names. These GUIDs
1731 can be used in place of device names for the zpool
1732 detach/offline/remove/replace commands.
1733 .It Fl H
1734 Scripted mode. Do not display headers, and separate fields by a
1735 single tab instead of arbitrary space.
1736 .It Fl L
1737 Display real paths for vdevs resolving all symbolic links. This can
1738 be used to look up the current block device name regardless of the
1739 .Pa /dev/disk/
1740 path used to open it.
1741 .It Fl n
Print headers only once, instead of printing them periodically.
1743 .It Fl p
1744 Display numbers in parsable (exact) values. Time values are in
1745 nanoseconds.
1746 .It Fl P
1747 Display full paths for vdevs instead of only the last component of
1748 the path. This can be used in conjunction with the
1749 .Fl L
1750 flag.
.It Fl r
Print request size histograms for the leaf ZIOs.
This includes histograms of individual ZIOs
.Pq Ar ind
and aggregate ZIOs
.Pq Ar agg .
1757 These stats can be useful for seeing how well the ZFS IO aggregator is
1758 working. Do not confuse these request size stats with the block layer
1759 requests; it's possible ZIOs can be broken up before being sent to the
1760 block device.
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
.It Fl y
Omit statistics since boot.
Normally the first line of output reports the statistics since boot.
This option suppresses that first line of output.
1769 .It Fl w
1770 Display latency histograms:
1771 .Pp
1772 .Ar total_wait :
1773 Total IO time (queuing + disk IO time).
1774 .Ar disk_wait :
1775 Disk IO time (time reading/writing the disk).
1776 .Ar syncq_wait :
1777 Amount of time IO spent in synchronous priority queues. Does not include
1778 disk time.
1779 .Ar asyncq_wait :
1780 Amount of time IO spent in asynchronous priority queues. Does not include
1781 disk time.
1782 .Ar scrub :
1783 Amount of time IO spent in scrub queue. Does not include disk time.
1784 .It Fl l
1785 Include average latency statistics:
1786 .Pp
1787 .Ar total_wait :
1788 Average total IO time (queuing + disk IO time).
1789 .Ar disk_wait :
1790 Average disk IO time (time reading/writing the disk).
1791 .Ar syncq_wait :
1792 Average amount of time IO spent in synchronous priority queues. Does
1793 not include disk time.
1794 .Ar asyncq_wait :
1795 Average amount of time IO spent in asynchronous priority queues.
1796 Does not include disk time.
1797 .Ar scrub :
1798 Average queuing time in scrub queue. Does not include disk time.
.It Fl q
Include active queue statistics.
Each priority queue has both pending
.Pq Ar pend
and active
.Pq Ar activ
1805 IOs. Pending IOs are waiting to
1806 be issued to the disk, and active IOs have been issued to disk and are
1807 waiting for completion. These stats are broken out by priority queue:
1808 .Pp
1809 .Ar syncq_read/write :
1810 Current number of entries in synchronous priority
1811 queues.
1812 .Ar asyncq_read/write :
1813 Current number of entries in asynchronous priority queues.
1814 .Ar scrubq_read :
1815 Current number of entries in scrub queue.
1816 .Pp
1817 All queue statistics are instantaneous measurements of the number of
1818 entries in the queues. If you specify an interval, the measurements
1819 will be sampled from the end of the interval.
1820 .El
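.Pp
For example, the following prints per-vdev statistics with average latencies
for pool
.Em tank
every 5 seconds (the pool name is illustrative):
.Bd -literal
# zpool iostat -vl tank 5
.Ed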
1821 .It Xo
1822 .Nm
1823 .Cm labelclear
1824 .Op Fl f
1825 .Ar device
1826 .Xc
1827 Removes ZFS label information from the specified
1828 .Ar device .
1829 The
1830 .Ar device
1831 must not be part of an active pool configuration.
1832 .Bl -tag -width Ds
1833 .It Fl f
1834 Treat exported or foreign devices as inactive.
1835 .El
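.Pp
For example, to clear the label from a disk that belonged to an exported pool
(the device name is illustrative):
.Bd -literal
# zpool labelclear -f /dev/sda
.Ed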
1836 .It Xo
1837 .Nm
1838 .Cm list
1839 .Op Fl HgLpPv
1840 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1841 .Op Fl T Sy u Ns | Ns Sy d
1842 .Oo Ar pool Oc Ns ...
1843 .Op Ar interval Op Ar count
1844 .Xc
1845 Lists the given pools along with a health status and space usage.
1846 If no
1847 .Ar pool Ns s
1848 are specified, all pools in the system are listed.
1849 When given an
1850 .Ar interval ,
1851 the information is printed every
1852 .Ar interval
1853 seconds until ^C is pressed.
1854 If
1855 .Ar count
1856 is specified, the command exits after
1857 .Ar count
1858 reports are printed.
1859 .Bl -tag -width Ds
1860 .It Fl g
1861 Display vdev GUIDs instead of the normal device names. These GUIDs
1862 can be used in place of device names for the zpool
1863 detach/offline/remove/replace commands.
1864 .It Fl H
1865 Scripted mode.
1866 Do not display headers, and separate fields by a single tab instead of arbitrary
1867 space.
1868 .It Fl o Ar property
1869 Comma-separated list of properties to display.
1870 See the
1871 .Sx Properties
1872 section for a list of valid properties.
1873 The default list is
.Cm name , size , allocated , free , checkpoint , expandsize , fragmentation ,
.Cm capacity , dedupratio , health , altroot .
1876 .It Fl L
1877 Display real paths for vdevs resolving all symbolic links. This can
1878 be used to look up the current block device name regardless of the
1879 /dev/disk/ path used to open it.
1880 .It Fl p
1881 Display numbers in parsable
1882 .Pq exact
1883 values.
1884 .It Fl P
1885 Display full paths for vdevs instead of only the last component of
1886 the path. This can be used in conjunction with the
1887 .Fl L
1888 flag.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for the internal representation of time, printed as seconds since the Epoch.
See
.Xr time 2 .
Specify
.Sy d
for the standard date format.
See
.Xr date 1 .
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1905 .El
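.Pp
For example, to list the name, size, and health of every pool in a
script-friendly form:
.Bd -literal
# zpool list -H -o name,size,health
.Ed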
1906 .It Xo
1907 .Nm
1908 .Cm offline
1909 .Op Fl f
1910 .Op Fl t
1911 .Ar pool Ar device Ns ...
1912 .Xc
1913 Takes the specified physical device offline.
1914 While the
1915 .Ar device
1916 is offline, no attempt is made to read or write to the device.
1917 This command is not applicable to spares.
1918 .Bl -tag -width Ds
1919 .It Fl f
1920 Force fault. Instead of offlining the disk, put it into a faulted
1921 state. The fault will persist across imports unless the
1922 .Fl t
1923 flag was specified.
1924 .It Fl t
1925 Temporary.
1926 Upon reboot, the specified physical device reverts to its previous state.
1927 .El
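.Pp
For example, to take a disk offline until the next reboot (pool and device
names are illustrative):
.Bd -literal
# zpool offline -t tank sda
.Ed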
1928 .It Xo
1929 .Nm
1930 .Cm online
1931 .Op Fl e
1932 .Ar pool Ar device Ns ...
1933 .Xc
1934 Brings the specified physical device online.
1935 This command is not applicable to spares.
1936 .Bl -tag -width Ds
1937 .It Fl e
1938 Expand the device to use all available space.
1939 If the device is part of a mirror or raidz then all devices must be expanded
1940 before the new space will become available to the pool.
1941 .El
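.Pp
For example, to bring a disk online and expand it to use all available space
(pool and device names are illustrative):
.Bd -literal
# zpool online -e tank sda
.Ed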
1942 .It Xo
1943 .Nm
1944 .Cm reguid
1945 .Ar pool
1946 .Xc
1947 Generates a new unique identifier for the pool.
1948 You must ensure that all devices in this pool are online and healthy before
1949 performing this action.
1950 .It Xo
1951 .Nm
1952 .Cm reopen
1953 .Op Fl n
1954 .Ar pool
1955 .Xc
1956 Reopen all the vdevs associated with the pool.
1957 .Bl -tag -width Ds
1958 .It Fl n
1959 Do not restart an in-progress scrub operation. This is not recommended and can
1960 result in partially resilvered devices unless a second scrub is performed.
1961 .El
1962 .It Xo
1963 .Nm
1964 .Cm remove
1965 .Op Fl np
1966 .Ar pool Ar device Ns ...
1967 .Xc
1968 Removes the specified device from the pool.
1969 This command supports removing hot spare, cache, log, and both mirrored and
1970 non-redundant primary top-level vdevs, including dedup and special vdevs.
When the primary pool storage includes a top-level raidz vdev, only hot spare,
cache, and log devices can be removed.
.Pp
1974 Removing a top-level vdev reduces the total amount of space in the storage pool.
1975 The specified device will be evacuated by copying all allocated space from it to
1976 the other devices in the pool.
1977 In this case, the
1978 .Nm zpool Cm remove
1979 command initiates the removal and returns, while the evacuation continues in
1980 the background.
1981 The removal progress can be monitored with
1982 .Nm zpool Cm status .
If an IO error is encountered during the removal process, it will be
cancelled.
The
1985 .Sy device_removal
1986 feature flag must be enabled to remove a top-level vdev, see
1987 .Xr zpool-features 5 .
1988 .Pp
A mirrored top-level device (log or data) can be removed by specifying the
top-level mirror itself.
Non-log devices or data devices that are part of a mirrored configuration can
be removed using the
1993 .Nm zpool Cm detach
1994 command.
.Bl -tag -width Ds
.It Fl n
Do not actually perform the removal ("no-op").
Instead, print the estimated amount of memory that will be used by the
mapping table after the removal completes.
This is nonzero only for top-level vdevs.
.It Fl p
Used in conjunction with the
.Fl n
flag, displays numbers as parsable (exact) values.
.El
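.Pp
For example, to print the estimated memory cost of removing a top-level
mirror without performing the removal (pool and vdev names are illustrative):
.Bd -literal
# zpool remove -np tank mirror-1
.Ed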
2008 .It Xo
2009 .Nm
2010 .Cm remove
2011 .Fl s
2012 .Ar pool
2013 .Xc
2014 Stops and cancels an in-progress removal of a top-level vdev.
2015 .It Xo
2016 .Nm
2017 .Cm replace
2018 .Op Fl f
2019 .Op Fl o Ar property Ns = Ns Ar value
2020 .Ar pool Ar device Op Ar new_device
2021 .Xc
Replaces
.Ar device
with
.Ar new_device .
This is equivalent to attaching
.Ar new_device ,
waiting for it to resilver, and then detaching
.Ar device .
2030 .Pp
2031 The size of
2032 .Ar new_device
2033 must be greater than or equal to the minimum size of all the devices in a mirror
2034 or raidz configuration.
2035 .Pp
2036 .Ar new_device
2037 is required if the pool is not redundant.
If
.Ar new_device
is not specified, it defaults to
.Ar device .
2042 This form of replacement is useful after an existing disk has failed and has
2043 been physically replaced.
2044 In this case, the new disk may have the same
2045 .Pa /dev
2046 path as the old device, even though it is actually a different disk.
2047 ZFS recognizes this.
2048 .Bl -tag -width Ds
2049 .It Fl f
2050 Forces use of
2051 .Ar new_device ,
even if it appears to be in use.
2053 Not all devices can be overridden in this manner.
2054 .It Fl o Ar property Ns = Ns Ar value
2055 Sets the given pool properties. See the
2056 .Sx Properties
2057 section for a list of valid properties that can be set.
2058 The only property supported at the moment is
2059 .Sy ashift .
2060 .El
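.Pp
For example, to replace a disk while forcing a 4096-byte sector size on the
new device (pool and device names are illustrative):
.Bd -literal
# zpool replace -o ashift=12 tank sda sdb
.Ed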
2061 .It Xo
2062 .Nm
2063 .Cm scrub
2064 .Op Fl s | Fl p
2065 .Ar pool Ns ...
2066 .Xc
2067 Begins a scrub or resumes a paused scrub.
2068 The scrub examines all data in the specified pools to verify that it checksums
2069 correctly.
2070 For replicated
2071 .Pq mirror or raidz
2072 devices, ZFS automatically repairs any damage discovered during the scrub.
2073 The
2074 .Nm zpool Cm status
2075 command reports the progress of the scrub and summarizes the results of the
2076 scrub upon completion.
2077 .Pp
2078 Scrubbing and resilvering are very similar operations.
2079 The difference is that resilvering only examines data that ZFS knows to be out
2080 of date
2081 .Po
2082 for example, when attaching a new device to a mirror or replacing an existing
2083 device
2084 .Pc ,
2085 whereas scrubbing examines all data to discover silent errors due to hardware
2086 faults or disk failure.
2087 .Pp
2088 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
2089 one at a time.
If a scrub is paused, the
.Nm zpool Cm scrub
command resumes it.
2093 If a resilver is in progress, ZFS does not allow a scrub to be started until the
2094 resilver completes.
.Bl -tag -width Ds
.It Fl s
Stop scrubbing.
.It Fl p
Pause scrubbing.
Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub,
the scrub will remain paused until it is resumed, even after import.
Once resumed, the scrub will pick up from the place where it was last
checkpointed to disk.
To resume a paused scrub, issue
.Nm zpool Cm scrub
again.
.El
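.Pp
For example, to pause an in-progress scrub of pool
.Em tank
and resume it later (the pool name is illustrative):
.Bd -literal
# zpool scrub -p tank
# zpool scrub tank
.Ed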
2111 .It Xo
2112 .Nm
2113 .Cm resilver
2114 .Ar pool Ns ...
2115 .Xc
Starts a resilver.
If an existing resilver is already running, it will be restarted from the
beginning.
Any drives that were scheduled for a deferred resilver will be added to the
new one.
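.Pp
For example, to restart resilvering on pool
.Em tank
(the pool name is illustrative):
.Bd -literal
# zpool resilver tank
.Ed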
2119 .It Xo
2120 .Nm
2121 .Cm set
2122 .Ar property Ns = Ns Ar value
2123 .Ar pool
2124 .Xc
2125 Sets the given property on the specified pool.
2126 See the
2127 .Sx Properties
2128 section for more information on what properties can be set and acceptable
2129 values.
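.Pp
For example, to set the
.Sy comment
property on pool
.Em tank
(the pool name and comment text are illustrative):
.Bd -literal
# zpool set comment="main storage pool" tank
.Ed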
2130 .It Xo
2131 .Nm
2132 .Cm split
2133 .Op Fl gLlnP
2134 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
2135 .Op Fl R Ar root
2136 .Ar pool newpool
2137 .Op Ar device ...
2138 .Xc
Splits devices off
.Ar pool
to create
.Ar newpool .
2143 All vdevs in
2144 .Ar pool
2145 must be mirrors and the pool must not be in the process of resilvering.
2146 At the time of the split,
2147 .Ar newpool
2148 will be a replica of
2149 .Ar pool .
2150 By default, the
2151 last device in each mirror is split from
2152 .Ar pool
2153 to create
2154 .Ar newpool .
2155 .Pp
The optional device specification causes the specified device(s) to be
included in
.Ar newpool ;
for any mirrors left unspecified, the last device in each is used, as by
default.
2161 .Bl -tag -width Ds
2162 .It Fl g
2163 Display vdev GUIDs instead of the normal device names. These GUIDs
2164 can be used in place of device names for the zpool
2165 detach/offline/remove/replace commands.
2166 .It Fl L
2167 Display real paths for vdevs resolving all symbolic links. This can
2168 be used to look up the current block device name regardless of the
2169 .Pa /dev/disk/
2170 path used to open it.
2171 .It Fl l
2172 Indicates that this command will request encryption keys for all encrypted
2173 datasets it attempts to mount as it is bringing the new pool online. Note that
2174 if any datasets have a
2175 .Sy keylocation
2176 of
2177 .Sy prompt
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys
are loaded.
2180 .It Fl n
Do a dry run; do not actually perform the split.
2182 Print out the expected configuration of
2183 .Ar newpool .
2184 .It Fl P
2185 Display full paths for vdevs instead of only the last component of
2186 the path. This can be used in conjunction with the
2187 .Fl L
2188 flag.
2189 .It Fl o Ar property Ns = Ns Ar value
2190 Sets the specified property for
2191 .Ar newpool .
2192 See the
2193 .Sx Properties
2194 section for more information on the available pool properties.
2195 .It Fl R Ar root
2196 Set
2197 .Sy altroot
2198 for
2199 .Ar newpool
2200 to
2201 .Ar root
2202 and automatically import it.
2203 .El
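.Pp
For example, to preview the configuration that would result from splitting
pool
.Em tank
(the pool names are illustrative):
.Bd -literal
# zpool split -n tank tank2
.Ed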
2204 .It Xo
2205 .Nm
2206 .Cm status
2207 .Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2208 .Op Fl DigLpPsvx
2209 .Op Fl T Sy u Ns | Ns Sy d
2210 .Oo Ar pool Oc Ns ...
2211 .Op Ar interval Op Ar count
2212 .Xc
2213 Displays the detailed health status for the given pools.
2214 If no
2215 .Ar pool
2216 is specified, then the status of each pool in the system is displayed.
2217 For more information on pool and device health, see the
2218 .Sx Device Failure and Recovery
2219 section.
2220 .Pp
2221 If a scrub or resilver is in progress, this command reports the percentage done
2222 and the estimated time to completion.
2223 Both of these are only approximate, because the amount of data in the pool and
2224 the other workloads on the system can change.
2225 .Bl -tag -width Ds
2226 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2227 Run a script (or scripts) on each vdev and include the output as a new column
2228 in the
2229 .Nm zpool Cm status
2230 output. See the
2231 .Fl c
2232 option of
2233 .Nm zpool Cm iostat
2234 for complete details.
2235 .It Fl i
2236 Display vdev initialization status.
2237 .It Fl g
2238 Display vdev GUIDs instead of the normal device names. These GUIDs
2239 can be used in place of device names for the zpool
2240 detach/offline/remove/replace commands.
2241 .It Fl L
2242 Display real paths for vdevs resolving all symbolic links. This can
2243 be used to look up the current block device name regardless of the
2244 .Pa /dev/disk/
2245 path used to open it.
2246 .It Fl p
2247 Display numbers in parsable (exact) values.
2248 .It Fl P
2249 Display full paths for vdevs instead of only the last component of
2250 the path. This can be used in conjunction with the
2251 .Fl L
2252 flag.
2253 .It Fl D
2254 Display a histogram of deduplication statistics, showing the allocated
2255 .Pq physically present on disk
2256 and referenced
2257 .Pq logically referenced in the pool
2258 block counts and sizes by reference count.
.It Fl s
Display the number of leaf VDEV slow IOs.
This is the number of IOs that did not complete in
.Sy zio_slow_io_ms
milliseconds (default 30 seconds).
This does not necessarily mean the IOs failed to complete, only that they
took an unreasonably long time, which may indicate a problem with the
underlying storage.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for the internal representation of time, printed as seconds since the Epoch.
See
.Xr time 2 .
Specify
.Sy d
for the standard date format.
See
.Xr date 1 .
2277 .It Fl v
2278 Displays verbose data error information, printing out a complete list of all
2279 data errors since the last complete pool scrub.
2280 .It Fl x
2281 Only display status for pools that are exhibiting errors or are otherwise
2282 unavailable.
2283 Warnings about pools not using the latest on-disk format will not be included.
2284 .El
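.Pp
For example, to display status only for pools that are exhibiting errors or
are otherwise unavailable:
.Bd -literal
# zpool status -x
.Ed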
2285 .It Xo
2286 .Nm
2287 .Cm sync
2288 .Op Ar pool ...
2289 .Xc
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information, including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
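.Pp
For example, to sync only the pool
.Em tank
(the pool name is illustrative):
.Bd -literal
# zpool sync tank
.Ed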
2296 .It Xo
2297 .Nm
2298 .Cm upgrade
2299 .Xc
2300 Displays pools which do not have all supported features enabled and pools
2301 formatted using a legacy ZFS version number.
2302 These pools can continue to be used, but some features may not be available.
2303 Use
2304 .Nm zpool Cm upgrade Fl a
2305 to enable all features on all pools.
2306 .It Xo
2307 .Nm
2308 .Cm upgrade
2309 .Fl v
2310 .Xc
2311 Displays legacy ZFS versions supported by the current software.
2312 See
2313 .Xr zpool-features 5
for a description of the feature flags supported by the current software.
2315 .It Xo
2316 .Nm
2317 .Cm upgrade
2318 .Op Fl V Ar version
2319 .Fl a Ns | Ns Ar pool Ns ...
2320 .Xc
2321 Enables all supported features on the given pool.
2322 Once this is done, the pool will no longer be accessible on systems that do not
2323 support feature flags.
2324 See
2325 .Xr zpool-features 5
2326 for details on compatibility with systems that support feature flags, but do not
2327 support all features enabled on the pool.
2328 .Bl -tag -width Ds
2329 .It Fl a
2330 Enables all supported features on all pools.
2331 .It Fl V Ar version
2332 Upgrade to the specified legacy version.
2333 If the
2334 .Fl V
2335 flag is specified, no features will be enabled on the pool.
2336 This option can only be used to increase the version number up to the last
2337 supported legacy version number.
2338 .El
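.Pp
For example, to enable all supported features on a single pool
.Em tank
(the pool name is illustrative):
.Bd -literal
# zpool upgrade tank
.Ed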
2339 .El
2340 .Sh EXIT STATUS
2341 The following exit values are returned:
2342 .Bl -tag -width Ds
2343 .It Sy 0
2344 Successful completion.
2345 .It Sy 1
2346 An error occurred.
2347 .It Sy 2
2348 Invalid command line options were specified.
2349 .El
2350 .Sh EXAMPLES
2351 .Bl -tag -width Ds
2352 .It Sy Example 1 No Creating a RAID-Z Storage Pool
2353 The following command creates a pool with a single raidz root vdev that
2354 consists of six disks.
2355 .Bd -literal
2356 # zpool create tank raidz sda sdb sdc sdd sde sdf
2357 .Ed
2358 .It Sy Example 2 No Creating a Mirrored Storage Pool
2359 The following command creates a pool with two mirrors, where each mirror
2360 contains two disks.
2361 .Bd -literal
2362 # zpool create tank mirror sda sdb mirror sdc sdd
2363 .Ed
2364 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
2365 The following command creates an unmirrored pool using two disk partitions.
2366 .Bd -literal
2367 # zpool create tank sda1 sdb2
2368 .Ed
2369 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2370 The following command creates an unmirrored pool using files.
2371 While not recommended, a pool based on files can be useful for experimental
2372 purposes.
2373 .Bd -literal
2374 # zpool create tank /path/to/file/a /path/to/file/b
2375 .Ed
2376 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2377 The following command adds two mirrored disks to the pool
2378 .Em tank ,
2379 assuming the pool is already made up of two-way mirrors.
2380 The additional space is immediately available to any datasets within the pool.
2381 .Bd -literal
2382 # zpool add tank mirror sda sdb
2383 .Ed
2384 .It Sy Example 6 No Listing Available ZFS Storage Pools
2385 The following command lists all available pools on the system.
2386 In this case, the pool
2387 .Em zion
2388 is faulted due to a missing device.
2389 The results from this command are similar to the following:
2390 .Bd -literal
2391 # zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
2396 .Ed
2397 .It Sy Example 7 No Destroying a ZFS Storage Pool
2398 The following command destroys the pool
2399 .Em tank
2400 and any datasets contained within.
2401 .Bd -literal
2402 # zpool destroy -f tank
2403 .Ed
2404 .It Sy Example 8 No Exporting a ZFS Storage Pool
2405 The following command exports the devices in pool
2406 .Em tank
2407 so that they can be relocated or later imported.
2408 .Bd -literal
2409 # zpool export tank
2410 .Ed
2411 .It Sy Example 9 No Importing a ZFS Storage Pool
2412 The following command displays available pools, and then imports the pool
2413 .Em tank
2414 for use on the system.
2415 The results from this command are similar to the following:
2416 .Bd -literal
2417 # zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE
2428
2429 # zpool import tank
2430 .Ed
2431 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
the software.
2434 .Bd -literal
2435 # zpool upgrade -a
2436 This system is currently running ZFS version 2.
2437 .Ed
2438 .It Sy Example 11 No Managing Hot Spares
2439 The following command creates a new pool with an available hot spare:
2440 .Bd -literal
2441 # zpool create tank mirror sda sdb spare sdc
2442 .Ed
2443 .Pp
2444 If one of the disks were to fail, the pool would be reduced to the degraded
2445 state.
2446 The failed device can be replaced using the following command:
2447 .Bd -literal
2448 # zpool replace tank sda sdd
2449 .Ed
2450 .Pp
2451 Once the data has been resilvered, the spare is automatically removed and is
2452 made available for use should another device fail.
2453 The hot spare can be permanently removed from the pool using the following
2454 command:
2455 .Bd -literal
2456 # zpool remove tank sdc
2457 .Ed
2458 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
2461 .Bd -literal
2462 # zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
2463 sde sdf
2464 .Ed
2465 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2466 The following command adds two disks for use as cache devices to a ZFS storage
2467 pool:
2468 .Bd -literal
2469 # zpool add pool cache sdc sdd
2470 .Ed
2471 .Pp
2472 Once added, the cache devices gradually fill with content from main memory.
2473 Depending on the size of your cache devices, it could take over an hour for
2474 them to fill.
2475 Capacity and reads can be monitored using the
2476 .Cm iostat
2477 option as follows:
2478 .Bd -literal
2479 # zpool iostat -v pool 5
2480 .Ed
2481 .It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
2482 The following commands remove the mirrored log device
2483 .Sy mirror-2
2484 and mirrored top-level data device
2485 .Sy mirror-1 .
2486 .Pp
2487 Given this configuration:
2488 .Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
2506 .Ed
2507 .Pp
2508 The command to remove the mirrored log
2509 .Sy mirror-2
2510 is:
2511 .Bd -literal
2512 # zpool remove tank mirror-2
2513 .Ed
2514 .Pp
2515 The command to remove the mirrored data
2516 .Sy mirror-1
2517 is:
2518 .Bd -literal
2519 # zpool remove tank mirror-1
2520 .Ed
2521 .It Sy Example 15 No Displaying expanded space on a device
2522 The following command displays the detailed information for the pool
2523 .Em data .
This pool is composed of a single raidz vdev in which one of its devices
increased its capacity by 10GB.
2526 In this example, the pool will not be able to utilize this extra capacity until
2527 all the devices under the raidz vdev have been expanded.
2528 .Bd -literal
2529 # zpool list -v data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
2536 .Ed
2537 .It Sy Example 16 No Adding output columns
2538 Additional columns can be added to the
2539 .Nm zpool Cm status
2540 and
2541 .Nm zpool Cm iostat
output with the
.Fl c
option.
2545 .Bd -literal
2546 # zpool status -c vendor,model,size
NAME      STATE  READ WRITE CKSUM vendor   model         size
tank      ONLINE    0     0     0
mirror-0  ONLINE    0     0     0
U1        ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
U10       ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
U11       ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
U12       ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
U13       ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
U14       ONLINE    0     0     0  SEAGATE  ST8000NM0075  7.3T
2556
2557 # zpool iostat -vc slaves
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  slaves
----------  -----  -----  -----  -----  -----  -----  ---------
tank        20.4G  7.23T     26    152  20.7M  21.6M
  mirror    20.4G  7.23T     26    152  20.7M  21.6M
    U1          -      -      0     31  1.46K  20.6M  sdb sdff
    U10         -      -      0      1  3.77K  13.3K  sdas sdgw
    U11         -      -      0      1   288K  13.3K  sdat sdgx
    U12         -      -      0      1  78.4K  13.3K  sdau sdgy
    U13         -      -      0      1   128K  13.3K  sdav sdgz
    U14         -      -      0      1  63.2K  13.3K  sdfk sdg
2569 .Ed
2570 .El
2571 .Sh ENVIRONMENT VARIABLES
2572 .Bl -tag -width "ZFS_ABORT"
2573 .It Ev ZFS_ABORT
2574 Cause
2575 .Nm zpool
2576 to dump core on exit for the purposes of running
2577 .Sy ::findleaks .
2578 .El
2579 .Bl -tag -width "ZPOOL_IMPORT_PATH"
2580 .It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
2582 .Nm zpool
2583 looks for device nodes and files.
2584 Similar to the
2585 .Fl d
2586 option in
2587 .Nm zpool import .
2588 .El
2589 .Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2590 .It Ev ZPOOL_VDEV_NAME_GUID
2591 Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
This behavior is identical to the
2594 .Nm zpool status -g
2595 command line option.
2596 .El
2597 .Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2598 .It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2599 Cause
2600 .Nm zpool
2601 subcommands to follow links for vdev names by default. This behavior is identical to the
2602 .Nm zpool status -L
2603 command line option.
2604 .El
2605 .Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2606 .It Ev ZPOOL_VDEV_NAME_PATH
2607 Cause
2608 .Nm zpool
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool status -P
2612 command line option.
2613 .El
2614 .Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2615 .It Ev ZFS_VDEV_DEVID_OPT_OUT
Older ZFS on Linux implementations had issues when attempting to display pool
config VDEV names if a
.Sy devid
NVP value was present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a
devid value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux-based pools.
2626 .Pp
2627 A pool can be stripped of any
2628 .Sy devid
2629 values on import or prevented from adding
2630 them on
2631 .Nm zpool create
2632 or
2633 .Nm zpool add
2634 by setting
2635 .Sy ZFS_VDEV_DEVID_OPT_OUT .
2636 .El
2637 .Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2638 .It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool status/iostat
2641 with the
2642 .Fl c
2643 option. Normally, only unprivileged users are allowed to run
2644 .Fl c .
2645 .El
2646 .Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2647 .It Ev ZPOOL_SCRIPTS_PATH
2648 The search path for scripts when running
2649 .Nm zpool status/iostat
2650 with the
2651 .Fl c
2652 option. This is a colon-separated list of directories and overrides the default
2653 .Pa ~/.zpool.d
2654 and
2655 .Pa /etc/zfs/zpool.d
2656 search paths.
2657 .El
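.Pp
For example (the directory and script name are illustrative):
.Bd -literal
# ZPOOL_SCRIPTS_PATH=/opt/zpool-scripts zpool iostat -c myscript
.Ed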
2658 .Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2659 .It Ev ZPOOL_SCRIPTS_ENABLED
2660 Allow a user to run
2661 .Nm zpool status/iostat
2662 with the
2663 .Fl c
2664 option. If
2665 .Sy ZPOOL_SCRIPTS_ENABLED
2666 is not set, it is assumed that the user is allowed to run
2667 .Nm zpool status/iostat -c .
2668 .El
2669 .Sh INTERFACE STABILITY
2670 .Sy Evolving
2671 .Sh SEE ALSO
2672 .Xr zfs-events 5 ,
2673 .Xr zfs-module-parameters 5 ,
2674 .Xr zpool-features 5 ,
2675 .Xr zed 8 ,
2676 .Xr zfs 8