1 .\"
2 .\" CDDL HEADER START
3 .\"
4 .\" The contents of this file are subject to the terms of the
5 .\" Common Development and Distribution License (the "License").
6 .\" You may not use this file except in compliance with the License.
7 .\"
8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 .\" or http://www.opensolaris.org/os/licensing.
10 .\" See the License for the specific language governing permissions
11 .\" and limitations under the License.
12 .\"
13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 .\" If applicable, add the following below this CDDL HEADER, with the
16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
18 .\"
19 .\" CDDL HEADER END
20 .\"
21 .\"
22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23 .\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
24 .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
25 .\" Copyright (c) 2017 Datto Inc.
26 .\" Copyright (c) 2018 George Melikov. All Rights Reserved.
27 .\" Copyright 2017 Nexenta Systems, Inc.
28 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
29 .\"
30 .Dd November 29, 2018
31 .Dt ZPOOL 8 SMM
32 .Os Linux
33 .Sh NAME
34 .Nm zpool
35 .Nd configure ZFS storage pools
36 .Sh SYNOPSIS
37 .Nm
38 .Fl ?V
39 .Nm
40 .Cm add
41 .Op Fl fgLnP
42 .Oo Fl o Ar property Ns = Ns Ar value Oc
43 .Ar pool vdev Ns ...
44 .Nm
45 .Cm attach
46 .Op Fl f
47 .Oo Fl o Ar property Ns = Ns Ar value Oc
48 .Ar pool device new_device
49 .Nm
50 .Cm checkpoint
51 .Op Fl d, -discard
52 .Ar pool
53 .Nm
54 .Cm clear
55 .Ar pool
56 .Op Ar device
57 .Nm
58 .Cm create
59 .Op Fl dfn
60 .Op Fl m Ar mountpoint
61 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
62 .Oo Fl o Ar feature@feature Ns = Ns Ar value Oc
63 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
64 .Op Fl R Ar root
65 .Ar pool vdev Ns ...
66 .Nm
67 .Cm destroy
68 .Op Fl f
69 .Ar pool
70 .Nm
71 .Cm detach
72 .Ar pool device
73 .Nm
74 .Cm events
75 .Op Fl vHf Oo Ar pool Oc | Fl c
76 .Nm
77 .Cm export
78 .Op Fl a
79 .Op Fl f
80 .Ar pool Ns ...
81 .Nm
82 .Cm get
83 .Op Fl Hp
84 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
85 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
86 .Oo Ar pool Oc Ns ...
87 .Nm
88 .Cm history
89 .Op Fl il
90 .Oo Ar pool Oc Ns ...
91 .Nm
92 .Cm import
93 .Op Fl D
94 .Op Fl d Ar dir Ns | Ns device
95 .Nm
96 .Cm import
97 .Fl a
98 .Op Fl DflmN
99 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
100 .Op Fl -rewind-to-checkpoint
101 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
102 .Op Fl o Ar mntopts
103 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
104 .Op Fl R Ar root
105 .Nm
106 .Cm import
107 .Op Fl Dflm
108 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
109 .Op Fl -rewind-to-checkpoint
110 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
111 .Op Fl o Ar mntopts
112 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
113 .Op Fl R Ar root
114 .Op Fl s
115 .Ar pool Ns | Ns Ar id
116 .Op Ar newpool Oo Fl t Oc
117 .Nm
118 .Cm initialize
119 .Op Fl c | Fl s
120 .Ar pool
121 .Op Ar device Ns ...
122 .Nm
123 .Cm iostat
124 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
125 .Op Fl T Sy u Ns | Ns Sy d
126 .Op Fl ghHLnpPvy
127 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
128 .Op Ar interval Op Ar count
129 .Nm
130 .Cm labelclear
131 .Op Fl f
132 .Ar device
133 .Nm
134 .Cm list
135 .Op Fl HgLpPv
136 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
137 .Op Fl T Sy u Ns | Ns Sy d
138 .Oo Ar pool Oc Ns ...
139 .Op Ar interval Op Ar count
140 .Nm
141 .Cm offline
142 .Op Fl f
143 .Op Fl t
144 .Ar pool Ar device Ns ...
145 .Nm
146 .Cm online
147 .Op Fl e
148 .Ar pool Ar device Ns ...
149 .Nm
150 .Cm reguid
151 .Ar pool
152 .Nm
153 .Cm reopen
154 .Op Fl n
155 .Ar pool
156 .Nm
157 .Cm remove
158 .Op Fl np
159 .Ar pool Ar device Ns ...
160 .Nm
161 .Cm remove
162 .Fl s
163 .Ar pool
164 .Nm
165 .Cm replace
166 .Op Fl f
167 .Oo Fl o Ar property Ns = Ns Ar value Oc
168 .Ar pool Ar device Op Ar new_device
169 .Nm
170 .Cm resilver
171 .Ar pool Ns ...
172 .Nm
173 .Cm scrub
174 .Op Fl s | Fl p
175 .Ar pool Ns ...
176 .Nm
177 .Cm trim
178 .Op Fl d
179 .Op Fl r Ar rate
180 .Op Fl c | Fl s
181 .Ar pool
182 .Op Ar device Ns ...
183 .Nm
184 .Cm set
185 .Ar property Ns = Ns Ar value
186 .Ar pool
187 .Nm
188 .Cm split
189 .Op Fl gLlnP
190 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
191 .Op Fl R Ar root
192 .Ar pool newpool
193 .Oo Ar device Oc Ns ...
194 .Nm
195 .Cm status
196 .Oo Fl c Ar SCRIPT Oc
197 .Op Fl DigLpPstvx
198 .Op Fl T Sy u Ns | Ns Sy d
199 .Oo Ar pool Oc Ns ...
200 .Op Ar interval Op Ar count
201 .Nm
202 .Cm sync
203 .Oo Ar pool Oc Ns ...
204 .Nm
205 .Cm upgrade
206 .Nm
207 .Cm upgrade
208 .Fl v
209 .Nm
210 .Cm upgrade
211 .Op Fl V Ar version
212 .Fl a Ns | Ns Ar pool Ns ...
213 .Nm
214 .Cm version
215 .Sh DESCRIPTION
216 The
217 .Nm
218 command configures ZFS storage pools.
219 A storage pool is a collection of devices that provides physical storage and
220 data replication for ZFS datasets.
221 All datasets within a storage pool share the same space.
222 See
223 .Xr zfs 8
224 for information on managing datasets.
225 .Ss Virtual Devices (vdevs)
226 A "virtual device" describes a single device or a collection of devices
227 organized according to certain performance and fault characteristics.
228 The following virtual devices are supported:
229 .Bl -tag -width Ds
230 .It Sy disk
231 A block device, typically located under
232 .Pa /dev .
233 ZFS can use individual slices or partitions, though the recommended mode of
234 operation is to use whole disks.
235 A disk can be specified by a full path, or it can be a shorthand name
236 .Po the relative portion of the path under
237 .Pa /dev
238 .Pc .
239 A whole disk can be specified by omitting the slice or partition designation.
240 For example,
241 .Pa sda
242 is equivalent to
243 .Pa /dev/sda .
244 When given a whole disk, ZFS automatically labels the disk, if necessary.
245 .It Sy file
246 A regular file.
247 The use of files as a backing store is strongly discouraged.
248 It is designed primarily for experimental purposes, as the fault tolerance of a
249 file is only as good as the file system of which it is a part.
250 A file must be specified by a full path.
251 .It Sy mirror
252 A mirror of two or more devices.
253 Data is replicated in an identical fashion across all components of a mirror.
254 A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
255 failing before data integrity is compromised.
256 .It Sy raidz , raidz1 , raidz2 , raidz3
257 A variation on RAID-5 that allows for better distribution of parity and
258 eliminates the RAID-5
259 .Qq write hole
260 .Pq in which data and parity become inconsistent after a power loss .
261 Data and parity are striped across all disks within a raidz group.
262 .Pp
263 A raidz group can have single-, double-, or triple-parity, meaning that the
264 raidz group can sustain one, two, or three failures, respectively, without
265 losing any data.
266 The
267 .Sy raidz1
268 vdev type specifies a single-parity raidz group; the
269 .Sy raidz2
270 vdev type specifies a double-parity raidz group; and the
271 .Sy raidz3
272 vdev type specifies a triple-parity raidz group.
273 The
274 .Sy raidz
275 vdev type is an alias for
276 .Sy raidz1 .
277 .Pp
278 A raidz group with N disks of size X with P parity disks can hold approximately
279 (N-P)*X bytes and can withstand P device(s) failing before data integrity is
280 compromised.
281 The minimum number of devices in a raidz group is one more than the number of
282 parity disks.
283 The recommended number is between 3 and 9 to help increase performance.
284 .It Sy spare
285 A pseudo-vdev which keeps track of available hot spares for a pool.
286 For more information, see the
287 .Sx Hot Spares
288 section.
289 .It Sy log
290 A separate intent log device.
291 If more than one log device is specified, then writes are load-balanced between
292 devices.
293 Log devices can be mirrored.
294 However, raidz vdev types are not supported for the intent log.
295 For more information, see the
296 .Sx Intent Log
297 section.
298 .It Sy dedup
299 A device dedicated solely for dedup data.
300 The redundancy of this device should match the redundancy of the other normal
301 devices in the pool. If more than one dedup device is specified, then
302 allocations are load-balanced between those devices.
303 .It Sy special
304 A device dedicated solely for allocating various kinds of internal metadata,
305 and optionally small file data.
306 The redundancy of this device should match the redundancy of the other normal
307 devices in the pool. If more than one special device is specified, then
308 allocations are load-balanced between those devices.
309 .Pp
310 For more information on special allocations, see the
311 .Sx Special Allocation Class
312 section.
313 .It Sy cache
314 A device used to cache storage pool data.
315 A cache device cannot be configured as a mirror or raidz group.
316 For more information, see the
317 .Sx Cache Devices
318 section.
319 .El
320 .Pp
321 Virtual devices cannot be nested, so a mirror or raidz virtual device can only
322 contain files or disks.
323 Mirrors of mirrors
324 .Pq or other combinations
325 are not allowed.
326 .Pp
327 A pool can have any number of virtual devices at the top of the configuration
328 .Po known as
329 .Qq root vdevs
330 .Pc .
331 Data is dynamically distributed across all top-level devices to balance data
332 among devices.
333 As new virtual devices are added, ZFS automatically places data on the newly
334 available devices.
335 .Pp
336 Virtual devices are specified one at a time on the command line, separated by
337 whitespace.
338 The keywords
339 .Sy mirror
340 and
341 .Sy raidz
342 are used to distinguish where a group ends and another begins.
343 For example, the following creates two root vdevs, each a mirror of two disks:
344 .Bd -literal
345 # zpool create mypool mirror sda sdb mirror sdc sdd
346 .Ed
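.Pp
Similarly, the following creates a single root vdev consisting of a
double-parity raidz group of four disks (device names are illustrative):
.Bd -literal
# zpool create mypool raidz2 sda sdb sdc sdd
.Ed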
347 .Ss Device Failure and Recovery
348 ZFS supports a rich set of mechanisms for handling device failure and data
349 corruption.
350 All metadata and data is checksummed, and ZFS automatically repairs bad data
351 from a good copy when corruption is detected.
352 .Pp
353 In order to take advantage of these features, a pool must make use of some form
354 of redundancy, using either mirrored or raidz groups.
355 While ZFS supports running in a non-redundant configuration, where each root
356 vdev is simply a disk or file, this is strongly discouraged.
357 A single case of bit corruption can render some or all of your data unavailable.
358 .Pp
359 A pool's health status is described by one of three states: online, degraded,
360 or faulted.
361 An online pool has all devices operating normally.
362 A degraded pool is one in which one or more devices have failed, but the data is
363 still available due to a redundant configuration.
364 A faulted pool has corrupted metadata, or one or more faulted devices, and
365 insufficient replicas to continue functioning.
366 .Pp
367 The health of the top-level vdev, such as mirror or raidz device, is
368 potentially impacted by the state of its associated vdevs, or component
369 devices.
370 A top-level vdev or component device is in one of the following states:
371 .Bl -tag -width "DEGRADED"
372 .It Sy DEGRADED
373 One or more top-level vdevs are in the degraded state because one or more
374 component devices are offline.
375 Sufficient replicas exist to continue functioning.
376 .Pp
377 One or more component devices are in the degraded or faulted state, but
378 sufficient replicas exist to continue functioning.
379 The underlying conditions are as follows:
380 .Bl -bullet
381 .It
382 The number of checksum errors exceeds acceptable levels and the device is
383 degraded as an indication that something may be wrong.
384 ZFS continues to use the device as necessary.
385 .It
386 The number of I/O errors exceeds acceptable levels.
387 The device could not be marked as faulted because there are insufficient
388 replicas to continue functioning.
389 .El
390 .It Sy FAULTED
391 One or more top-level vdevs are in the faulted state because one or more
392 component devices are offline.
393 Insufficient replicas exist to continue functioning.
394 .Pp
395 One or more component devices are in the faulted state, and insufficient
396 replicas exist to continue functioning.
397 The underlying conditions are as follows:
398 .Bl -bullet
399 .It
400 The device could be opened, but the contents did not match expected values.
401 .It
402 The number of I/O errors exceeds acceptable levels and the device is faulted to
403 prevent further use of the device.
404 .El
405 .It Sy OFFLINE
406 The device was explicitly taken offline by the
407 .Nm zpool Cm offline
408 command.
409 .It Sy ONLINE
410 The device is online and functioning.
411 .It Sy REMOVED
412 The device was physically removed while the system was running.
413 Device removal detection is hardware-dependent and may not be supported on all
414 platforms.
415 .It Sy UNAVAIL
416 The device could not be opened.
417 If a pool is imported while a device is unavailable, then the device will be
418 identified by a unique identifier instead of its path, since the path was never
419 correct in the first place.
420 .El
421 .Pp
422 If a device is removed and later re-attached to the system, ZFS attempts
423 to put the device online automatically.
424 Device attach detection is hardware-dependent and might not be supported on all
425 platforms.
426 .Ss Hot Spares
427 ZFS allows devices to be associated with pools as
428 .Qq hot spares .
429 These devices are not actively used in the pool, but when an active device
430 fails, it is automatically replaced by a hot spare.
431 To create a pool with hot spares, specify a
432 .Sy spare
433 vdev with any number of devices.
434 For example,
435 .Bd -literal
436 # zpool create pool mirror sda sdb spare sdc sdd
437 .Ed
438 .Pp
439 Spares can be shared across multiple pools, and can be added with the
440 .Nm zpool Cm add
441 command and removed with the
442 .Nm zpool Cm remove
443 command.
444 Once a spare replacement is initiated, a new
445 .Sy spare
446 vdev is created within the configuration that will remain there until the
447 original device is replaced.
448 At this point, the hot spare becomes available again if another device fails.
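.Pp
For example, a spare can later be added to or removed from an existing pool
(device names are illustrative):
.Bd -literal
# zpool add pool spare sde
# zpool remove pool sde
.Ed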
449 .Pp
450 If a pool has a shared spare that is currently being used, the pool cannot be
451 exported, since other pools may use this shared spare, which may lead to
452 potential data corruption.
453 .Pp
454 Shared spares add some risk. If pools sharing a spare are imported on different
455 hosts, and both pools suffer a device failure at the same time, both could
456 attempt to use the same spare simultaneously. This conflict may not be detected,
457 resulting in data corruption.
458 .Pp
459 An in-progress spare replacement can be cancelled by detaching the hot spare.
460 If the original faulted device is detached, then the hot spare assumes its
461 place in the configuration, and is removed from the spare list of all active
462 pools.
463 .Pp
464 Spares cannot replace log devices.
465 .Ss Intent Log
466 The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
467 transactions.
468 For instance, databases often require their transactions to be on stable storage
469 devices when returning from a system call.
470 NFS and other applications can also use
471 .Xr fsync 2
472 to ensure data stability.
473 By default, the intent log is allocated from blocks within the main pool.
474 However, it might be possible to get better performance using separate intent
475 log devices such as NVRAM or a dedicated disk.
476 For example:
477 .Bd -literal
478 # zpool create pool sda sdb log sdc
479 .Ed
480 .Pp
481 Multiple log devices can also be specified, and they can be mirrored.
482 See the
483 .Sx EXAMPLES
484 section for an example of mirroring multiple log devices.
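.Pp
As a brief illustration (device names are hypothetical), a mirrored log can be
added to an existing pool with:
.Bd -literal
# zpool add pool log mirror sdc sdd
.Ed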
485 .Pp
486 Log devices can be added, replaced, attached, detached and removed. In
487 addition, log devices are imported and exported as part of the pool
488 that contains them.
489 Mirrored devices can be removed by specifying the top-level mirror vdev.
490 .Ss Cache Devices
491 Devices can be added to a storage pool as
492 .Qq cache devices .
493 These devices provide an additional layer of caching between main memory and
494 disk.
495 For read-heavy workloads, where the working set size is much larger than what
496 can be cached in main memory, using cache devices allows much more of this
497 working set to be served from low-latency media.
498 Using cache devices provides the greatest performance improvement for random
499 read workloads of mostly static content.
500 .Pp
501 To create a pool with cache devices, specify a
502 .Sy cache
503 vdev with any number of devices.
504 For example:
505 .Bd -literal
506 # zpool create pool sda sdb cache sdc sdd
507 .Ed
508 .Pp
509 Cache devices cannot be mirrored or part of a raidz configuration.
510 If a read error is encountered on a cache device, that read I/O is reissued to
511 the original storage pool device, which might be part of a mirrored or raidz
512 configuration.
513 .Pp
514 The content of the cache devices is considered volatile, as is the case with
515 other system caches.
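.Pp
Cache devices can likewise be added to or removed from an existing pool, for
example (device names are illustrative):
.Bd -literal
# zpool add pool cache sde
# zpool remove pool sde
.Ed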
516 .Ss Pool checkpoint
517 Before starting critical procedures that include destructive actions (e.g.
518 .Nm zfs Cm destroy
519 ), an administrator can checkpoint the pool's state and, in the case of a
520 mistake or failure, rewind the entire pool back to the checkpoint.
521 Otherwise, the checkpoint can be discarded when the procedure has completed
522 successfully.
523 .Pp
524 A pool checkpoint can be thought of as a pool-wide snapshot and should be used
525 with care as it contains every part of the pool's state, from properties to vdev
526 configuration.
527 Thus, while a pool has a checkpoint, certain operations are not allowed.
528 Specifically, vdev removal/attach/detach, mirror splitting, and
529 changing the pool's guid.
530 Adding a new vdev is supported but in the case of a rewind it will have to be
531 added again.
532 Finally, users of this feature should keep in mind that scrubs in a pool that
533 has a checkpoint do not repair checkpointed data.
534 .Pp
535 To create a checkpoint for a pool:
536 .Bd -literal
537 # zpool checkpoint pool
538 .Ed
539 .Pp
540 To later rewind to its checkpointed state, you need to first export it and
541 then rewind it during import:
542 .Bd -literal
543 # zpool export pool
544 # zpool import --rewind-to-checkpoint pool
545 .Ed
546 .Pp
547 To discard the checkpoint from a pool:
548 .Bd -literal
549 # zpool checkpoint -d pool
550 .Ed
551 .Pp
552 Dataset reservations (controlled by the
553 .Nm reservation
554 or
555 .Nm refreservation
556 zfs properties) may be unenforceable while a checkpoint exists, because the
557 checkpoint is allowed to consume the dataset's reservation.
558 Finally, data that is part of the checkpoint but has been freed in the
559 current state of the pool won't be scanned during a scrub.
560 .Ss Special Allocation Class
561 The allocations in the special class are dedicated to specific block types.
562 By default this includes all metadata, the indirect blocks of user data, and
563 any dedup data. The class can also be provisioned to accept a limited
564 percentage of small file data blocks.
565 .Pp
566 A pool must always have at least one general (non-special) vdev before
567 other devices can be assigned to the special class. If the special class
568 becomes full, then allocations intended for it will spill back into the
569 normal class.
570 .Pp
571 Dedup data can be excluded from the special class by setting the
572 .Sy zfs_ddt_data_is_special
573 zfs module parameter to false (0).
574 .Pp
575 Inclusion of small file blocks in the special class is opt-in. Each dataset
576 can control the size of small file blocks allowed in the special class by
577 setting the
578 .Sy special_small_blocks
579 dataset property. It defaults to zero, so you must opt in by setting it to a
580 non-zero value. See
581 .Xr zfs 8
582 for more info on setting this property.
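.Pp
As an illustrative sketch (device and dataset names are hypothetical), a pool
with a mirrored special vdev can be created, and a dataset opted in to small
file blocks, as follows:
.Bd -literal
# zpool create pool raidz sda sdb sdc special mirror sdd sde
# zfs set special_small_blocks=32K pool/fs
.Ed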
583 .Ss Properties
584 Each pool has several properties associated with it.
585 Some properties are read-only statistics while others are configurable and
586 change the behavior of the pool.
587 .Pp
588 The following are read-only properties:
589 .Bl -tag -width Ds
590 .It Cm allocated
591 Amount of storage used within the pool.
592 See
593 .Sy fragmentation
594 and
595 .Sy free
596 for more information.
597 .It Sy capacity
598 Percentage of pool space used.
599 This property can also be referred to by its shortened column name,
600 .Sy cap .
601 .It Sy expandsize
602 Amount of uninitialized space within the pool or device that can be used to
603 increase the total capacity of the pool.
604 Uninitialized space consists of any space on an EFI labeled vdev which has not
605 been brought online
606 .Po e.g., using
607 .Nm zpool Cm online Fl e
608 .Pc .
609 This space occurs when a LUN is dynamically expanded.
610 .It Sy fragmentation
611 The amount of fragmentation in the pool. As the amount of space
612 .Sy allocated
613 increases, it becomes more difficult to locate
614 .Sy free
615 space. This may result in lower write performance compared to pools with more
616 unfragmented free space.
617 .It Sy free
618 The amount of free space available in the pool.
619 By contrast, the
620 .Xr zfs 8
621 .Sy available
622 property describes how much new data can be written to ZFS filesystems/volumes.
623 The zpool
624 .Sy free
625 property is not generally useful for this purpose, and can be substantially more than the zfs
626 .Sy available
627 space. This discrepancy is due to several factors, including raidz parity; zfs
628 reservation, quota, refreservation, and refquota properties; and space set aside by
629 .Sy spa_slop_shift
630 (see
631 .Xr zfs-module-parameters 5
632 for more information).
633 .It Sy freeing
634 After a file system or snapshot is destroyed, the space it was using is
635 returned to the pool asynchronously.
636 .Sy freeing
637 is the amount of space remaining to be reclaimed.
638 Over time
639 .Sy freeing
640 will decrease while
641 .Sy free
642 increases.
643 .It Sy health
644 The current health of the pool.
645 Health can be one of
646 .Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
647 .It Sy guid
648 A unique identifier for the pool.
649 .It Sy load_guid
650 A unique identifier for the pool.
651 Unlike the
652 .Sy guid
653 property, this identifier is generated every time the pool is loaded (i.e. it
654 does not persist across imports/exports) and never changes while the pool is loaded
655 (even if a
656 .Sy reguid
657 operation takes place).
658 .It Sy size
659 Total size of the storage pool.
660 .It Sy unsupported@ Ns Em feature_guid
661 Information about unsupported features that are enabled on the pool.
662 See
663 .Xr zpool-features 5
664 for details.
665 .El
666 .Pp
667 The space usage properties report actual physical space available to the
668 storage pool.
669 The physical space can be different from the total amount of space that any
670 contained datasets can actually use.
671 The amount of space used in a raidz configuration depends on the characteristics
672 of the data being written.
673 In addition, ZFS reserves some space for internal accounting that the
674 .Xr zfs 8
675 command takes into account, but the
676 .Nm
677 command does not.
678 For non-full pools of a reasonable size, these effects should be invisible.
679 For small pools, or pools that are close to being completely full, these
680 discrepancies may become more noticeable.
681 .Pp
682 The following property can be set at creation time and import time:
683 .Bl -tag -width Ds
684 .It Sy altroot
685 Alternate root directory.
686 If set, this directory is prepended to any mount points within the pool.
687 This can be used when examining an unknown pool where the mount points cannot be
688 trusted, or in an alternate boot environment, where the typical paths are not
689 valid.
690 .Sy altroot
691 is not a persistent property.
692 It is valid only while the system is up.
693 Setting
694 .Sy altroot
695 defaults to using
696 .Sy cachefile Ns = Ns Sy none ,
697 though this may be overridden using an explicit setting.
698 .El
699 .Pp
700 The following property can be set only at import time:
701 .Bl -tag -width Ds
702 .It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
703 If set to
704 .Sy on ,
705 the pool will be imported in read-only mode.
706 This property can also be referred to by its shortened column name,
707 .Sy rdonly .
708 .El
709 .Pp
710 The following properties can be set at creation time and import time, and later
711 changed with the
712 .Nm zpool Cm set
713 command:
714 .Bl -tag -width Ds
715 .It Sy ashift Ns = Ns Sy ashift
716 Pool sector size exponent, to the power of
717 .Sy 2
718 (internally referred to as
719 .Sy ashift
720 ). Values from 9 to 16, inclusive, are valid; also, the
721 value 0 (the default) means to auto-detect using the kernel's block
722 layer and a ZFS internal exception list. I/O operations will be aligned
723 to the specified size boundaries. Additionally, the minimum (disk)
724 write size will be set to the specified size, so this represents a
725 space vs. performance trade-off. For optimal performance, the pool
726 sector size should be greater than or equal to the sector size of the
727 underlying disks. The typical case for setting this property is when
728 performance is important and the underlying disks use 4KiB sectors but
729 report 512B sectors to the OS (for compatibility reasons); in that
730 case, set
731 .Sy ashift=12
732 (which is 1<<12 = 4096). When set, this property is
733 used as the default hint value in subsequent vdev operations (add,
734 attach and replace). Changing this value will not modify any existing
735 vdev, not even on disk replacement; however, it can be used, for
736 instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
737 device: this will probably result in bad performance but at the
738 same time could prevent loss of data.
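.Pp
For example, to create a pool on disks that use 4KiB physical sectors but
report 512B sectors (device names are illustrative):
.Bd -literal
# zpool create -o ashift=12 pool mirror sda sdb
.Ed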
739 .It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
740 Controls automatic pool expansion when the underlying LUN is grown.
741 If set to
742 .Sy on ,
743 the pool will be resized according to the size of the expanded device.
744 If the device is part of a mirror or raidz then all devices within that
745 mirror/raidz group must be expanded before the new space is made available to
746 the pool.
747 The default behavior is
748 .Sy off .
749 This property can also be referred to by its shortened column name,
750 .Sy expand .
751 .It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
752 Controls automatic device replacement.
753 If set to
754 .Sy off ,
755 device replacement must be initiated by the administrator by using the
756 .Nm zpool Cm replace
757 command.
758 If set to
759 .Sy on ,
760 any new device, found in the same physical location as a device that previously
761 belonged to the pool, is automatically formatted and replaced.
762 The default behavior is
763 .Sy off .
764 This property can also be referred to by its shortened column name,
765 .Sy replace .
766 Autoreplace can also be used with virtual disks (like device
767 mapper) provided that you use the /dev/disk/by-vdev paths set up by
768 vdev_id.conf. See the
769 .Xr vdev_id 8
770 man page for more details.
771 Autoreplace and autoonline require the ZFS Event Daemon to be configured and
772 running. See the
773 .Xr zed 8
774 man page for more details.
775 .It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
776 Identifies the default bootable dataset for the root pool. This property is
777 expected to be set mainly by the installation and upgrade programs.
778 Not all Linux distribution boot processes use the bootfs property.
779 .It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
780 Controls the location of where the pool configuration is cached.
781 Discovering all pools on system startup requires a cached copy of the
782 configuration data that is stored on the root file system.
783 All pools in this cache are automatically imported when the system boots.
784 Some environments, such as install and clustering, need to cache this
785 information in a different location so that pools are not automatically
786 imported.
787 Setting this property caches the pool configuration in a different location, from
788 which pools can later be imported with
789 .Nm zpool Cm import Fl c .
790 Setting it to the value
791 .Sy none
792 creates a temporary pool that is never cached, and the
793 .Qq
794 .Pq empty string
795 uses the default location.
796 .Pp
797 Multiple pools can share the same cache file.
798 Because the kernel destroys and recreates this file when pools are added and
799 removed, care should be taken when attempting to access this file.
800 When the last pool using a
801 .Sy cachefile
802 is exported or destroyed, the file will be empty.
803 .It Sy comment Ns = Ns Ar text
804 A text string consisting of printable ASCII characters that will be stored
805 such that it is available even if the pool becomes faulted.
806 An administrator can provide additional information about a pool using this
807 property.
808 .It Sy dedupditto Ns = Ns Ar number
809 This property is deprecated. In a future release, it will no longer have any
810 effect.
811 .Pp
812 Threshold for the number of block ditto copies.
813 If the reference count for a deduplicated block increases above this number, a
814 new ditto copy of this block is automatically stored.
815 The default setting is
816 .Sy 0
817 which causes no ditto copies to be created for deduplicated blocks.
818 The minimum legal nonzero setting is
819 .Sy 100 .
820 .It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
821 Controls whether a non-privileged user is granted access based on the dataset
822 permissions defined on the dataset.
823 See
824 .Xr zfs 8
825 for more information on ZFS delegated administration.
826 .It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
827 Controls the system behavior in the event of catastrophic pool failure.
828 This condition is typically a result of a loss of connectivity to the underlying
829 storage device(s) or a failure of all devices within the pool.
830 The behavior of such an event is determined as follows:
831 .Bl -tag -width "continue"
832 .It Sy wait
833 Blocks all I/O access until the device connectivity is recovered and the errors
834 are cleared.
835 This is the default behavior.
836 .It Sy continue
837 Returns
838 .Er EIO
839 to any new write I/O requests but allows reads to any of the remaining healthy
840 devices.
841 Any write requests that have yet to be committed to disk would be blocked.
842 .It Sy panic
843 Prints out a message to the console and generates a system crash dump.
844 .El
845 .It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
846 When set to
847 .Sy on ,
848 space which has been recently freed, and is no longer allocated by the pool,
849 will be periodically trimmed. This allows block device vdevs which support
850 BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
851 supports hole-punching, to reclaim unused blocks. The default setting for
852 this property is
853 .Sy off .
854 .Pp
855 Automatic TRIM does not immediately reclaim blocks after a free. Instead,
856 it will optimistically delay, allowing smaller ranges to be aggregated into
857 a few larger ones. These can then be issued more efficiently to the storage.
858 .Pp
859 Be aware that automatic trimming of recently freed data blocks can put
860 significant stress on the underlying storage devices. This will vary
861 depending on how well the specific device handles these commands. For
862 lower-end devices it is often possible to achieve most of the benefits
863 of automatic trimming by running an on-demand (manual) TRIM periodically
864 using the
865 .Nm zpool Cm trim
866 command.
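.Pp
For example, automatic TRIM can be enabled on an existing pool, or an on-demand
TRIM issued instead:
.Bd -literal
# zpool set autotrim=on pool
# zpool trim pool
.Ed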
867 .It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
868 The value of this property is the current state of
869 .Ar feature_name .
870 The only valid value when setting this property is
871 .Sy enabled
872 which moves
873 .Ar feature_name
874 to the enabled state.
875 See
876 .Xr zpool-features 5
877 for details on feature states.
878 .It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
879 Controls whether information about snapshots associated with this pool is
880 output when
881 .Nm zfs Cm list
882 is run without the
883 .Fl t
884 option.
885 The default value is
886 .Sy off .
887 This property can also be referred to by its shortened name,
888 .Sy listsnaps .
889 .It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
890 Controls whether a pool activity check should be performed during
891 .Nm zpool Cm import .
892 When a pool is determined to be active it cannot be imported, even with the
893 .Fl f
894 option. This property is intended to be used in failover configurations
895 where multiple hosts have access to a pool on shared storage.
896 .Pp
897 Multihost provides protection on import only. It does not protect against an
898 individual device being used in multiple pools, regardless of the type of vdev.
899 See the discussion under
900 .Sy zpool create .
901 .Pp
902 When this property is on, periodic writes to storage occur to show the pool is
903 in use. See
904 .Sy zfs_multihost_interval
905 in the
906 .Xr zfs-module-parameters 5
907 man page. In order to enable this property, each host must set a unique hostid.
908 See
909 .Xr genhostid 1 ,
910 .Xr zgenhostid 8 ,
911 .Xr spl-module-parameters 5
912 for additional details. The default value is
913 .Sy off .
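.Pp
As a brief sketch, a unique hostid can be generated on each host before
enabling the property:
.Bd -literal
# zgenhostid
# zpool set multihost=on pool
.Ed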
914 .It Sy version Ns = Ns Ar version
915 The current on-disk version of the pool.
916 This can be increased, but never decreased.
917 The preferred method of updating pools is with the
918 .Nm zpool Cm upgrade
919 command, though this property can be used when a specific version is needed for
920 backwards compatibility.
921 Once feature flags are enabled on a pool this property will no longer have a
922 value.
923 .El
924 .Ss Subcommands
925 All subcommands that modify state are logged persistently to the pool in their
926 original form.
927 .Pp
928 The
929 .Nm
930 command provides subcommands to create and destroy storage pools, add capacity
931 to storage pools, and provide information about the storage pools.
932 The following subcommands are supported:
933 .Bl -tag -width Ds
934 .It Xo
935 .Nm
936 .Fl ?
937 .Xc
938 Displays a help message.
939 .It Xo
940 .Nm
941 .Fl V, -version
942 .Xc
943 An alias for the
944 .Nm zpool Cm version
945 subcommand.
946 .It Xo
947 .Nm
948 .Cm add
949 .Op Fl fgLnP
950 .Oo Fl o Ar property Ns = Ns Ar value Oc
951 .Ar pool vdev Ns ...
952 .Xc
953 Adds the specified virtual devices to the given pool.
954 The
955 .Ar vdev
956 specification is described in the
957 .Sx Virtual Devices
958 section.
959 The behavior of the
960 .Fl f
961 option, and the device checks performed are described in the
962 .Nm zpool Cm create
963 subcommand.
964 .Bl -tag -width Ds
965 .It Fl f
966 Forces use of
967 .Ar vdev Ns s ,
968 even if they appear in use or specify a conflicting replication level.
969 Not all devices can be overridden in this manner.
970 .It Fl g
971 Display
972 .Ar vdev
973 GUIDs instead of the normal device names. These GUIDs can be used in place of
974 device names for the zpool detach/offline/remove/replace commands.
975 .It Fl L
976 Display real paths for
977 .Ar vdev Ns s
978 resolving all symbolic links. This can be used to look up the current block
979 device name regardless of the /dev/disk/ path used to open it.
980 .It Fl n
981 Displays the configuration that would be used without actually adding the
982 .Ar vdev Ns s .
983 The actual pool add operation can still fail due to insufficient privileges or
984 device sharing.
985 .It Fl P
986 Display real paths for
987 .Ar vdev Ns s
988 instead of only the last component of the path. This can be used in
989 conjunction with the
990 .Fl L
991 flag.
992 .It Fl o Ar property Ns = Ns Ar value
993 Sets the given pool properties. See the
994 .Sx Properties
995 section for a list of valid properties that can be set. The only property
996 supported at the moment is ashift.
997 .El
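.Pp
For example, to add another mirror as a new top-level vdev (device names are
illustrative):
.Bd -literal
# zpool add pool mirror sde sdf
.Ed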
998 .It Xo
999 .Nm
1000 .Cm attach
1001 .Op Fl f
1002 .Oo Fl o Ar property Ns = Ns Ar value Oc
1003 .Ar pool device new_device
1004 .Xc
1005 Attaches
1006 .Ar new_device
1007 to the existing
1008 .Ar device .
1009 The existing device cannot be part of a raidz configuration.
1010 If
1011 .Ar device
1012 is not currently part of a mirrored configuration,
1013 .Ar device
1014 automatically transforms into a two-way mirror of
1015 .Ar device
1016 and
1017 .Ar new_device .
1018 If
1019 .Ar device
1020 is part of a two-way mirror, attaching
1021 .Ar new_device
1022 creates a three-way mirror, and so on.
1023 In either case,
1024 .Ar new_device
1025 begins to resilver immediately.
1026 .Bl -tag -width Ds
1027 .It Fl f
1028 Forces use of
1029 .Ar new_device ,
1030 even if it appears to be in use.
1031 Not all devices can be overridden in this manner.
1032 .It Fl o Ar property Ns = Ns Ar value
1033 Sets the given pool properties. See the
1034 .Sx Properties
1035 section for a list of valid properties that can be set. The only property
1036 supported at the moment is ashift.
1037 .El
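.Pp
For example, to convert a single-disk vdev into a two-way mirror (device names
are illustrative):
.Bd -literal
# zpool attach pool sda sdb
.Ed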
1038 .It Xo
1039 .Nm
1040 .Cm checkpoint
1041 .Op Fl d, -discard
1042 .Ar pool
1043 .Xc
1044 Checkpoints the current state of
1045 .Ar pool ,
1046 which can be later restored by
1047 .Nm zpool Cm import --rewind-to-checkpoint .
1048 The existence of a checkpoint in a pool prohibits the following
1049 .Nm zpool
1050 commands:
1051 .Cm remove ,
1052 .Cm attach ,
1053 .Cm detach ,
1054 .Cm split ,
1055 and
1056 .Cm reguid .
1057 In addition, it may break reservation boundaries if the pool lacks free
1058 space.
1059 The
1060 .Nm zpool Cm status
1061 command indicates the existence of a checkpoint or the progress of discarding a
1062 checkpoint from a pool.
1063 The
1064 .Nm zpool Cm list
1065 command reports how much space the checkpoint takes from the pool.
1066 .Bl -tag -width Ds
1067 .It Fl d, -discard
1068 Discards an existing checkpoint from
1069 .Ar pool .
1070 .El
1071 .It Xo
1072 .Nm
1073 .Cm clear
1074 .Ar pool
1075 .Op Ar device
1076 .Xc
1077 Clears device errors in a pool.
1078 If no arguments are specified, all device errors within the pool are cleared.
1079 If one or more devices is specified, only those errors associated with the
1080 specified device or devices are cleared.
1081 If multihost is enabled, and the pool has been suspended, this will not
1082 resume I/O. While the pool was suspended, it may have been imported on
1083 another host, and resuming I/O could result in pool damage.
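.Pp
For example, to clear errors for an entire pool or for a single device (device
name is illustrative):
.Bd -literal
# zpool clear pool
# zpool clear pool sda
.Ed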
1084 .It Xo
1085 .Nm
1086 .Cm create
1087 .Op Fl dfn
1088 .Op Fl m Ar mountpoint
1089 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1090 .Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
1091 .Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
1092 .Op Fl R Ar root
1093 .Op Fl t Ar tname
1094 .Ar pool vdev Ns ...
1095 .Xc
1096 Creates a new storage pool containing the virtual devices specified on the
1097 command line.
1098 The pool name must begin with a letter, and can only contain
1099 alphanumeric characters as well as underscore
1100 .Pq Qq Sy _ ,
1101 dash
1102 .Pq Qq Sy \&- ,
1103 colon
1104 .Pq Qq Sy \&: ,
1105 space
1106 .Pq Qq Sy \&\ ,
1107 and period
1108 .Pq Qq Sy \&. .
1109 The pool names
1110 .Sy mirror ,
1111 .Sy raidz ,
1112 .Sy spare
1113 and
1114 .Sy log
1115 are reserved, as are names beginning with
1116 .Sy mirror ,
1117 .Sy raidz ,
1118 .Sy spare ,
1119 and the pattern
1120 .Sy c[0-9] .
1121 The
1122 .Ar vdev
1123 specification is described in the
1124 .Sx Virtual Devices
1125 section.
1126 .Pp
1127 The command attempts to verify that each device specified is accessible and not
1128 currently in use by another subsystem. However, this check is not robust enough
1129 to detect simultaneous attempts to use a new device in different pools, even if
1130 .Sy multihost
1131 is
1132 .Sy enabled.
1133 The
1134 administrator must ensure that simultaneous invocations of any combination of
1135 .Sy zpool replace ,
1136 .Sy zpool create ,
1137 .Sy zpool add ,
1138 or
1139 .Sy zpool labelclear ,
1140 do not refer to the same device. Using the same device in two pools will
1141 result in pool corruption.
1142 .Pp
1143 There are some uses, such as being currently mounted, or specified as the
1144 dedicated dump device, that prevent a device from ever being used by ZFS.
1145 Other uses, such as having a preexisting UFS file system, can be overridden with
1146 the
1147 .Fl f
1148 option.
1149 .Pp
1150 The command also checks that the replication strategy for the pool is
1151 consistent.
1152 An attempt to combine redundant and non-redundant storage in a single pool, or
1153 to mix disks and files, results in an error unless
1154 .Fl f
1155 is specified.
1156 The use of differently sized devices within a single raidz or mirror group is
1157 also flagged as an error unless
1158 .Fl f
1159 is specified.
1160 .Pp
1161 Unless the
1162 .Fl R
1163 option is specified, the default mount point is
1164 .Pa / Ns Ar pool .
1165 The mount point must not exist or must be empty, or else the root dataset
1166 cannot be mounted.
1167 This can be overridden with the
1168 .Fl m
1169 option.
1170 .Pp
1171 By default all supported features are enabled on the new pool unless the
1172 .Fl d
1173 option is specified.
1174 .Bl -tag -width Ds
1175 .It Fl d
1176 Do not enable any features on the new pool.
1177 Individual features can be enabled by setting their corresponding properties to
1178 .Sy enabled
1179 with the
1180 .Fl o
1181 option.
1182 See
1183 .Xr zpool-features 5
1184 for details about feature properties.
1185 .It Fl f
1186 Forces use of
1187 .Ar vdev Ns s ,
1188 even if they appear in use or specify a conflicting replication level.
1189 Not all devices can be overridden in this manner.
1190 .It Fl m Ar mountpoint
1191 Sets the mount point for the root dataset.
1192 The default mount point is
1193 .Pa /pool
1194 or
1195 .Pa altroot/pool
1196 if
1197 .Ar altroot
1198 is specified.
1199 The mount point must be an absolute path,
1200 .Sy legacy ,
1201 or
1202 .Sy none .
1203 For more information on dataset mount points, see
1204 .Xr zfs 8 .
1205 .It Fl n
1206 Displays the configuration that would be used without actually creating the
1207 pool.
1208 The actual pool creation can still fail due to insufficient privileges or
1209 device sharing.
1210 .It Fl o Ar property Ns = Ns Ar value
1211 Sets the given pool properties.
1212 See the
1213 .Sx Properties
1214 section for a list of valid properties that can be set.
1215 .It Fl o Ar feature@feature Ns = Ns Ar value
1216 Sets the given pool feature. See the
1217 .Xr zpool-features 5
1218 man page for a list of valid features that can be set.
1219 Value can be either disabled or enabled.
1220 .It Fl O Ar file-system-property Ns = Ns Ar value
1221 Sets the given file system properties in the root file system of the pool.
1222 See the
1223 .Sx Properties
1224 section of
1225 .Xr zfs 8
1226 for a list of valid properties that can be set.
1227 .It Fl R Ar root
1228 Equivalent to
1229 .Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
1230 .It Fl t Ar tname
1231 Sets the in-core pool name to
1232 .Sy tname
1233 while the on-disk name will be the name specified as the pool name
1234 .Sy pool .
1235 This will set the default cachefile property to none. This is intended
1236 to handle name space collisions when creating pools for other systems,
1237 such as virtual machines or physical machines whose pools live on network
1238 block devices.
1239 .El
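.Pp
For example, combining several of the options above (names are illustrative):
.Bd -literal
# zpool create -m /export/tank -O compression=on tank mirror sda sdb
.Ed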
1240 .It Xo
1241 .Nm
1242 .Cm destroy
1243 .Op Fl f
1244 .Ar pool
1245 .Xc
1246 Destroys the given pool, freeing up any devices for other use.
1247 This command tries to unmount any active datasets before destroying the pool.
1248 .Bl -tag -width Ds
1249 .It Fl f
1250 Forces any active datasets contained within the pool to be unmounted.
1251 .El
1252 .It Xo
1253 .Nm
1254 .Cm detach
1255 .Ar pool device
1256 .Xc
1257 Detaches
1258 .Ar device
1259 from a mirror.
1260 The operation is refused if there are no other valid replicas of the data.
1261 If the device may be re-added to the pool later on, then consider the
1262 .Sy zpool offline
1263 command instead.
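.Pp
For example (device name is illustrative):
.Bd -literal
# zpool detach pool sdb
.Ed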
1264 .It Xo
1265 .Nm
1266 .Cm events
1267 .Op Fl vHf Oo Ar pool Oc | Fl c
1268 .Xc
1269 Lists all recent events generated by the ZFS kernel modules. These events
1270 are consumed by the
1271 .Xr zed 8
1272 daemon and used to automate administrative tasks such as replacing a failed device
1273 with a hot spare. For more information about the subclasses and event payloads
1274 that can be generated see the
1275 .Xr zfs-events 5
1276 man page.
1277 .Bl -tag -width Ds
1278 .It Fl c
1279 Clear all previous events.
1280 .It Fl f
1281 Follow mode.
1282 .It Fl H
1283 Scripted mode. Do not display headers, and separate fields by a
1284 single tab instead of arbitrary space.
1285 .It Fl v
1286 Print the entire payload for each event.
1287 .El
1288 .It Xo
1289 .Nm
1290 .Cm export
1291 .Op Fl a
1292 .Op Fl f
1293 .Ar pool Ns ...
1294 .Xc
1295 Exports the given pools from the system.
1296 All devices are marked as exported, but are still considered in use by other
1297 subsystems.
1298 The devices can be moved between systems
1299 .Pq even those of different endianness
1300 and imported as long as a sufficient number of devices are present.
1301 .Pp
1302 Before exporting the pool, all datasets within the pool are unmounted.
1303 A pool cannot be exported if it has a shared spare that is currently being
1304 used.
1305 .Pp
1306 For pools to be portable, you must give the
1307 .Nm
1308 command whole disks, not just partitions, so that ZFS can label the disks with
1309 portable EFI labels.
1310 Otherwise, disk drivers on platforms of different endianness will not recognize
1311 the disks.
1312 .Bl -tag -width Ds
1313 .It Fl a
1314 Exports all pools imported on the system.
1315 .It Fl f
1316 Forcefully unmount all datasets, using the
1317 .Nm unmount Fl f
1318 command.
1319 .Pp
1320 This command will forcefully export the pool even if it has a shared spare that
1321 is currently being used.
1322 This may lead to potential data corruption.
1323 .El
1324 .It Xo
1325 .Nm
1326 .Cm get
1327 .Op Fl Hp
1328 .Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
1329 .Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
1330 .Oo Ar pool Oc Ns ...
1331 .Xc
1332 Retrieves the given list of properties
1333 .Po
1334 or all properties if
1335 .Sy all
1336 is used
1337 .Pc
1338 for the specified storage pool(s).
1339 These properties are displayed with the following fields:
1340 .Bd -literal
1341 name Name of storage pool
1342 property Property name
1343 value Property value
1344 source Property source, either 'default' or 'local'.
1345 .Ed
1346 .Pp
1347 See the
1348 .Sx Properties
1349 section for more information on the available pool properties.
1350 .Bl -tag -width Ds
1351 .It Fl H
1352 Scripted mode.
1353 Do not display headers, and separate fields by a single tab instead of arbitrary
1354 space.
1355 .It Fl o Ar field
1356 A comma-separated list of columns to display.
1357 .Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
1358 is the default value.
1359 .It Fl p
1360 Display numbers in parsable (exact) values.
1361 .El
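.Pp
For example, to retrieve a single property value in a script-friendly form:
.Bd -literal
# zpool get -H -o value capacity pool
.Ed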
1362 .It Xo
1363 .Nm
1364 .Cm history
1365 .Op Fl il
1366 .Oo Ar pool Oc Ns ...
1367 .Xc
1368 Displays the command history of the specified pool(s) or all pools if no pool is
1369 specified.
1370 .Bl -tag -width Ds
1371 .It Fl i
1372 Displays internally logged ZFS events in addition to user initiated events.
1373 .It Fl l
1374 Displays log records in long format, which in addition to standard format
1375 includes, the user name, the hostname, and the zone in which the operation was
1376 performed.
1377 .El
1378 .It Xo
1379 .Nm
1380 .Cm import
1381 .Op Fl D
1382 .Op Fl d Ar dir Ns | Ns device
1383 .Xc
1384 Lists pools available to import.
1385 If the
1386 .Fl d
1387 option is not specified, this command searches for devices in
1388 .Pa /dev .
1389 The
1390 .Fl d
1391 option can be specified multiple times, and all directories are searched.
1392 If a device appears to be part of an exported pool, this command displays a
1393 summary of the pool, including the pool name, a numeric identifier, the vdev
1394 layout, and the current health of each device or file.
1395 Destroyed pools, pools that were previously destroyed with the
1396 .Nm zpool Cm destroy
1397 command, are not listed unless the
1398 .Fl D
1399 option is specified.
1400 .Pp
1401 The numeric identifier is unique, and can be used instead of the pool name when
1402 multiple exported pools of the same name are available.
1403 .Bl -tag -width Ds
1404 .It Fl c Ar cachefile
1405 Reads configuration from the given
1406 .Ar cachefile
1407 that was created with the
1408 .Sy cachefile
1409 pool property.
1410 This
1411 .Ar cachefile
1412 is used instead of searching for devices.
1413 .It Fl d Ar dir Ns | Ns Ar device
1414 Uses
1415 .Ar device
1416 or searches for devices or files in
1417 .Ar dir .
1418 The
1419 .Fl d
1420 option can be specified multiple times.
1421 .It Fl D
1422 Lists destroyed pools only.
1423 .El
1424 .It Xo
1425 .Nm
1426 .Cm import
1427 .Fl a
1428 .Op Fl DflmN
1429 .Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
1430 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
1431 .Op Fl o Ar mntopts
1432 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1433 .Op Fl R Ar root
1434 .Op Fl s
1435 .Xc
1436 Imports all pools found in the search directories.
1437 Identical to the previous command, except that all pools with a sufficient
1438 number of devices available are imported.
1439 Destroyed pools, pools that were previously destroyed with the
1440 .Nm zpool Cm destroy
1441 command, will not be imported unless the
1442 .Fl D
1443 option is specified.
1444 .Bl -tag -width Ds
1445 .It Fl a
1446 Searches for and imports all pools found.
1447 .It Fl c Ar cachefile
1448 Reads configuration from the given
1449 .Ar cachefile
1450 that was created with the
1451 .Sy cachefile
1452 pool property.
1453 This
1454 .Ar cachefile
1455 is used instead of searching for devices.
1456 .It Fl d Ar dir Ns | Ns Ar device
1457 Uses
1458 .Ar device
1459 or searches for devices or files in
1460 .Ar dir .
1461 The
1462 .Fl d
1463 option can be specified multiple times.
1464 This option is incompatible with the
1465 .Fl c
1466 option.
1467 .It Fl D
1468 Imports destroyed pools only.
1469 The
1470 .Fl f
1471 option is also required.
1472 .It Fl f
1473 Forces import, even if the pool appears to be potentially active.
1474 .It Fl F
1475 Recovery mode for a non-importable pool.
1476 Attempt to return the pool to an importable state by discarding the last few
1477 transactions.
1478 Not all damaged pools can be recovered by using this option.
1479 If successful, the data from the discarded transactions is irretrievably lost.
1480 This option is ignored if the pool is importable or already imported.
1481 .It Fl l
1482 Indicates that this command will request encryption keys for all encrypted
1483 datasets it attempts to mount as it is bringing the pool online. Note that if
1484 any datasets have a
1485 .Sy keylocation
1486 of
1487 .Sy prompt
1488 this command will block waiting for the keys to be entered. Without this flag
1489 encrypted datasets will be left unavailable until the keys are loaded.
1490 .It Fl m
1491 Allows a pool to import when there is a missing log device.
1492 Recent transactions can be lost because the log device will be discarded.
1493 .It Fl n
1494 Used with the
1495 .Fl F
1496 recovery option.
1497 Determines whether a non-importable pool can be made importable again, but does
1498 not actually perform the pool recovery.
1499 For more details about pool recovery mode, see the
1500 .Fl F
1501 option, above.
1502 .It Fl N
1503 Import the pool without mounting any file systems.
1504 .It Fl o Ar mntopts
1505 Comma-separated list of mount options to use when mounting datasets within the
1506 pool.
1507 See
1508 .Xr zfs 8
1509 for a description of dataset properties and mount options.
1510 .It Fl o Ar property Ns = Ns Ar value
1511 Sets the specified property on the imported pool.
1512 See the
1513 .Sx Properties
1514 section for more information on the available pool properties.
1515 .It Fl R Ar root
1516 Sets the
1517 .Sy cachefile
1518 property to
1519 .Sy none
1520 and the
1521 .Sy altroot
1522 property to
1523 .Ar root .
1524 .It Fl -rewind-to-checkpoint
1525 Rewinds pool to the checkpointed state.
1526 Once the pool is imported with this flag there is no way to undo the rewind.
1527 All changes and data that were written after the checkpoint are lost!
1528 The only exception is when the
1529 .Sy readonly
1530 mounting option is enabled.
1531 In this case, the checkpointed state of the pool is opened and an
1532 administrator can see what the pool would look like if they were
1533 to fully rewind.
1534 .It Fl s
1535 Scan using the default search path; the libblkid cache will not be
1536 consulted. A custom search path may be specified by setting the
1537 ZPOOL_IMPORT_PATH environment variable.
1538 .It Fl X
1539 Used with the
1540 .Fl F
1541 recovery option. Determines whether extreme
1542 measures to find a valid txg should take place. This allows the pool to
1543 be rolled back to a txg which is no longer guaranteed to be consistent.
1544 Pools imported at an inconsistent txg may contain uncorrectable
1545 checksum errors. For more details about pool recovery mode, see the
1546 .Fl F
1547 option, above. WARNING: This option can be extremely hazardous to the
1548 health of your pool and should only be used as a last resort.
1549 .It Fl T
1550 Specify the txg to use for rollback. Implies
1551 .Fl FX .
1552 For more details
1553 about pool recovery mode, see the
1554 .Fl X
1555 option, above. WARNING: This option can be extremely hazardous to the
1556 health of your pool and should only be used as a last resort.
1557 .El
1558 .It Xo
1559 .Nm
1560 .Cm import
1561 .Op Fl Dflm
1562 .Op Fl F Oo Fl n Oc Oo Fl t Oc Oo Fl T Oc Oo Fl X Oc
1563 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns device
1564 .Op Fl o Ar mntopts
1565 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1566 .Op Fl R Ar root
1567 .Op Fl s
1568 .Ar pool Ns | Ns Ar id
1569 .Op Ar newpool
1570 .Xc
1571 Imports a specific pool.
1572 A pool can be identified by its name or the numeric identifier.
1573 If
1574 .Ar newpool
1575 is specified, the pool is imported using the name
1576 .Ar newpool .
1577 Otherwise, it is imported with the same name as its exported name.
1578 .Pp
1579 If a device is removed from a system without running
1580 .Nm zpool Cm export
1581 first, the device appears as potentially active.
1582 It cannot be determined if this was a failed export, or whether the device is
1583 really in use from another host.
1584 To import a pool in this state, the
1585 .Fl f
1586 option is required.
1587 .Bl -tag -width Ds
1588 .It Fl c Ar cachefile
1589 Reads configuration from the given
1590 .Ar cachefile
1591 that was created with the
1592 .Sy cachefile
1593 pool property.
1594 This
1595 .Ar cachefile
1596 is used instead of searching for devices.
1597 .It Fl d Ar dir Ns | Ns Ar device
1598 Uses
1599 .Ar device
1600 or searches for devices or files in
1601 .Ar dir .
1602 The
1603 .Fl d
1604 option can be specified multiple times.
1605 This option is incompatible with the
1606 .Fl c
1607 option.
1608 .It Fl D
1609 Imports a destroyed pool.
1610 The
1611 .Fl f
1612 option is also required.
1613 .It Fl f
1614 Forces import, even if the pool appears to be potentially active.
1615 .It Fl F
1616 Recovery mode for a non-importable pool.
1617 Attempt to return the pool to an importable state by discarding the last few
1618 transactions.
1619 Not all damaged pools can be recovered by using this option.
1620 If successful, the data from the discarded transactions is irretrievably lost.
1621 This option is ignored if the pool is importable or already imported.
1622 .It Fl l
1623 Indicates that this command will request encryption keys for all encrypted
1624 datasets it attempts to mount as it is bringing the pool online. Note that if
1625 any datasets have a
1626 .Sy keylocation
1627 of
1628 .Sy prompt ,
1629 this command will block waiting for the keys to be entered. Without this flag,
1630 encrypted datasets will be left unavailable until the keys are loaded.
1631 .It Fl m
1632 Allows a pool to import when there is a missing log device.
1633 Recent transactions can be lost because the log device will be discarded.
1634 .It Fl n
1635 Used with the
1636 .Fl F
1637 recovery option.
1638 Determines whether a non-importable pool can be made importable again, but does
1639 not actually perform the pool recovery.
1640 For more details about pool recovery mode, see the
1641 .Fl F
1642 option, above.
1643 .It Fl o Ar mntopts
1644 Comma-separated list of mount options to use when mounting datasets within the
1645 pool.
1646 See
1647 .Xr zfs 8
1648 for a description of dataset properties and mount options.
1649 .It Fl o Ar property Ns = Ns Ar value
1650 Sets the specified property on the imported pool.
1651 See the
1652 .Sx Properties
1653 section for more information on the available pool properties.
1654 .It Fl R Ar root
1655 Sets the
1656 .Sy cachefile
1657 property to
1658 .Sy none
1659 and the
1660 .Sy altroot
1661 property to
1662 .Ar root .
1663 .It Fl s
1664 Scan using the default search path; the libblkid cache will not be
1665 consulted. A custom search path may be specified by setting the
1666 ZPOOL_IMPORT_PATH environment variable.
1667 .It Fl X
1668 Used with the
1669 .Fl F
1670 recovery option. Determines whether extreme
1671 measures to find a valid txg should take place. This allows the pool to
1672 be rolled back to a txg which is no longer guaranteed to be consistent.
1673 Pools imported at an inconsistent txg may contain uncorrectable
1674 checksum errors. For more details about pool recovery mode, see the
1675 .Fl F
1676 option, above. WARNING: This option can be extremely hazardous to the
1677 health of your pool and should only be used as a last resort.
1678 .It Fl T
1679 Specify the txg to use for rollback. Implies
1680 .Fl FX .
1681 For more details
1682 about pool recovery mode, see the
1683 .Fl X
1684 option, above. WARNING: This option can be extremely hazardous to the
1685 health of your pool and should only be used as a last resort.
1686 .It Fl t
1687 Used with
1688 .Sy newpool .
1689 Specifies that
1690 .Sy newpool
1691 is temporary. Temporary pool names last until export. Ensures that
1692 the original pool name will be used in all label updates and therefore
1693 is retained upon export.
1694 Also sets -o cachefile=none unless the cachefile property is explicitly specified.
1695 .El
1696 .It Xo
1697 .Nm
1698 .Cm initialize
1699 .Op Fl c | Fl s
1700 .Ar pool
1701 .Op Ar device Ns ...
1702 .Xc
1703 Begins initializing by writing to all unallocated regions on the specified
1704 devices, or all eligible devices in the pool if no individual devices are
1705 specified.
1706 Only leaf data or log devices may be initialized.
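.Pp
For example, initialization of two devices in a pool could be started and
later cancelled as follows (the pool and device names are illustrative):
.Bd -literal
# zpool initialize tank sda sdb
# zpool initialize -c tank
.Ed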
1707 .Bl -tag -width Ds
1708 .It Fl c, -cancel
1709 Cancel initializing on the specified devices, or all eligible devices if none
1710 are specified.
1711 If one or more target devices are invalid or are not currently being
1712 initialized, the command will fail and no cancellation will occur on any device.
1713 .It Fl s, -suspend
1714 Suspend initializing on the specified devices, or all eligible devices if none
1715 are specified.
1716 If one or more target devices are invalid or are not currently being
1717 initialized, the command will fail and no suspension will occur on any device.
1718 Initializing can then be resumed by running
1719 .Nm zpool Cm initialize
1720 with no flags on the relevant target devices.
1721 .El
1722 .It Xo
1723 .Nm
1724 .Cm iostat
1725 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
1726 .Op Fl T Sy u Ns | Ns Sy d
1727 .Op Fl ghHLnpPvy
1728 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
1729 .Op Ar interval Op Ar count
1730 .Xc
1731 Displays logical I/O statistics for the given pools/vdevs. Physical I/Os may
1732 be observed via
1733 .Xr iostat 1 .
1734 If writes are located nearby, they may be merged into a single
1735 larger operation. Additional I/O may be generated depending on the level of
1736 vdev redundancy.
1737 To filter output, you may pass in a list of pools, a pool and list of vdevs
1738 in that pool, or a list of any vdevs from any pool. If no items are specified,
1739 statistics for every pool in the system are shown.
1740 When given an
1741 .Ar interval ,
1742 the statistics are printed every
1743 .Ar interval
1744 seconds until ^C is pressed. If the
1745 .Fl n
1746 flag is specified, the headers are displayed only once; otherwise they are
1747 displayed periodically. If count is specified, the command exits
1748 after count reports are printed. The first report printed is always
1749 the statistics since boot regardless of whether
1750 .Ar interval
1751 and
1752 .Ar count
1753 are passed. However, this behavior can be suppressed with the
1754 .Fl y
1755 flag. Also note that the units of
1756 .Sy K ,
1757 .Sy M ,
1758 .Sy G ...
1759 that are printed in the report are in base 1024. To get the raw
1760 values, use the
1761 .Fl p
1762 flag.
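.Pp
For example, the following would print three reports of current activity for
the pool
.Em tank
at five second intervals, omitting the statistics since boot (the pool name is
illustrative):
.Bd -literal
# zpool iostat -y tank 5 3
.Ed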
1763 .Bl -tag -width Ds
1764 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
1765 Run a script (or scripts) on each vdev and include the output as a new column
1766 in the
1767 .Nm zpool Cm iostat
1768 output. Users can run any script found in their
1769 .Pa ~/.zpool.d
1770 directory or from the system
1771 .Pa /etc/zfs/zpool.d
1772 directory. Script names containing the slash (/) character are not allowed.
1773 The default search path can be overridden by setting the
1774 ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
1775 .Fl c
1776 if they have the ZPOOL_SCRIPTS_AS_ROOT
1777 environment variable set. If a script requires the use of a privileged
1778 command, like
1779 .Xr smartctl 8 ,
1780 then it is recommended that you allow the user access to it in
1781 .Pa /etc/sudoers
1782 or add the user to the
1783 .Pa /etc/sudoers.d/zfs
1784 file.
1785 .Pp
1786 If
1787 .Fl c
1788 is passed without a script name, it prints a list of all scripts.
1789 .Fl c
1790 also sets verbose mode
1791 .No \&( Ns Fl v Ns No \&).
1792 .Pp
1793 Script output should be in the form of "name=value". The column name is
1794 set to "name" and the value is set to "value". Multiple lines can be
1795 used to output multiple columns. The first line of output not in the
1796 "name=value" format is displayed without a column title, and no more
1797 output after that is displayed. This can be useful for printing error
1798 messages. Blank or NULL values are printed as a '-' to make output
1799 awk-able.
1800 .Pp
1801 The following environment variables are set before running each script:
1802 .Bl -tag -width "VDEV_PATH"
1803 .It Sy VDEV_PATH
1804 Full path to the vdev
1805 .El
1806 .Bl -tag -width "VDEV_UPATH"
1807 .It Sy VDEV_UPATH
1808 Underlying path to the vdev (/dev/sd*). For use with device mapper,
1809 multipath, or partitioned vdevs.
1810 .El
1811 .Bl -tag -width "VDEV_ENC_SYSFS_PATH"
1812 .It Sy VDEV_ENC_SYSFS_PATH
1813 The sysfs path to the enclosure for the vdev (if any).
1814 .El
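.Pp
As a minimal sketch, a user-defined script placed in
.Pa ~/.zpool.d
(here named
.Pa upath ,
a hypothetical name) could emit one extra column by printing a single
"name=value" pair:
.Bd -literal
#!/bin/sh
# Print the underlying device path passed in by zpool iostat/status -c.
echo "upath=$VDEV_UPATH"
.Ed
.Pp
Running
.Nm zpool Cm iostat
with
.Fl c Ar upath
would then add an
.Ar upath
column to the output.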
1815 .It Fl T Sy u Ns | Ns Sy d
1816 Display a time stamp.
1817 Specify
1818 .Sy u
1819 for a printed representation of the internal representation of time.
1820 See
1821 .Xr time 2 .
1822 Specify
1823 .Sy d
1824 for standard date format.
1825 See
1826 .Xr date 1 .
1827 .It Fl g
1828 Display vdev GUIDs instead of the normal device names. These GUIDs
1829 can be used in place of device names for the zpool
1830 detach/offline/remove/replace commands.
1831 .It Fl H
1832 Scripted mode. Do not display headers, and separate fields by a
1833 single tab instead of arbitrary space.
1834 .It Fl L
1835 Display real paths for vdevs resolving all symbolic links. This can
1836 be used to look up the current block device name regardless of the
1837 .Pa /dev/disk/
1838 path used to open it.
1839 .It Fl n
1840 Print headers only once instead of at each reporting interval.
1841 .It Fl p
1842 Display numbers in parsable (exact) values. Time values are in
1843 nanoseconds.
1844 .It Fl P
1845 Display full paths for vdevs instead of only the last component of
1846 the path. This can be used in conjunction with the
1847 .Fl L
1848 flag.
1849 .It Fl r
1850 Print request size histograms for the leaf vdev's IO. This includes
1851 histograms of individual IOs (ind) and aggregate IOs (agg). These stats
1852 can be useful for observing how well IO aggregation is working. Note
1853 that TRIM IOs may exceed 16M, but will be counted as 16M.
1854 .It Fl v
1855 Verbose statistics. Reports usage statistics for individual vdevs within the
1856 pool, in addition to the pool-wide statistics.
1857 .It Fl y
1858 Omit statistics since boot.
1859 Normally the first line of output reports the statistics since boot.
1860 This option suppresses that first line of output.
1862 .It Fl w
1863 Display latency histograms:
1864 .Pp
1865 .Ar total_wait :
1866 Total IO time (queuing + disk IO time).
1867 .Ar disk_wait :
1868 Disk IO time (time reading/writing the disk).
1869 .Ar syncq_wait :
1870 Amount of time IO spent in synchronous priority queues. Does not include
1871 disk time.
1872 .Ar asyncq_wait :
1873 Amount of time IO spent in asynchronous priority queues. Does not include
1874 disk time.
1875 .Ar scrub :
1876 Amount of time IO spent in scrub queue. Does not include disk time.
1877 .It Fl l
1878 Include average latency statistics:
1879 .Pp
1880 .Ar total_wait :
1881 Average total IO time (queuing + disk IO time).
1882 .Ar disk_wait :
1883 Average disk IO time (time reading/writing the disk).
1884 .Ar syncq_wait :
1885 Average amount of time IO spent in synchronous priority queues. Does
1886 not include disk time.
1887 .Ar asyncq_wait :
1888 Average amount of time IO spent in asynchronous priority queues.
1889 Does not include disk time.
1890 .Ar scrub :
1891 Average queuing time in scrub queue. Does not include disk time.
1892 .Ar trim :
1893 Average queuing time in trim queue. Does not include disk time.
1894 .It Fl q
1895 Include active queue statistics. Each priority queue has both
1896 pending (
1897 .Ar pend )
1898 and active (
1899 .Ar activ )
1900 IOs. Pending IOs are waiting to
1901 be issued to the disk, and active IOs have been issued to disk and are
1902 waiting for completion. These stats are broken out by priority queue:
1903 .Pp
1904 .Ar syncq_read/write :
1905 Current number of entries in synchronous priority
1906 queues.
1907 .Ar asyncq_read/write :
1908 Current number of entries in asynchronous priority queues.
1909 .Ar scrubq_read :
1910 Current number of entries in scrub queue.
1911 .Ar trimq_write :
1912 Current number of entries in trim queue.
1913 .Pp
1914 All queue statistics are instantaneous measurements of the number of
1915 entries in the queues. If you specify an interval, the measurements
1916 will be sampled from the end of the interval.
1917 .El
1918 .It Xo
1919 .Nm
1920 .Cm labelclear
1921 .Op Fl f
1922 .Ar device
1923 .Xc
1924 Removes ZFS label information from the specified
1925 .Ar device .
1926 The
1927 .Ar device
1928 must not be part of an active pool configuration.
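.Pp
For example, old label information could be removed from an unused disk as
follows (the device name is illustrative):
.Bd -literal
# zpool labelclear /dev/sdb
.Ed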
1929 .Bl -tag -width Ds
1930 .It Fl f
1931 Treat exported or foreign devices as inactive.
1932 .El
1933 .It Xo
1934 .Nm
1935 .Cm list
1936 .Op Fl HgLpPv
1937 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1938 .Op Fl T Sy u Ns | Ns Sy d
1939 .Oo Ar pool Oc Ns ...
1940 .Op Ar interval Op Ar count
1941 .Xc
1942 Lists the given pools along with a health status and space usage.
1943 If no
1944 .Ar pool Ns s
1945 are specified, all pools in the system are listed.
1946 When given an
1947 .Ar interval ,
1948 the information is printed every
1949 .Ar interval
1950 seconds until ^C is pressed.
1951 If
1952 .Ar count
1953 is specified, the command exits after
1954 .Ar count
1955 reports are printed.
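.Pp
For example, the health and space usage of a single pool could be displayed
with a custom set of columns as follows (the pool name is illustrative):
.Bd -literal
# zpool list -o name,size,allocated,free,health tank
.Ed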
1956 .Bl -tag -width Ds
1957 .It Fl g
1958 Display vdev GUIDs instead of the normal device names. These GUIDs
1959 can be used in place of device names for the zpool
1960 detach/offline/remove/replace commands.
1961 .It Fl H
1962 Scripted mode.
1963 Do not display headers, and separate fields by a single tab instead of arbitrary
1964 space.
1965 .It Fl o Ar property
1966 Comma-separated list of properties to display.
1967 See the
1968 .Sx Properties
1969 section for a list of valid properties.
1970 The default list is
1971 .Cm name , size , allocated , free , checkpoint , expandsize , fragmentation ,
1972 .Cm capacity , dedupratio , health , altroot .
1973 .It Fl L
1974 Display real paths for vdevs resolving all symbolic links. This can
1975 be used to look up the current block device name regardless of the
1976 /dev/disk/ path used to open it.
1977 .It Fl p
1978 Display numbers in parsable
1979 .Pq exact
1980 values.
1981 .It Fl P
1982 Display full paths for vdevs instead of only the last component of
1983 the path. This can be used in conjunction with the
1984 .Fl L
1985 flag.
1986 .It Fl T Sy u Ns | Ns Sy d
1987 Display a time stamp.
1988 Specify
1989 .Sy u
1990 for a printed representation of the internal representation of time.
1991 See
1992 .Xr time 2 .
1993 Specify
1994 .Sy d
1995 for standard date format.
1996 See
1997 .Xr date 1 .
1998 .It Fl v
1999 Verbose statistics.
2000 Reports usage statistics for individual vdevs within the pool, in addition to
2001 the pool-wide statistics.
2002 .El
2003 .It Xo
2004 .Nm
2005 .Cm offline
2006 .Op Fl f
2007 .Op Fl t
2008 .Ar pool Ar device Ns ...
2009 .Xc
2010 Takes the specified physical device offline.
2011 While the
2012 .Ar device
2013 is offline, no attempt is made to read or write to the device.
2014 This command is not applicable to spares.
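.Pp
For example, a disk could be taken offline temporarily, so that it returns to
its previous state after a reboot, as follows (the pool and device names are
illustrative):
.Bd -literal
# zpool offline -t tank sda
.Ed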
2015 .Bl -tag -width Ds
2016 .It Fl f
2017 Force fault. Instead of offlining the disk, put it into a faulted
2018 state. The fault will persist across imports unless the
2019 .Fl t
2020 flag was specified.
2021 .It Fl t
2022 Temporary.
2023 Upon reboot, the specified physical device reverts to its previous state.
2024 .El
2025 .It Xo
2026 .Nm
2027 .Cm online
2028 .Op Fl e
2029 .Ar pool Ar device Ns ...
2030 .Xc
2031 Brings the specified physical device online.
2032 This command is not applicable to spares.
2033 .Bl -tag -width Ds
2034 .It Fl e
2035 Expand the device to use all available space.
2036 If the device is part of a mirror or raidz then all devices must be expanded
2037 before the new space will become available to the pool.
2038 .El
2039 .It Xo
2040 .Nm
2041 .Cm reguid
2042 .Ar pool
2043 .Xc
2044 Generates a new unique identifier for the pool.
2045 You must ensure that all devices in this pool are online and healthy before
2046 performing this action.
2047 .It Xo
2048 .Nm
2049 .Cm reopen
2050 .Op Fl n
2051 .Ar pool
2052 .Xc
2053 Reopen all the vdevs associated with the pool.
2054 .Bl -tag -width Ds
2055 .It Fl n
2056 Do not restart an in-progress scrub operation. This is not recommended and can
2057 result in partially resilvered devices unless a second scrub is performed.
2058 .El
2059 .It Xo
2060 .Nm
2061 .Cm remove
2062 .Op Fl np
2063 .Ar pool Ar device Ns ...
2064 .Xc
2065 Removes the specified device from the pool.
2066 This command supports removing hot spare, cache, log, and both mirrored and
2067 non-redundant primary top-level vdevs, including dedup and special vdevs.
2068 When the primary pool storage includes a top-level raidz vdev, only hot spare,
2069 cache, and log devices can be removed.
2070 .Pp
2071 Removing a top-level vdev reduces the total amount of space in the storage pool.
2072 The specified device will be evacuated by copying all allocated space from it to
2073 the other devices in the pool.
2074 In this case, the
2075 .Nm zpool Cm remove
2076 command initiates the removal and returns, while the evacuation continues in
2077 the background.
2078 The removal progress can be monitored with
2079 .Nm zpool Cm status .
2080 If an IO error is encountered during the removal process, it will be
2081 cancelled. The
2082 .Sy device_removal
2083 feature flag must be enabled to remove a top-level vdev, see
2084 .Xr zpool-features 5 .
2085 .Pp
2086 A mirrored top-level device (log or data) can be removed by specifying its
2087 top-level mirror.
2088 Non-log devices or data devices that are part of a mirrored configuration can be removed using
2089 the
2090 .Nm zpool Cm detach
2091 command.
2092 .Bl -tag -width Ds
2093 .It Fl n
2094 Do not actually perform the removal ("no-op").
2095 Instead, print the estimated amount of memory that will be used by the
2096 mapping table after the removal completes.
2097 This is nonzero only for top-level vdevs.
2100 .It Fl p
2101 Used in conjunction with the
2102 .Fl n
2103 flag, displays numbers as parsable (exact) values.
2104 .El
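.Pp
For example, the memory cost of removing a top-level mirror could be estimated
before performing the actual removal as follows (the pool and vdev names are
illustrative):
.Bd -literal
# zpool remove -np tank mirror-1
# zpool remove tank mirror-1
.Ed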
2105 .It Xo
2106 .Nm
2107 .Cm remove
2108 .Fl s
2109 .Ar pool
2110 .Xc
2111 Stops and cancels an in-progress removal of a top-level vdev.
2112 .It Xo
2113 .Nm
2114 .Cm replace
2115 .Op Fl f
2116 .Op Fl o Ar property Ns = Ns Ar value
2117 .Ar pool Ar device Op Ar new_device
2118 .Xc
2119 Replaces
2120 .Ar device
2121 with
2122 .Ar new_device .
2123 This is equivalent to attaching
2124 .Ar new_device ,
2125 waiting for it to resilver, and then detaching
2126 .Ar device .
2127 .Pp
2128 The size of
2129 .Ar new_device
2130 must be greater than or equal to the minimum size of all the devices in a mirror
2131 or raidz configuration.
2132 .Pp
2133 .Ar new_device
2134 is required if the pool is not redundant.
2135 If
2136 .Ar new_device
2137 is not specified, it defaults to
2138 .Ar device .
2139 This form of replacement is useful after an existing disk has failed and has
2140 been physically replaced.
2141 In this case, the new disk may have the same
2142 .Pa /dev
2143 path as the old device, even though it is actually a different disk.
2144 ZFS recognizes this.
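.Pp
For example, after a failed disk has been physically replaced by a new disk in
the same slot, it could be replaced in the pool without naming a new device
(the pool and device names are illustrative):
.Bd -literal
# zpool replace tank sda
.Ed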
2145 .Bl -tag -width Ds
2146 .It Fl f
2147 Forces use of
2148 .Ar new_device ,
2149 even if it appears to be in use.
2150 Not all devices can be overridden in this manner.
2151 .It Fl o Ar property Ns = Ns Ar value
2152 Sets the given pool properties. See the
2153 .Sx Properties
2154 section for a list of valid properties that can be set.
2155 The only property supported at the moment is
2156 .Sy ashift .
2157 .El
2158 .It Xo
2159 .Nm
2160 .Cm scrub
2161 .Op Fl s | Fl p
2162 .Ar pool Ns ...
2163 .Xc
2164 Begins a scrub or resumes a paused scrub.
2165 The scrub examines all data in the specified pools to verify that it checksums
2166 correctly.
2167 For replicated
2168 .Pq mirror or raidz
2169 devices, ZFS automatically repairs any damage discovered during the scrub.
2170 The
2171 .Nm zpool Cm status
2172 command reports the progress of the scrub and summarizes the results of the
2173 scrub upon completion.
2174 .Pp
2175 Scrubbing and resilvering are very similar operations.
2176 The difference is that resilvering only examines data that ZFS knows to be out
2177 of date
2178 .Po
2179 for example, when attaching a new device to a mirror or replacing an existing
2180 device
2181 .Pc ,
2182 whereas scrubbing examines all data to discover silent errors due to hardware
2183 faults or disk failure.
2184 .Pp
2185 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
2186 one at a time.
2187 If a scrub is paused, the
2188 .Nm zpool Cm scrub
2189 command resumes it.
2190 If a resilver is in progress, ZFS does not allow a scrub to be started until the
2191 resilver completes.
2192 .Pp
2193 Note that, due to changes in pool data on a live system, it is possible for
2194 scrubs to progress slightly beyond 100% completion. During this period, no
2195 completion time estimate will be provided.
2196 .Bl -tag -width Ds
2197 .It Fl s
2198 Stop scrubbing.
2201 .It Fl p
2202 Pause scrubbing.
2203 Scrub pause state and progress are periodically synced to disk.
2204 If the system is restarted or pool is exported during a paused scrub,
2205 even after import, scrub will remain paused until it is resumed.
2206 Once resumed the scrub will pick up from the place where it was last
2207 checkpointed to disk.
2208 To resume a paused scrub, issue
2209 .Nm zpool Cm scrub
2210 again.
2211 .El
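.Pp
For example, a scrub could be started, paused, and later resumed as follows
(the pool name is illustrative):
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed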
2212 .It Xo
2213 .Nm
2214 .Cm resilver
2215 .Ar pool Ns ...
2216 .Xc
2217 Starts a resilver. If an existing resilver is already running, it will be
2218 restarted from the beginning. Any drives that were scheduled for a deferred
2219 resilver will be added to the new one.
2220 .It Xo
2221 .Nm
2222 .Cm trim
2223 .Op Fl d
.Op Fl r Ar rate
2224 .Op Fl c | Fl s
2225 .Ar pool
2226 .Op Ar device Ns ...
2227 .Xc
2228 Initiates an immediate on-demand TRIM operation for all of the free space in
2229 a pool. This operation informs the underlying storage devices of all blocks
2230 in the pool which are no longer allocated and allows thinly provisioned
2231 devices to reclaim the space.
2232 .Pp
2233 A manual on-demand TRIM operation can be initiated irrespective of the
2234 .Sy autotrim
2235 pool property setting. See the documentation for the
2236 .Sy autotrim
2237 property above for the types of vdev devices which can be trimmed.
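.Pp
For example, an on-demand TRIM could be started for all free space in a pool,
or restricted to a single device, as follows (the pool and device names are
illustrative):
.Bd -literal
# zpool trim tank
# zpool trim tank sda
.Ed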
2238 .Bl -tag -width Ds
2239 .It Fl d, -secure
2240 Causes a secure TRIM to be initiated. When performing a secure TRIM, the
2241 device guarantees that data stored on the trimmed blocks has been erased.
2242 This requires support from the device and is not supported by all SSDs.
2243 .It Fl r, -rate Ar rate
2244 Controls the rate at which the TRIM operation progresses. Without this
2245 option TRIM is executed as quickly as possible. The rate, expressed in bytes
2246 per second, is applied on a per-vdev basis and may be set differently for
2247 each leaf vdev.
2248 .It Fl c, -cancel
2249 Cancel trimming on the specified devices, or all eligible devices if none
2250 are specified.
2251 If one or more target devices are invalid or are not currently being
2252 trimmed, the command will fail and no cancellation will occur on any device.
2253 .It Fl s, -suspend
2254 Suspend trimming on the specified devices, or all eligible devices if none
2255 are specified.
2256 If one or more target devices are invalid or are not currently being
2257 trimmed, the command will fail and no suspension will occur on any device.
2258 Trimming can then be resumed by running
2259 .Nm zpool Cm trim
2260 with no flags on the relevant target devices.
2261 .El
2262 .It Xo
2263 .Nm
2264 .Cm set
2265 .Ar property Ns = Ns Ar value
2266 .Ar pool
2267 .Xc
2268 Sets the given property on the specified pool.
2269 See the
2270 .Sx Properties
2271 section for more information on what properties can be set and acceptable
2272 values.
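.Pp
For example, automatic TRIM could be enabled on a pool as follows (the pool
name is illustrative):
.Bd -literal
# zpool set autotrim=on tank
.Ed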
2273 .It Xo
2274 .Nm
2275 .Cm split
2276 .Op Fl gLlnP
2277 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
2278 .Op Fl R Ar root
2279 .Ar pool newpool
2280 .Op Ar device ...
2281 .Xc
2282 Splits devices off
2283 .Ar pool
2284 creating
2285 .Ar newpool .
2286 All vdevs in
2287 .Ar pool
2288 must be mirrors and the pool must not be in the process of resilvering.
2289 At the time of the split,
2290 .Ar newpool
2291 will be a replica of
2292 .Ar pool .
2293 By default, the
2294 last device in each mirror is split from
2295 .Ar pool
2296 to create
2297 .Ar newpool .
2298 .Pp
2299 The optional device specification causes the specified device(s) to be
2300 included in the new
2301 .Ar pool
2302 and, should any devices remain unspecified,
2303 the last device in each mirror is used, as it would be by default.
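.Pp
For example, the expected configuration of a split could be previewed before
actually splitting the pool as follows (the pool names are illustrative):
.Bd -literal
# zpool split -n tank tank2
# zpool split tank tank2
.Ed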
2304 .Bl -tag -width Ds
2305 .It Fl g
2306 Display vdev GUIDs instead of the normal device names. These GUIDs
2307 can be used in place of device names for the zpool
2308 detach/offline/remove/replace commands.
2309 .It Fl L
2310 Display real paths for vdevs resolving all symbolic links. This can
2311 be used to look up the current block device name regardless of the
2312 .Pa /dev/disk/
2313 path used to open it.
2314 .It Fl l
2315 Indicates that this command will request encryption keys for all encrypted
2316 datasets it attempts to mount as it is bringing the new pool online. Note that
2317 if any datasets have a
2318 .Sy keylocation
2319 of
2320 .Sy prompt ,
2321 this command will block waiting for the keys to be entered. Without this flag,
2322 encrypted datasets will be left unavailable until the keys are loaded.
2323 .It Fl n
2324 Do a dry run; do not actually perform the split.
2325 Print out the expected configuration of
2326 .Ar newpool .
2327 .It Fl P
2328 Display full paths for vdevs instead of only the last component of
2329 the path. This can be used in conjunction with the
2330 .Fl L
2331 flag.
2332 .It Fl o Ar property Ns = Ns Ar value
2333 Sets the specified property for
2334 .Ar newpool .
2335 See the
2336 .Sx Properties
2337 section for more information on the available pool properties.
2338 .It Fl R Ar root
2339 Set
2340 .Sy altroot
2341 for
2342 .Ar newpool
2343 to
2344 .Ar root
2345 and automatically import it.
2346 .El
2347 .It Xo
2348 .Nm
2349 .Cm status
2350 .Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2351 .Op Fl DigLpPstvx
2352 .Op Fl T Sy u Ns | Ns Sy d
2353 .Oo Ar pool Oc Ns ...
2354 .Op Ar interval Op Ar count
2355 .Xc
2356 Displays the detailed health status for the given pools.
2357 If no
2358 .Ar pool
2359 is specified, then the status of each pool in the system is displayed.
2360 For more information on pool and device health, see the
2361 .Sx Device Failure and Recovery
2362 section.
2363 .Pp
2364 If a scrub or resilver is in progress, this command reports the percentage done
2365 and the estimated time to completion.
2366 Both of these are only approximate, because the amount of data in the pool and
2367 the other workloads on the system can change.
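.Pp
For example, the following would show only pools with problems, and verbose
error information for one pool (the pool name is illustrative):
.Bd -literal
# zpool status -x
# zpool status -v tank
.Ed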
2368 .Bl -tag -width Ds
2369 .It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
2370 Run a script (or scripts) on each vdev and include the output as a new column
2371 in the
2372 .Nm zpool Cm status
2373 output. See the
2374 .Fl c
2375 option of
2376 .Nm zpool Cm iostat
2377 for complete details.
2378 .It Fl i
2379 Display vdev initialization status.
2380 .It Fl g
2381 Display vdev GUIDs instead of the normal device names. These GUIDs
2382 can be used in place of device names for the zpool
2383 detach/offline/remove/replace commands.
2384 .It Fl L
2385 Display real paths for vdevs resolving all symbolic links. This can
2386 be used to look up the current block device name regardless of the
2387 .Pa /dev/disk/
2388 path used to open it.
2389 .It Fl p
2390 Display numbers in parsable (exact) values.
2391 .It Fl P
2392 Display full paths for vdevs instead of only the last component of
2393 the path. This can be used in conjunction with the
2394 .Fl L
2395 flag.
2396 .It Fl D
2397 Display a histogram of deduplication statistics, showing the allocated
2398 .Pq physically present on disk
2399 and referenced
2400 .Pq logically referenced in the pool
2401 block counts and sizes by reference count.
2402 .It Fl s
2403 Display the number of leaf vdev slow IOs. This is the number of IOs that
2404 didn't complete in \fBzio_slow_io_ms\fR milliseconds (default 30 seconds).
2405 This does not necessarily mean the IOs failed to complete, just that they took
2406 an unreasonably long amount of time. This may indicate a problem with the
2407 underlying storage.
2408 .It Fl t
2409 Display vdev TRIM status.
2410 .It Fl T Sy u Ns | Ns Sy d
2411 Display a time stamp.
2412 Specify
2413 .Sy u
2414 for a printed representation of the internal representation of time.
2415 See
2416 .Xr time 2 .
2417 Specify
2418 .Sy d
2419 for standard date format.
2420 See
2421 .Xr date 1 .
2422 .It Fl v
2423 Displays verbose data error information, printing out a complete list of all
2424 data errors since the last complete pool scrub.
2425 .It Fl x
2426 Only display status for pools that are exhibiting errors or are otherwise
2427 unavailable.
2428 Warnings about pools not using the latest on-disk format will not be included.
2429 .El
2430 .It Xo
2431 .Nm
2432 .Cm sync
2433 .Op Ar pool ...
2434 .Xc
2435 This command forces all in-core dirty data to be written to the primary
2436 pool storage and not the ZIL. It will also update administrative
2437 information including quota reporting. Without arguments,
2438 .Sy zpool sync
2439 will sync all pools on the system. Otherwise, it will sync only the
2440 specified pool(s).
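.Pp
For example, a single pool could be synced as follows (the pool name is
illustrative):
.Bd -literal
# zpool sync tank
.Ed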
2441 .It Xo
2442 .Nm
2443 .Cm upgrade
2444 .Xc
2445 Displays pools which do not have all supported features enabled and pools
2446 formatted using a legacy ZFS version number.
2447 These pools can continue to be used, but some features may not be available.
2448 Use
2449 .Nm zpool Cm upgrade Fl a
2450 to enable all features on all pools.
2451 .It Xo
2452 .Nm
2453 .Cm upgrade
2454 .Fl v
2455 .Xc
2456 Displays legacy ZFS versions supported by the current software.
2457 See
2458 .Xr zpool-features 5
2459 for a description of the feature flags supported by the current software.
2460 .It Xo
2461 .Nm
2462 .Cm upgrade
2463 .Op Fl V Ar version
2464 .Fl a Ns | Ns Ar pool Ns ...
2465 .Xc
2466 Enables all supported features on the given pool.
2467 Once this is done, the pool will no longer be accessible on systems that do not
2468 support feature flags.
2469 See
2470 .Xr zpool-features 5
2471 for details on compatibility with systems that support feature flags, but do not
2472 support all features enabled on the pool.
2473 .Bl -tag -width Ds
2474 .It Fl a
2475 Enables all supported features on all pools.
2476 .It Fl V Ar version
2477 Upgrade to the specified legacy version.
2478 If the
2479 .Fl V
2480 flag is specified, no features will be enabled on the pool.
2481 This option can only be used to increase the version number up to the last
2482 supported legacy version number.
2483 .El
2484 .It Xo
2485 .Nm
2486 .Cm version
2487 .Xc
2488 Displays the software version of the
2489 .Nm
2490 userland utility and the zfs kernel module.
2491 .El
2492 .Sh EXIT STATUS
2493 The following exit values are returned:
2494 .Bl -tag -width Ds
2495 .It Sy 0
2496 Successful completion.
2497 .It Sy 1
2498 An error occurred.
2499 .It Sy 2
2500 Invalid command line options were specified.
2501 .El
2502 .Sh EXAMPLES
2503 .Bl -tag -width Ds
2504 .It Sy Example 1 No Creating a RAID-Z Storage Pool
2505 The following command creates a pool with a single raidz root vdev that
2506 consists of six disks.
2507 .Bd -literal
2508 # zpool create tank raidz sda sdb sdc sdd sde sdf
2509 .Ed
2510 .It Sy Example 2 No Creating a Mirrored Storage Pool
2511 The following command creates a pool with two mirrors, where each mirror
2512 contains two disks.
2513 .Bd -literal
2514 # zpool create tank mirror sda sdb mirror sdc sdd
2515 .Ed
2516 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
2517 The following command creates an unmirrored pool using two disk partitions.
2518 .Bd -literal
2519 # zpool create tank sda1 sdb2
2520 .Ed
2521 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2522 The following command creates an unmirrored pool using files.
2523 While not recommended, a pool based on files can be useful for experimental
2524 purposes.
2525 .Bd -literal
2526 # zpool create tank /path/to/file/a /path/to/file/b
2527 .Ed
2528 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2529 The following command adds two mirrored disks to the pool
2530 .Em tank ,
2531 assuming the pool is already made up of two-way mirrors.
2532 The additional space is immediately available to any datasets within the pool.
2533 .Bd -literal
2534 # zpool add tank mirror sda sdb
2535 .Ed
2536 .It Sy Example 6 No Listing Available ZFS Storage Pools
2537 The following command lists all available pools on the system.
2538 In this case, the pool
2539 .Em zion
2540 is faulted due to a missing device.
2541 The results from this command are similar to the following:
2542 .Bd -literal
2543 # zpool list
2544 NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
2545 rpool 19.9G 8.43G 11.4G - 33% 42% 1.00x ONLINE -
2546 tank 61.5G 20.0G 41.5G - 48% 32% 1.00x ONLINE -
2547 zion - - - - - - - FAULTED -
2548 .Ed
2549 .It Sy Example 7 No Destroying a ZFS Storage Pool
2550 The following command destroys the pool
2551 .Em tank
2552 and any datasets contained within.
2553 .Bd -literal
2554 # zpool destroy -f tank
2555 .Ed
2556 .It Sy Example 8 No Exporting a ZFS Storage Pool
2557 The following command exports the devices in pool
2558 .Em tank
2559 so that they can be relocated or later imported.
2560 .Bd -literal
2561 # zpool export tank
2562 .Ed
2563 .It Sy Example 9 No Importing a ZFS Storage Pool
2564 The following command displays available pools, and then imports the pool
2565 .Em tank
2566 for use on the system.
2567 The results from this command are similar to the following:
2568 .Bd -literal
2569 # zpool import
2570 pool: tank
2571 id: 15451357997522795478
2572 state: ONLINE
2573 action: The pool can be imported using its name or numeric identifier.
2574 config:
2575
2576 tank ONLINE
2577 mirror ONLINE
2578 sda ONLINE
2579 sdb ONLINE
2580
2581 # zpool import tank
2582 .Ed
2583 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
2584 The following command upgrades all ZFS Storage pools to the current version of
2585 the software.
2586 .Bd -literal
2587 # zpool upgrade -a
2588 This system is currently running ZFS version 2.
2589 .Ed
2590 .It Sy Example 11 No Managing Hot Spares
2591 The following command creates a new pool with an available hot spare:
2592 .Bd -literal
2593 # zpool create tank mirror sda sdb spare sdc
2594 .Ed
2595 .Pp
2596 If one of the disks were to fail, the pool would be reduced to the degraded
2597 state.
2598 The failed device can be replaced using the following command:
2599 .Bd -literal
2600 # zpool replace tank sda sdd
2601 .Ed
2602 .Pp
2603 Once the data has been resilvered, the spare is automatically removed and is
2604 made available for use should another device fail.
2605 The hot spare can be permanently removed from the pool using the following
2606 command:
2607 .Bd -literal
2608 # zpool remove tank sdc
2609 .Ed
2610 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
2611 The following command creates a ZFS storage pool consisting of two two-way
2612 mirrors and mirrored log devices:
2613 .Bd -literal
2614 # zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
2615 sde sdf
2616 .Ed
2617 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2618 The following command adds two disks for use as cache devices to a ZFS storage
2619 pool:
2620 .Bd -literal
2621 # zpool add pool cache sdc sdd
2622 .Ed
2623 .Pp
2624 Once added, the cache devices gradually fill with content from main memory.
2625 Depending on the size of your cache devices, it could take over an hour for
2626 them to fill.
2627 Capacity and reads can be monitored using the
2628 .Cm iostat
2629 option as follows:
2630 .Bd -literal
2631 # zpool iostat -v pool 5
2632 .Ed
2633 .It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
2634 The following commands remove the mirrored log device
2635 .Sy mirror-2
2636 and mirrored top-level data device
2637 .Sy mirror-1 .
2638 .Pp
2639 Given this configuration:
2640 .Bd -literal
2641 pool: tank
2642 state: ONLINE
2643 scrub: none requested
2644 config:
2645
2646 NAME STATE READ WRITE CKSUM
2647 tank ONLINE 0 0 0
2648 mirror-0 ONLINE 0 0 0
2649 sda ONLINE 0 0 0
2650 sdb ONLINE 0 0 0
2651 mirror-1 ONLINE 0 0 0
2652 sdc ONLINE 0 0 0
2653 sdd ONLINE 0 0 0
2654 logs
2655 mirror-2 ONLINE 0 0 0
2656 sde ONLINE 0 0 0
2657 sdf ONLINE 0 0 0
2658 .Ed
2659 .Pp
2660 The command to remove the mirrored log
2661 .Sy mirror-2
2662 is:
2663 .Bd -literal
2664 # zpool remove tank mirror-2
2665 .Ed
2666 .Pp
2667 The command to remove the mirrored data
2668 .Sy mirror-1
2669 is:
2670 .Bd -literal
2671 # zpool remove tank mirror-1
2672 .Ed
2673 .It Sy Example 15 No Displaying expanded space on a device
2674 The following command displays the detailed information for the pool
2675 .Em data .
2676 This pool is composed of a single raidz vdev where one of its devices
2677 increased its capacity by 10GB.
2678 In this example, the pool will not be able to utilize this extra capacity until
2679 all the devices under the raidz vdev have been expanded.
2680 .Bd -literal
2681 # zpool list -v data
2682 NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
2683 data 23.9G 14.6G 9.30G - 48% 61% 1.00x ONLINE -
2684 raidz1 23.9G 14.6G 9.30G - 48%
2685 sda - - - - -
2686 sdb - - - 10G -
2687 sdc - - - - -
2688 .Ed
2689 .It Sy Example 16 No Adding output columns
2690 Additional columns can be added to the
2691 .Nm zpool Cm status
2692 and
2693 .Nm zpool Cm iostat
2694 output with
2695 .Fl c
2696 option.
2697 .Bd -literal
2698 # zpool status -c vendor,model,size
2699 NAME STATE READ WRITE CKSUM vendor model size
2700 tank ONLINE 0 0 0
2701 mirror-0 ONLINE 0 0 0
2702 U1 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
2703 U10 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
2704 U11 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
2705 U12 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
2706 U13 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
2707 U14 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T
2708
2709 # zpool iostat -vc slaves
2710 capacity operations bandwidth
2711 pool alloc free read write read write slaves
2712 ---------- ----- ----- ----- ----- ----- ----- ---------
2713 tank 20.4G 7.23T 26 152 20.7M 21.6M
2714 mirror 20.4G 7.23T 26 152 20.7M 21.6M
2715 U1 - - 0 31 1.46K 20.6M sdb sdff
2716 U10 - - 0 1 3.77K 13.3K sdas sdgw
2717 U11 - - 0 1 288K 13.3K sdat sdgx
2718 U12 - - 0 1 78.4K 13.3K sdau sdgy
2719 U13 - - 0 1 128K 13.3K sdav sdgz
2720 U14 - - 0 1 63.2K 13.3K sdfk sdg
2721 .Ed
2722 .El
2723 .Sh ENVIRONMENT VARIABLES
2724 .Bl -tag -width "ZFS_ABORT"
2725 .It Ev ZFS_ABORT
2726 Cause
2727 .Nm zpool
2728 to dump core on exit for the purposes of running
2729 .Sy ::findleaks .
2730 .El
2731 .Bl -tag -width "ZPOOL_IMPORT_PATH"
2732 .It Ev ZPOOL_IMPORT_PATH
2733 The search path for devices or files to use with the pool. This is a colon-separated list of directories in which
2734 .Nm zpool
2735 looks for device nodes and files.
2736 Similar to the
2737 .Fl d
2738 option in
2739 .Nm zpool import .
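.Pp
For example, device scanning during import could be limited to stable by-id
names as follows (the directory is illustrative):
.Bd -literal
# ZPOOL_IMPORT_PATH=/dev/disk/by-id zpool import -a
.Ed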
2740 .El
2741 .Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2742 .It Ev ZPOOL_VDEV_NAME_GUID
2743 Cause
2744 .Nm zpool
2745 subcommands to output vdev guids by default. This behavior is identical to the
2746 .Nm zpool status -g
2747 command line option.
2748 .El
2749 .Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2750 .It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2751 Cause
2752 .Nm zpool
2753 subcommands to follow links for vdev names by default. This behavior is identical to the
2754 .Nm zpool status -L
2755 command line option.
2756 .El
2757 .Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2758 .It Ev ZPOOL_VDEV_NAME_PATH
2759 Cause
2760 .Nm zpool
2761 subcommands to output full vdev path names by default. This
2762 behavior is identical to the
2763 .Nm zpool status -P
2764 command line option.
2765 .El
2766 .Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2767 .It Ev ZFS_VDEV_DEVID_OPT_OUT
2768 Older ZFS on Linux implementations had issues when attempting to display pool
2769 config VDEV names if a
2770 .Sy devid
2771 NVP value was present in the pool's config.
2772 .Pp
2773 For example, a pool that originated on the illumos platform would have a devid
2774 value in the config and
2775 .Nm zpool status
2776 would fail when listing the config.
2777 This would also be true for future Linux-based pools.
2778 .Pp
2779 A pool can be stripped of any
2780 .Sy devid
2781 values on import or prevented from adding
2782 them on
2783 .Nm zpool create
2784 or
2785 .Nm zpool add
2786 by setting
2787 .Sy ZFS_VDEV_DEVID_OPT_OUT .
2788 .El
2789 .Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2790 .It Ev ZPOOL_SCRIPTS_AS_ROOT
2791 Allow a privileged user to run
2792 .Nm zpool status/iostat
2793 with the
2794 .Fl c
2795 option. Normally, only unprivileged users are allowed to run
2796 .Fl c .
2797 .El
2798 .Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2799 .It Ev ZPOOL_SCRIPTS_PATH
2800 The search path for scripts when running
2801 .Nm zpool status/iostat
2802 with the
2803 .Fl c
2804 option. This is a colon-separated list of directories and overrides the default
2805 .Pa ~/.zpool.d
2806 and
2807 .Pa /etc/zfs/zpool.d
2808 search paths.
2809 .El
2810 .Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2811 .It Ev ZPOOL_SCRIPTS_ENABLED
2812 Allow a user to run
2813 .Nm zpool status/iostat
2814 with the
2815 .Fl c
2816 option. If
2817 .Sy ZPOOL_SCRIPTS_ENABLED
2818 is not set, it is assumed that the user is allowed to run
2819 .Nm zpool status/iostat -c .
2820 .El
2821 .Sh INTERFACE STABILITY
2822 .Sy Evolving
2823 .Sh SEE ALSO
2824 .Xr zfs-events 5 ,
2825 .Xr zfs-module-parameters 5 ,
2826 .Xr zpool-features 5 ,
2827 .Xr zed 8 ,
2828 .Xr zfs 8