.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2013 by Delphix. All rights reserved.
.\" Copyright 2016 Nexenta Systems, Inc.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2017 George Melikov. All Rights Reserved.
.Nd configure ZFS storage pools
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Ar pool Ar device Ns ...
.Ar pool Ar device Ns ...
.Ar pool Ar device Ns ...
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool Ar device Op Ar new_device
.Ar property Ns = Ns Ar value
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Ar device Oc Ns ...
.Oo Fl c Ar SCRIPT Oc
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Oo Ar pool Oc Ns ...
.Fl a Ns | Ns Ar pool Ns ...
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
A block device, typically located under
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
A disk can be specified by a full path, or it can be a shorthand name
.Po the relative portion of the path under
A whole disk can be specified by omitting the slice or partition designation.
When given a whole disk, ZFS automatically labels the disk, if necessary.
The use of files as a backing store is strongly discouraged.
It is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part.
A file must be specified by a full path.
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
failing before data integrity is compromised.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Pq in which data and parity become inconsistent after a power loss .
Data and parity is striped across all disks within a raidz group.
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
vdev type specifies a single-parity raidz group; the
vdev type specifies a double-parity raidz group; and the
vdev type specifies a triple-parity raidz group.
vdev type is an alias for
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
The minimum number of devices in a raidz group is one more than the number of
The recommended number is between 3 and 9 to help increase performance.
A special pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
.Pq or other combinations
A pool can have any number of virtual devices at the top of the configuration
Data is dynamically distributed across all top-level devices to balance data
As new virtual devices are added, ZFS automatically places data on the newly
Virtual devices are specified one at a time on the command line, separated by
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
# zpool create mypool mirror sda sdb mirror sdc sdd
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
A pool's health status is described by one of three states: online, degraded,
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
The health of the top-level vdev, such as mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
The device could be opened, but the contents did not match expected values.
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
The device was explicitly taken offline by the
The device is online and functioning.
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
The device could not be opened.
If a pool is imported when a device was unavailable, then the device will be
identified by a unique identifier instead of its path since the path was never
correct in the first place.
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
ZFS allows devices to be associated with pools as
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
vdev with any number of devices.
# zpool create pool mirror sda sdb spare sdc sdd
Spares can be shared across multiple pools, and can be added with the
command and removed with the
Once a spare replacement is initiated, a new
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
If a pool has a shared spare that is currently being used, the pool can not be
exported since other pools may use this shared spare, which may lead to
potential data corruption.
An in-progress spare replacement can be canceled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
Spares cannot replace log devices.
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
# zpool create pool sda sdb log sdc
Multiple log devices can also be specified, and they can be mirrored.
section for an example of mirroring multiple log devices.
Log devices can be added, replaced, attached, detached, and imported and
exported as part of the larger pool.
Mirrored log devices can be removed by specifying the top-level mirror for the
Devices can be added to a storage pool as
These devices provide an additional layer of caching between main memory and
For read-heavy workloads, where the working set size is much larger than what
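As an illustrative sketch (pool and device names below are placeholders, not taken from this page), a double-parity raidz group within the recommended size range could be created with:

```shell
# Create a pool named "tank" with one raidz2 top-level vdev of six disks;
# such a group can survive any two of the six devices failing.
zpool create tank raidz2 sda sdb sdc sdd sde sdf
```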
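A mirrored log can also be added to an existing pool after creation; a sketch, with placeholder pool and device names:

```shell
# Add a mirrored intent log device to an existing pool
# (pool and device names are placeholders).
zpool add pool log mirror sdc sdd
```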
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read-workloads of mostly static content.
To create a pool with cache devices, specify a
vdev with any number of devices.
# zpool create pool sda sdb cache sdc sdd
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
The content of the cache devices is considered volatile, as is the case with
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
The following are read-only properties:
Amount of storage available within the pool.
This property can also be referred to by its shortened column name,
Percentage of pool space used.
This property can also be referred to by its shortened column name,
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
.Nm zpool Cm online Fl e
This space occurs when a LUN is dynamically expanded.
The amount of fragmentation in the pool.
The amount of free space available in the pool.
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
is the amount of space remaining to be reclaimed.
The current health of the pool.
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
A unique identifier for the pool.
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
Amount of storage space used within the pool.
The space usage properties report actual physical space available to the
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
command takes into account, but the
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
The following property can be set at creation time and import time:
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
is not a persistent property.
It is valid only while the system is up.
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
The following property can be set only at import time:
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
The following properties can be set at creation time and import time, and later
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
(internally referred to as
). Values from 9 to 16, inclusive, are valid; also, the special
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list. I/O operations will be aligned
to the specified size boundaries. Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space vs. performance trade-off. For optimal performance, the pool
sector size should be greater than or equal to the sector size of the
underlying disks. The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
(which is 1<<12 = 4096). When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace). Changing this value will not modify any existing
vdev, not even on disk replacement; however it can be used, for
instance, to replace a dying 512B sectors disk with a newer 4KiB
sectors device: this will probably result in bad performance but at the
same time could prevent loss of data.
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
The default behavior is
This property can also be referred to by its shortened column name,
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
device replacement must be initiated by the administrator by using the
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
This property can also be referred to by its shortened column name,
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths setup by
vdev_id.conf. See the
man page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
man page for more details.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool. This property is
expected to be set mainly by the installation and upgrade programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
creates a temporary pool that is never cached, and the special value
uses the default location.
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
is exported or destroyed, the file is removed.
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
Blocks all I/O access until the device connectivity is recovered and the errors
This is the default behavior.
to any new write I/O requests but allows reads to any of the remaining healthy
Any write requests that have yet to be committed to disk would be blocked.
Prints out a message to the console and generates a system crash dump.
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
The only valid value when setting this property is
to the enabled state.
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
This property can also be referred to by its shortened name,
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
All subcommands that modify state are logged persistently to the pool in their
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
Displays a help message.
.Oo Fl o Ar property Ns = Ns Ar value Oc
Adds the specified virtual devices to the given pool.
specification is described in the
option, and the device checks performed are described in the
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
GUIDs instead of the normal device names. These GUIDs can be used in place of
device names for the zpool detach/offline/remove/replace commands.
Display real paths for
resolving all symbolic links. This can be used to look up the current block
device name regardless of the /dev/disk/ path used to open it.
Displays the configuration that would be used without actually adding the
The actual pool creation can still fail due to insufficient privileges or
Display real paths for
instead of only the last component of the path. This can be used in
conjunction with the -L flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
section for a list of valid properties that can be set. The only property
supported at the moment is ashift.
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
The existing device cannot be part of a raidz configuration.
is not currently part of a mirrored configuration,
automatically transforms into a two-way mirror of
is part of a two-way mirror, attaching
creates a three-way mirror, and so on.
begins to resilver immediately.
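A sketch of the typical use of an alternate root when importing an unknown pool (pool name and path are placeholders):

```shell
# Import a pool for inspection under an alternate root so that all of
# its mount points are confined beneath /mnt.
zpool import -o altroot=/mnt pool
```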
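For the typical case described above, a pool could be created with 4 KiB alignment as follows (pool and device names are placeholders):

```shell
# Create a pool whose vdevs are aligned to 4 KiB sectors (2^12 = 4096),
# e.g. for disks that report 512 B sectors for compatibility reasons.
zpool create -o ashift=12 pool sda sdb
```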
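A sketch of setting the property by hand (the pool and dataset names are placeholders, as this is normally done by installers):

```shell
# Designate the default bootable dataset on the root pool.
zpool set bootfs=rpool/ROOT/default rpool
```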
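For example, a clustering setup might cache the configuration elsewhere and import from that cache later; a sketch, with a placeholder path and pool name:

```shell
# Cache this pool's configuration in a non-default file,
# then import from that cache on another node or boot environment.
zpool set cachefile=/etc/zfs/alternate.cache pool
zpool import -c /etc/zfs/alternate.cache pool
```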
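Selecting a non-default behavior is a one-line property change; a sketch with a placeholder pool name:

```shell
# Keep serving reads (and fail new writes) rather than blocking all
# I/O when the pool suffers a catastrophic failure.
zpool set failmode=continue pool
```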
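A single feature can be enabled on an existing pool like so (the pool name is a placeholder; async_destroy is used as an example feature name):

```shell
# Move one feature from disabled to enabled on an existing pool.
zpool set feature@async_destroy=enabled pool
```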
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
section for a list of valid properties that can be set. The only property
supported at the moment is ashift.
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices is specified, only those errors associated with the
specified device or devices are cleared.
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
Creates a new storage pool containing the virtual devices specified on the
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
are reserved, as are names beginning with the pattern
specification is described in the
The command verifies that each device specified is accessible and not currently
in use by another subsystem.
There are some uses, such as being currently mounted, or specified as the
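Both forms can be sketched as follows (pool and device names are placeholders):

```shell
# Clear error counts for every device in the pool.
zpool clear pool
# Clear error counts for one device only.
zpool clear pool sda
```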
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
The command also checks that the replication strategy for the pool is
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
option is specified, the default mount point is
The mount point must not exist or must be empty, or else the root dataset
This can be overridden with the
By default all supported features are enabled on the new pool unless the
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
for details about feature properties.
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
The mount point must be an absolute path,
For more information on dataset mount points, see
Displays the configuration that would be used without actually creating the
The actual pool creation can still fail due to insufficient privileges or
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
section for a list of valid properties that can be set.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature. See the
section for a list of valid features that can be set.
Value can be either disabled or enabled.
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
for a list of valid properties that can be set.
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
Sets the in-core pool name to
while the on-disk name will be the name specified as the pool name
This will set the default cachefile property to none. This is intended
to handle name space collisions when creating pools for other systems,
such as virtual machines or physical machines whose pools live on network
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
Forces any active datasets contained within the pool to be unmounted.
The operation is refused if there are no other valid replicas of the data.
If the device may be re-added to the pool later on, then consider the
Lists all recent events generated by the ZFS kernel modules. These events
and used to automate administrative tasks such as replacing a failed device
with a hot spare. For more information about the subclasses and event payloads
that can be generated see the
Clear all previous events.
Scripted mode. Do not display headers, and separate fields by a
single tab instead of arbitrary space.
Print the entire payload for each event.
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
Before exporting the pool, all datasets within the pool are unmounted.
A pool can not be exported if it has a shared spare that is currently being
For pools to be portable, you must give the
command whole disks, not just partitions, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
Exports all pools imported on the system.
Forcefully unmount all datasets, using the
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
Retrieves the given list of properties
or all properties if
for the specified storage pool(s).
These properties are displayed with the following fields:
name Name of storage pool
property Property name
value Property value
source Property source, either 'default' or 'local'.
section for more information on the available pool properties.
Do not display headers, and separate fields by a single tab instead of arbitrary
A comma-separated list of columns to display.
.Sy name Ns , Ns Sy property Ns , Ns Sy value Ns , Ns Sy source
is the default value.
Display numbers in parsable (exact) values.
.Oo Ar pool Oc Ns ...
Displays the command history of the specified pool(s) or all pools if no pool is
Displays internally logged ZFS events in addition to user initiated events.
Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
Lists pools available to import.
option is not specified, this command searches for devices in
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, as well as
the vdev layout and current health of the device for each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
option is specified.
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.It Fl c Ar cachefile
Reads configuration from the given
that was created with the
is used instead of searching for devices.
Searches for devices or files in
option can be specified multiple times.
Lists destroyed pools only.
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
option is specified.
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
that was created with the
is used instead of searching for devices.
Searches for devices or files in
option can be specified multiple times.
This option is incompatible with the
Imports destroyed pools only.
option is also required.
Forces import, even if the pool appears to be potentially active.
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
Import the pool without mounting any file systems.
Comma-separated list of mount options to use when mounting datasets within the
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
section for more information on the available pool properties.
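For instance, when two exported pools share a name, one can be imported by its numeric identifier and renamed; a sketch (the identifier and pool names shown are placeholders):

```shell
# Import the pool with this numeric identifier, as reported by
# "zpool import", and give it a new name on this system.
zpool import 6223921996155991199 newpool
```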
1255 Scan using the default search path; the libblkid cache is not
1256 consulted. A custom search path may be specified by setting the
1257 ZPOOL_IMPORT_PATH environment variable.
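For example, to search a specific directory for devices (the directory shown is illustrative):
.Bd -literal
# zpool import -d /dev/disk/by-id
.Ed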
1261 recovery option. Determines whether extreme
1262 measures to find a valid txg should take place. This allows the pool to
1263 be rolled back to a txg which is no longer guaranteed to be consistent.
1264 Pools imported at an inconsistent txg may contain uncorrectable
1265 checksum errors. For more details about pool recovery mode, see the
1267 option, above. WARNING: This option can be extremely hazardous to the
1268 health of your pool and should only be used as a last resort.
1270 Specify the txg to use for rollback. Implies
1273 about pool recovery mode, see the
1275 option, above. WARNING: This option can be extremely hazardous to the
1276 health of your pool and should only be used as a last resort.
1282 .Op Fl F Oo Fl n Oc Oo Fl t Oc Oo Fl T Oc Oo Fl X Oc
1283 .Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
1285 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1288 .Ar pool Ns | Ns Ar id
1291 Imports a specific pool.
1292 A pool can be identified by its name or the numeric identifier.
1295 is specified, the pool is imported using the name
1297 Otherwise, it is imported with the same name as its exported name.
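For example, to import the exported pool
.Ar tank
under the new name
.Ar mytank
(names are illustrative):
.Bd -literal
# zpool import tank mytank
.Ed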
1299 If a device is removed from a system without running
1301 first, the device appears as potentially active.
1302 It cannot be determined if this was a failed export, or whether the device is
1303 really in use from another host.
1304 To import a pool in this state, the
1308 .It Fl c Ar cachefile
1309 Reads configuration from the given
1311 that was created with the
1316 is used instead of searching for devices.
1318 Searches for devices or files in
1322 option can be specified multiple times.
1323 This option is incompatible with the
1327 Imports a destroyed pool.
1330 option is also required.
1332 Forces import, even if the pool appears to be potentially active.
1334 Recovery mode for a non-importable pool.
1335 Attempt to return the pool to an importable state by discarding the last few
1337 Not all damaged pools can be recovered by using this option.
1338 If successful, the data from the discarded transactions is irretrievably lost.
1339 This option is ignored if the pool is importable or already imported.
1341 Allows a pool to import when there is a missing log device.
1342 Recent transactions can be lost because the log device will be discarded.
1347 Determines whether a non-importable pool can be made importable again, but does
1348 not actually perform the pool recovery.
1349 For more details about pool recovery mode, see the
1353 Comma-separated list of mount options to use when mounting datasets within the
1357 for a description of dataset properties and mount options.
1358 .It Fl o Ar property Ns = Ns Ar value
1359 Sets the specified property on the imported pool.
1362 section for more information on the available pool properties.
1373 Scan using the default search path; the libblkid cache is not
1374 consulted. A custom search path may be specified by setting the
1375 ZPOOL_IMPORT_PATH environment variable.
1379 recovery option. Determines whether extreme
1380 measures to find a valid txg should take place. This allows the pool to
1381 be rolled back to a txg which is no longer guaranteed to be consistent.
1382 Pools imported at an inconsistent txg may contain uncorrectable
1383 checksum errors. For more details about pool recovery mode, see the
1385 option, above. WARNING: This option can be extremely hazardous to the
1386 health of your pool and should only be used as a last resort.
1388 Specify the txg to use for rollback. Implies
1391 about pool recovery mode, see the
1393 option, above. WARNING: This option can be extremely hazardous to the
1394 health of your pool and should only be used as a last resort.
1400 is temporary. Temporary pool names last until export. Ensures that
1401 the original pool name will be used in all label updates and is
1402 therefore retained upon export.
1403 Also sets -o cachefile=none when it is not explicitly specified.
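For example, to import the pool
.Ar tank
under the temporary name
.Ar temppool
(names are illustrative):
.Bd -literal
# zpool import -t tank temppool
.Ed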
1408 .Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
1409 .Op Fl T Sy u Ns | Ns Sy d
1411 .Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
1412 .Op Ar interval Op Ar count
1414 Displays I/O statistics for the given pools or vdevs. You can pass in a
1415 list of pools, a pool and a list of vdevs in that pool, or a list of any
1416 vdevs from any pool. If no items are specified, statistics for every
1417 pool in the system are shown.
1420 the statistics are printed every
1422 seconds until ^C is pressed. If count is specified, the command exits
1423 after count reports are printed. The first report printed is always
1424 the statistics since boot regardless of whether
1428 are passed. However, this behavior can be suppressed with the
1430 flag. Also note that the units of
1434 that are printed in the report are in base 1024. To get the raw
1439 .It Fl c Op Ar SCRIPT1 , Ar SCRIPT2 ...
1440 Run a script (or scripts) on each vdev and include the output as a new column
1443 output. Users can run any script found in their
1445 directory or from the system
1446 .Pa /etc/zfs/zpool.d
1447 directory. The default search path can be overridden by setting the
1448 ZPOOL_SCRIPTS_PATH environment variable. A privileged user can run
1450 if they have the ZPOOL_SCRIPTS_AS_ROOT
1451 environment variable set. If a script requires the use of a privileged
1454 , then it is recommended that you allow the user access to it in
1456 or add the user to the
1457 .Pa /etc/sudoers.d/zfs
1462 is passed without a script name, it prints a list of all scripts.
1464 also sets verbose mode (
1467 Script output should be in the form of "name=value". The column name is
1468 set to "name" and the value is set to "value". Multiple lines can be
1469 used to output multiple columns. The first line of output not in the
1470 "name=value" format is displayed without a column title, and no more
1471 output after that is displayed. This can be useful for printing error
1472 messages. Blank or NULL values are printed as a '-' to make output
1475 The following environment variables are set before running each script:
1477 .Bl -tag -width "VDEV_PATH"
1479 Full path to the vdev
1481 .Bl -tag -width "VDEV_UPATH"
1483 Underlying path to the vdev (/dev/sd*). For use with device mapper,
1484 multipath, or partitioned vdevs.
1486 .Bl -tag -width "VDEV_ENC_SYSFS_PATH"
1487 .It Sy VDEV_ENC_SYSFS_PATH
1488 The sysfs path to the enclosure for the vdev (if any).
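For example, a minimal (illustrative) script that uses these variables to add an
"upath" column might be:
.Bd -literal
#!/bin/sh
# Emit one name=value pair per output column
echo "upath=$VDEV_UPATH"
.Ed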
1490 .It Fl T Sy u Ns | Ns Sy d
1491 Display a time stamp.
1494 for a printed representation of the internal representation of time.
1499 for standard date format.
1503 Display vdev GUIDs instead of the normal device names. These GUIDs
1504 can be used in place of device names for the zpool
1505 detach/offline/remove/replace commands.
1507 Scripted mode. Do not display headers, and separate fields by a
1508 single tab instead of arbitrary space.
1510 Display real paths for vdevs resolving all symbolic links. This can
1511 be used to look up the current block device name regardless of the
1513 path used to open it.
1515 Display numbers in parsable (exact) values. Time values are in
1518 Display full paths for vdevs instead of only the last component of
1519 the path. This can be used in conjunction with the
1523 Print request size histograms for the leaf ZIOs. This includes
1524 histograms of individual ZIOs (
1526 and aggregate ZIOs (
1528 These stats can be useful for seeing how well the ZFS IO aggregator is
1529 working. Do not confuse these request size stats with the block layer
1530 requests; it's possible ZIOs can be broken up before being sent to the
1533 Verbose statistics. Reports usage statistics for individual vdevs within the
1534 pool, in addition to the pool-wide statistics.
1538 Include average latency statistics:
1541 Average total IO time (queuing + disk IO time).
1543 Average disk IO time (time reading/writing the disk).
1545 Average amount of time IO spent in synchronous priority queues. Does
1546 not include disk time.
1548 Average amount of time IO spent in asynchronous priority queues.
1549 Does not include disk time.
1551 Average queuing time in scrub queue. Does not include disk time.
1553 Include active queue statistics. Each priority queue has both
1558 IOs. Pending IOs are waiting to
1559 be issued to the disk, and active IOs have been issued to disk and are
1560 waiting for completion. These stats are broken out by priority queue:
1562 .Ar syncq_read/write :
1563 Current number of entries in synchronous priority
1565 .Ar asyncq_read/write :
1566 Current number of entries in asynchronous priority queues.
1568 Current number of entries in scrub queue.
1570 All queue statistics are instantaneous measurements of the number of
1571 entries in the queues. If you specify an interval, the measurements
1572 will be sampled from the end of the interval.
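For example, to show per-vdev latency and queue statistics for the pool
.Ar tank
every five seconds (pool name is illustrative):
.Bd -literal
# zpool iostat -lq -v tank 5
.Ed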
1580 Removes ZFS label information from the specified
1584 must not be part of an active pool configuration.
1587 Treat exported or foreign devices as inactive.
1593 .Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1594 .Op Fl T Sy u Ns | Ns Sy d
1595 .Oo Ar pool Oc Ns ...
1596 .Op Ar interval Op Ar count
1598 Lists the given pools along with a health status and space usage.
1601 are specified, all pools in the system are listed.
1604 the information is printed every
1606 seconds until ^C is pressed.
1609 is specified, the command exits after
1611 reports are printed.
1614 Display vdev GUIDs instead of the normal device names. These GUIDs
1615 can be used in place of device names for the zpool
1616 detach/offline/remove/replace commands.
1619 Do not display headers, and separate fields by a single tab instead of arbitrary
1621 .It Fl o Ar property
1622 Comma-separated list of properties to display.
1625 section for a list of valid properties.
1627 .Sy name, size, alloc, free, fragmentation, expandsize, capacity,
1628 .Sy dedupratio, health, altroot .
1630 Display real paths for vdevs resolving all symbolic links. This can
1631 be used to look up the current block device name regardless of the
1632 /dev/disk/ path used to open it.
1634 Display numbers in parsable
1638 Display full paths for vdevs instead of only the last component of
1639 the path. This can be used in conjunction with the
1641 .It Fl T Sy u Ns | Ns Sy d
1642 Display a time stamp.
1645 for a printed representation of the internal representation of time.
1650 for standard date format.
1655 Reports usage statistics for individual vdevs within the pool, in addition to
1656 the pool-wide statistics.
1663 .Ar pool Ar device Ns ...
1665 Takes the specified physical device offline.
1668 is offline, no attempt is made to read or write to the device.
1669 This command is not applicable to spares.
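For example, to take the device
.Ar sda
in pool
.Ar tank
offline (names are illustrative):
.Bd -literal
# zpool offline tank sda
.Ed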
1672 Force fault. Instead of offlining the disk, put it into a faulted
1673 state. The fault will persist across imports unless the
1678 Upon reboot, the specified physical device reverts to its previous state.
1684 .Ar pool Ar device Ns ...
1686 Brings the specified physical device online.
1687 This command is not applicable to spares or cache devices.
1690 Expand the device to use all available space.
1691 If the device is part of a mirror or raidz then all devices must be expanded
1692 before the new space will become available to the pool.
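For example, to bring
.Ar sda
back online in pool
.Ar tank
and expand it to use all available space (names are illustrative):
.Bd -literal
# zpool online -e tank sda
.Ed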
1699 Generates a new unique identifier for the pool.
1700 You must ensure that all devices in this pool are online and healthy before
1701 performing this action.
1707 Reopen all the vdevs associated with the pool.
1711 .Ar pool Ar device Ns ...
1713 Removes the specified device from the pool.
1714 This command currently only supports removing hot spares, cache, and log
1716 A mirrored log device can be removed by specifying the top-level mirror for the
1718 Non-log devices that are part of a mirrored configuration can be removed using
1722 Non-redundant and raidz devices cannot be removed from a pool.
1727 .Op Fl o Ar property Ns = Ns Ar value
1728 .Ar pool Ar device Op Ar new_device
1734 This is equivalent to attaching
1736 waiting for it to resilver, and then detaching
1741 must be greater than or equal to the minimum size of all the devices in a mirror
1742 or raidz configuration.
1745 is required if the pool is not redundant.
1748 is not specified, it defaults to
1750 This form of replacement is useful after an existing disk has failed and has
1751 been physically replaced.
1752 In this case, the new disk may have the same
1754 path as the old device, even though it is actually a different disk.
1755 ZFS recognizes this.
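For example, after physically replacing a failed disk with a new one at the
same path (names are illustrative):
.Bd -literal
# zpool replace tank sda
.Ed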
1760 even if it appears to be in use.
1761 Not all devices can be overridden in this manner.
1762 .It Fl o Ar property Ns = Ns Ar value
1763 Sets the given pool properties. See the
1765 section for a list of valid properties that can be set.
1766 The only property supported at the moment is
1776 The scrub examines all data in the specified pools to verify that it checksums
1780 devices, ZFS automatically repairs any damage discovered during the scrub.
1783 command reports the progress of the scrub and summarizes the results of the
1784 scrub upon completion.
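For example, to start a scrub of the pool
.Ar tank
and then check on its progress (pool name is illustrative):
.Bd -literal
# zpool scrub tank
# zpool status tank
.Ed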
1786 Scrubbing and resilvering are very similar operations.
1787 The difference is that resilvering only examines data that ZFS knows to be out
1790 for example, when attaching a new device to a mirror or replacing an existing
1793 whereas scrubbing examines all data to discover silent errors due to hardware
1794 faults or disk failure.
1796 Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
1798 If a scrub is already in progress, the
1800 command terminates it and starts a new scrub.
1801 If a resilver is in progress, ZFS does not allow a scrub to be started until the
1810 .Ar property Ns = Ns Ar value
1813 Sets the given property on the specified pool.
1816 section for more information on what properties can be set and acceptable
1822 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
1833 must be mirrors and the pool must not be in the process of resilvering.
1834 At the time of the split,
1836 will be a replica of
1839 last device in each mirror is split from
1844 The optional device specification causes the specified device(s) to be
1847 and, should any devices remain unspecified,
1848 the last device in each mirror is used, as it would be by default.
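For example, to split a new pool
.Ar tank2
from
.Ar tank ,
naming the devices to use for the new pool (names are illustrative):
.Bd -literal
# zpool split tank tank2 sdb sdd
.Ed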
1851 Display vdev GUIDs instead of the normal device names. These GUIDs
1852 can be used in place of device names for the zpool
1853 detach/offline/remove/replace commands.
1855 Display real paths for vdevs resolving all symbolic links. This can
1856 be used to look up the current block device name regardless of the
1858 path used to open it.
1860 Do a dry run; do not actually perform the split.
1861 Print out the expected configuration of
1864 Display full paths for vdevs instead of only the last component of
1865 the path. This can be used in conjunction with the
1867 .It Fl o Ar property Ns = Ns Ar value
1868 Sets the specified property for
1872 section for more information on the available pool properties.
1880 and automatically import it.
1885 .Op Fl c Op Ar SCRIPT1 , Ar SCRIPT2 ...
1887 .Op Fl T Sy u Ns | Ns Sy d
1888 .Oo Ar pool Oc Ns ...
1889 .Op Ar interval Op Ar count
1891 Displays the detailed health status for the given pools.
1894 is specified, then the status of each pool in the system is displayed.
1895 For more information on pool and device health, see the
1896 .Sx Device Failure and Recovery
1899 If a scrub or resilver is in progress, this command reports the percentage done
1900 and the estimated time to completion.
1901 Both of these are only approximate, because the amount of data in the pool and
1902 the other workloads on the system can change.
1904 .It Fl c Op Ar SCRIPT1 , Ar SCRIPT2 ...
1905 Run a script (or scripts) on each vdev and include the output as a new column
1912 for complete details.
1914 Display vdev GUIDs instead of the normal device names. These GUIDs
1915 can be used in place of device names for the zpool
1916 detach/offline/remove/replace commands.
1918 Display real paths for vdevs resolving all symbolic links. This can
1919 be used to look up the current block device name regardless of the
1921 path used to open it.
1923 Display numbers in parsable (exact) values. Time values are in
1926 Display a histogram of deduplication statistics, showing the allocated
1927 .Pq physically present on disk
1929 .Pq logically referenced in the pool
1930 block counts and sizes by reference count.
1931 .It Fl T Sy u Ns | Ns Sy d
1932 Display a time stamp.
1935 for a printed representation of the internal representation of time.
1940 for standard date format.
1944 Displays verbose data error information, printing out a complete list of all
1945 data errors since the last complete pool scrub.
1947 Only display status for pools that are exhibiting errors or are otherwise
1949 Warnings about pools not using the latest on-disk format will not be included.
1956 This command forces all in-core dirty data to be written to the primary
1957 pool storage and not the ZIL. It also updates administrative
1958 information, including quota reporting. Without arguments,
1960 will sync all pools on the system. Otherwise, it will sync only the
1966 Displays pools which do not have all supported features enabled and pools
1967 formatted using a legacy ZFS version number.
1968 These pools can continue to be used, but some features may not be available.
1970 .Nm zpool Cm upgrade Fl a
1971 to enable all features on all pools.
1977 Displays legacy ZFS versions supported by the current software.
1979 .Xr zpool-features 5
1980 for a description of the feature flags supported by the current software.
1985 .Fl a Ns | Ns Ar pool Ns ...
1987 Enables all supported features on the given pool.
1988 Once this is done, the pool will no longer be accessible on systems that do not
1989 support feature flags.
1992 for details on compatibility with systems that support feature flags, but do not
1993 support all features enabled on the pool.
1996 Enables all supported features on all pools.
1998 Upgrade to the specified legacy version.
2001 flag is specified, no features will be enabled on the pool.
2002 This option can only be used to increase the version number up to the last
2003 supported legacy version number.
2007 The following exit values are returned:
2010 Successful completion.
2014 Invalid command line options were specified.
2018 .It Sy Example 1 No Creating a RAID-Z Storage Pool
2019 The following command creates a pool with a single raidz root vdev that
2020 consists of six disks.
2022 # zpool create tank raidz sda sdb sdc sdd sde sdf
2024 .It Sy Example 2 No Creating a Mirrored Storage Pool
2025 The following command creates a pool with two mirrors, where each mirror
2028 # zpool create tank mirror sda sdb mirror sdc sdd
2030 .It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
2031 The following command creates an unmirrored pool using two disk partitions.
2033 # zpool create tank sda1 sdb2
2035 .It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2036 The following command creates an unmirrored pool using files.
2037 While not recommended, a pool based on files can be useful for experimental
2040 # zpool create tank /path/to/file/a /path/to/file/b
2042 .It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2043 The following command adds two mirrored disks to the pool
2045 assuming the pool is already made up of two-way mirrors.
2046 The additional space is immediately available to any datasets within the pool.
2048 # zpool add tank mirror sda sdb
2050 .It Sy Example 6 No Listing Available ZFS Storage Pools
2051 The following command lists all available pools on the system.
2052 In this case, the pool
2054 is faulted due to a missing device.
2055 The results from this command are similar to the following:
2058 NAME    SIZE  ALLOC   FREE  FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
2059 rpool  19.9G  8.43G  11.4G   33%         -    42%  1.00x  ONLINE  -
2060 tank   61.5G  20.0G  41.5G   48%         -    32%  1.00x  ONLINE  -
2061 zion       -      -      -     -         -      -      -  FAULTED -
2063 .It Sy Example 7 No Destroying a ZFS Storage Pool
2064 The following command destroys the pool
2066 and any datasets contained within.
2068 # zpool destroy -f tank
2070 .It Sy Example 8 No Exporting a ZFS Storage Pool
2071 The following command exports the devices in pool
2073 so that they can be relocated or later imported.
2077 .It Sy Example 9 No Importing a ZFS Storage Pool
2078 The following command displays available pools, and then imports the pool
2080 for use on the system.
2081 The results from this command are similar to the following:
2085 id: 15451357997522795478
2087 action: The pool can be imported using its name or numeric identifier.
2097 .It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
2098 The following command upgrades all ZFS Storage pools to the current version of
2102 This system is currently running ZFS version 2.
2104 .It Sy Example 11 No Managing Hot Spares
2105 The following command creates a new pool with an available hot spare:
2107 # zpool create tank mirror sda sdb spare sdc
2110 If one of the disks were to fail, the pool would be reduced to the degraded
2112 The failed device can be replaced using the following command:
2114 # zpool replace tank sda sdd
2117 Once the data has been resilvered, the spare is automatically removed and is
2118 made available for use should another device fail.
2119 The hot spare can be permanently removed from the pool using the following
2122 # zpool remove tank sdc
2124 .It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
2125 The following command creates a ZFS storage pool consisting of two two-way
2126 mirrors and mirrored log devices:
2128 # zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
2131 .It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2132 The following command adds two disks for use as cache devices to a ZFS storage
2135 # zpool add pool cache sdc sdd
2138 Once added, the cache devices gradually fill with content from main memory.
2139 Depending on the size of your cache devices, it could take over an hour for
2141 Capacity and reads can be monitored using the
2145 # zpool iostat -v pool 5
2147 .It Sy Example 14 No Removing a Mirrored Log Device
2148 The following command removes the mirrored log device
2150 Given this configuration:
2154 scrub: none requested
2157         NAME        STATE     READ WRITE CKSUM
2159           mirror-0  ONLINE       0     0     0
2162           mirror-1  ONLINE       0     0     0
2166           mirror-2  ONLINE       0     0     0
2171 The command to remove the mirrored log
2175 # zpool remove tank mirror-2
2177 .It Sy Example 15 No Displaying expanded space on a device
2178 The following command displays the detailed information for the pool
2180 This pool is composed of a single raidz vdev where one of its devices
2181 increased its capacity by 10GB.
2182 In this example, the pool will not be able to utilize this extra capacity until
2183 all the devices under the raidz vdev have been expanded.
2185 # zpool list -v data
2186 NAME         SIZE  ALLOC   FREE  FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
2187 data        23.9G  14.6G  9.30G   48%         -    61%  1.00x  ONLINE  -
2188   raidz1    23.9G  14.6G  9.30G   48%         -
2193 .It Sy Example 16 No Adding output columns
2194 Additional columns can be added to the
2202 # zpool status -c vendor,model,size
2203 NAME        STATE     READ WRITE CKSUM  vendor  model        size
2205   mirror-0  ONLINE       0     0     0
2206     U1      ONLINE       0     0     0  SEAGATE ST8000NM0075 7.3T
2207     U10     ONLINE       0     0     0  SEAGATE ST8000NM0075 7.3T
2208     U11     ONLINE       0     0     0  SEAGATE ST8000NM0075 7.3T
2209     U12     ONLINE       0     0     0  SEAGATE ST8000NM0075 7.3T
2210     U13     ONLINE       0     0     0  SEAGATE ST8000NM0075 7.3T
2211     U14     ONLINE       0     0     0  SEAGATE ST8000NM0075 7.3T
2213 # zpool iostat -vc slaves
2214               capacity     operations     bandwidth
2215 pool        alloc   free   read  write   read  write  slaves
2216 ----------  -----  -----  -----  -----  -----  -----  ---------
2217 tank        20.4G  7.23T     26    152  20.7M  21.6M
2218   mirror    20.4G  7.23T     26    152  20.7M  21.6M
2219     U1          -      -      0     31  1.46K  20.6M  sdb sdff
2220     U10         -      -      0      1  3.77K  13.3K  sdas sdgw
2221     U11         -      -      0      1   288K  13.3K  sdat sdgx
2222     U12         -      -      0      1  78.4K  13.3K  sdau sdgy
2223     U13         -      -      0      1   128K  13.3K  sdav sdgz
2224     U14         -      -      0      1  63.2K  13.3K  sdfk sdg
2227 .Sh ENVIRONMENT VARIABLES
2228 .Bl -tag -width "ZFS_ABORT"
2232 to dump core on exit for the purposes of running
2235 .Bl -tag -width "ZPOOL_IMPORT_PATH"
2236 .It Ev ZPOOL_IMPORT_PATH
2237 The search path for devices or files to use with the pool. This is a colon-separated list of directories in which
2239 looks for device nodes and files.
2245 .Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2246 .It Ev ZPOOL_VDEV_NAME_GUID
2248 .Nm zpool subcommands to output vdev GUIDs by default. This behavior
2251 command line option.
2253 .Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2254 .It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2257 subcommands to follow links for vdev names by default. This behavior is identical to the
2259 command line option.
2261 .Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
2262 .It Ev ZPOOL_VDEV_NAME_PATH
2265 subcommands to output full vdev path names by default. This
2266 behavior is identical to the
2268 command line option.
2270 .Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
2271 .It Ev ZFS_VDEV_DEVID_OPT_OUT
2272 Older ZFS on Linux implementations had issues when attempting to display pool
2273 config VDEV names if a
2275 NVP value is present in the pool's config.
2277 For example, a pool that originated on the illumos platform would have a devid
2278 value in the config and
2280 would fail when listing the config.
2281 This would also be true for future Linux-based pools.
2283 A pool can be stripped of any
2285 values on import or prevented from adding
2291 .Sy ZFS_VDEV_DEVID_OPT_OUT .
2293 .Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
2294 .It Ev ZPOOL_SCRIPTS_AS_ROOT
2295 Allow a privileged user to run the
2296 .Nm zpool status/iostat
2299 option. Normally, only unprivileged users are allowed to run
2302 .Bl -tag -width "ZPOOL_SCRIPTS_PATH"
2303 .It Ev ZPOOL_SCRIPTS_PATH
2304 The search path for scripts when running
2305 .Nm zpool status/iostat
2308 option. This is a colon-separated list of directories and overrides the default
2311 .Pa /etc/zfs/zpool.d
2314 .Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
2315 .It Ev ZPOOL_SCRIPTS_ENABLED
2317 .Nm zpool status/iostat
2321 .Sy ZPOOL_SCRIPTS_ENABLED
2322 is not set, it is assumed that the user is allowed to run
2323 .Nm zpool status/iostat -c .
2324 .Sh INTERFACE STABILITY
2330 .Xr zfs-module-parameters 5 ,
2331 .Xr zpool-features 5