4 .\" The contents of this file are subject to the terms of the
5 .\" Common Development and Distribution License (the "License").
6 .\" You may not use this file except in compliance with the License.
8 .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 .\" or http://www.opensolaris.org/os/licensing.
10 .\" See the License for the specific language governing permissions
11 .\" and limitations under the License.
13 .\" When distributing Covered Code, include this CDDL HEADER in each
14 .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 .\" If applicable, add the following below this CDDL HEADER, with the
16 .\" fields enclosed by brackets "[]" replaced with your own identifying
17 .\" information: Portions Copyright [yyyy] [name of copyright owner]
22 .\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23 .\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
24 .\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
25 .\" Copyright (c) 2017 Datto Inc.
26 .\" Copyright (c) 2018 George Melikov. All Rights Reserved.
27 .\" Copyright 2017 Nexenta Systems, Inc.
28 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.Nd configure ZFS storage pools
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl vHf Oo Ar pool Oc | Fl c
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl d Ar dir Ns | Ns Ar device
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Ar pool Ns | Ns Ar id
.Op Ar newpool Oo Fl t Oc
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Ar pool Ar device Ns ...
.Ar pool Ar device Ns ...
.Ar pool Ar device Ns ...
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool Ar device Op Ar new_device
.Ar property Ns = Ns Ar value
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Ar device Oc Ns ...
.Oo Fl c Ar SCRIPT Oc
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Oo Ar pool Oc Ns ...
.Fl a Ns | Ns Ar pool Ns ...
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width Ds
.It Sy disk
A block device, typically located under
.Pa /dev .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
A disk can be specified by a full path, or it can be a shorthand name
.Po the relative portion of the path under
.Pa /dev
.Pc .
A whole disk can be specified by omitting the slice or partition designation.
When given a whole disk, ZFS automatically labels the disk, if necessary.
.It Sy file
A regular file.
The use of files as a backing store is strongly discouraged.
It is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part.
A file must be specified by a full path.
.It Sy mirror
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with N disks of size X can hold X bytes and can withstand (N-1)
devices failing before data integrity is compromised.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Qq write hole
.Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group.
.Pp
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing data.
The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group.
The
.Sy raidz
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
compromised.
The minimum number of devices in a raidz group is one more than the number of
parity disks.
The recommended number is between 3 and 9 to help increase performance.
.It Sy spare
A special pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
devices.
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
.Sx Intent Log
section.
.It Sy dedup
A device dedicated solely for allocating dedup data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one dedup device is specified, then allocations are load-balanced
between devices.
.It Sy special
A device dedicated solely for allocating various kinds of internal metadata,
and optionally small file data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one special device is specified, then allocations are
load-balanced between devices.
.Pp
For more information on special allocations, see the
.Sx Special Allocation Class
section.
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
Keywords like
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror sda sdb mirror sdc sdd
.Ed
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data
unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data
is still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as a mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs are in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices are in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs are in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices are in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted
to prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported when a device was unavailable, then the device will be
identified by a unique identifier instead of its path since the path was never
correct in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror sda sdb spare sdc sdd
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported, since other pools may use this shared spare, which may lead to
data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
.Pp
Spares cannot replace log devices.
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable
storage devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 2
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool sda sdb log sdc
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
.Pp
Log devices can be added, replaced, attached, detached, and removed.
In addition, log devices are imported and exported as part of the pool
that contains them.
Mirrored devices can be removed by specifying the top-level mirror vdev.
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read-workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool sda sdb cache sdc sdd
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
.Ss Pool checkpoint
Before starting critical procedures that include destructive actions (e.g.
.Nm zfs Cm destroy
), an administrator can checkpoint the pool's state and, in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
Otherwise, the checkpoint can be discarded when the procedure has completed
successfully.
.Pp
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
with care as it contains every part of the pool's state, from properties to
vdev configuration.
Thus, while a pool has a checkpoint, certain operations are not allowed:
specifically, vdev removal/attach/detach, mirror splitting, and
changing the pool's guid.
Adding a new vdev is supported, but in the case of a rewind it will have to be
added again.
Finally, users of this feature should keep in mind that scrubs in a pool that
has a checkpoint do not repair checkpointed data.
.Pp
To create a checkpoint for a pool:
.Bd -literal
# zpool checkpoint pool
.Ed
.Pp
To later rewind to its checkpointed state, you need to first export it and
then rewind it during import:
.Bd -literal
# zpool import --rewind-to-checkpoint pool
.Ed
.Pp
To discard the checkpoint from a pool:
.Bd -literal
# zpool checkpoint -d pool
.Ed
.Pp
Dataset reservations (controlled by the
.Sy reservation
and
.Sy refreservation
zfs properties) may be unenforceable while a checkpoint exists, because the
checkpoint is allowed to consume the dataset's reservation.
Finally, data that is part of the checkpoint but has been freed in the
current state of the pool won't be scanned during a scrub.
.Ss Special Allocation Class
The allocations in the special class are dedicated to specific block types.
By default this includes all metadata, the indirect blocks of user data, and
any dedup data.
The class can also be provisioned to accept a limited percentage of small file
data blocks.
.Pp
A pool must always have at least one general (non-specified) vdev before
other devices can be assigned to the special class.
If the special class becomes full, then allocations intended for it will spill
back into the normal class.
.Pp
Dedup data can be excluded from the special class by setting the
.Sy zfs_ddt_data_is_special
zfs module parameter to false (0).
.Pp
Inclusion of small file blocks in the special class is opt-in.
Each dataset can control the size of small file blocks allowed in the special
class by setting the
.Sy special_small_blocks
dataset property.
It defaults to zero, so you must opt-in by setting it to a non-zero value.
See
.Xr zfs 8
for more info on setting this property.
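.Pp
For example (pool, device, and dataset names are illustrative), a pool might
be created with a mirrored special vdev, and a dataset then opted in to
small-file allocations, as follows:
.Bd -literal
# zpool create pool sda sdb special mirror sdc sdd
# zfs set special_small_blocks=32K pool/fs
.Ed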
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Sy allocated
Amount of storage used within the pool.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online with
.Nm zpool Cm online Fl e .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy load_guid
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time we load the pool (i.e. it
does not persist across imports/exports) and never changes while the pool is
loaded (even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 5
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the
characteristics of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot
be trusted, or in an alternate boot environment, where the typical paths are
not valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
.Pp
The following properties can be set at creation time and import time, and
later changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; also, the special
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk) write size will be set to the specified size,
so this represents a space vs. performance trade-off.
For optimal performance, the pool sector size should be greater than or equal
to the sector size of the underlying disks.
The typical case for setting this property is when performance is important
and the underlying disks use 4KiB sectors but report 512B sectors to the OS
(for compatibility reasons); in that case, set this property to
.Sy 12
(which is 1<<12 = 4096).
When set, this property is used as the default hint value in subsequent vdev
operations (add, attach and replace).
Changing this value will not modify any existing vdev, not even on disk
replacement; however it can be used, for instance, to replace a dying
512B-sector disk with a newer 4KiB-sector device: this will probably result in
bad performance but at the same time could prevent loss of data.
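.Pp
For example, a pool on drives with 4KiB physical sectors that report 512B
sectors might be created as follows (pool and device names are illustrative):
.Bd -literal
# zpool create -o ashift=12 tank mirror sda sdb
.Ed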
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf.
See the
.Xr vdev_id 8
man page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running.
See the
.Xr zed 8
man page for more details.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade
programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location
that can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the
underlying storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the
errors are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option.
This property is intended to be used in failover configurations where multiple
hosts have access to a pool on shared storage.
When this property is on, periodic writes to storage occur to show the pool is
in use.
See the
.Sy zfs_multihost_interval
parameter in the
.Xr zfs-module-parameters 5
man page.
In order to enable this property each host must set a unique hostid.
See
.Xr spl-module-parameters 5
for additional details.
The default value is
.Sy off .
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Pp
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
Displays a help message.
.Oo Fl o Ar property Ns = Ns Ar value Oc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links.
This can be used to look up the current block device name regardless of the
/dev/disk/ path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display real paths for
.Ar vdev Ns s
instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
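.Pp
For example, a mirrored top-level vdev might be added to an existing pool as
follows (pool and device names are illustrative):
.Bd -literal
# zpool add tank mirror sde sdf
.Ed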
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
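.Pp
For example, to convert a single pool device into a two-way mirror (pool and
device names are illustrative):
.Bd -literal
# zpool attach tank sda sdb
.Ed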
Checkpoints the current state of
.Ar pool ,
which can later be restored by
.Nm zpool Cm import --rewind-to-checkpoint .
The existence of a checkpoint in a pool prohibits the following
.Nm zpool
commands:
.Cm remove , attach , detach , split ,
and
.Cm reguid .
In addition, it may break reservation boundaries if the pool lacks free
space.
The
.Nm zpool Cm status
command indicates the existence of a checkpoint or the progress of discarding a
checkpoint from a pool.
The
.Nm zpool Cm list
command reports how much space the checkpoint takes from the pool.
.Bl -tag -width Ds
.It Fl d
Discards an existing checkpoint from
.Ar pool .
.El
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices is specified, only those errors associated with the
specified device or devices are cleared.
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Ar feature@feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Ar pool vdev Ns ...
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
("_"), dash ("-"), colon (":"), space (" "), and period (".").
The pool names "mirror", "raidz", "spare" and "log"
are reserved, as are names beginning with
"mirror", "raidz", "spare", and the pattern "c[0-9]".
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command verifies that each device specified is accessible and not currently
in use by another subsystem.
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden
with the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 5
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa / Ns Ar pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Ar feature@feature Ns = Ns Ar value
Sets the given pool feature.
See
.Xr zpool-features 5
for a list of valid features that can be set.
Value can be either disabled or enabled.
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tname
Sets the in-core pool name to
.Ar tname
while the on-disk name will be the name specified as the pool name
.Ar pool .
This will set the default cachefile property to none.
This is intended to handle name space collisions when creating pools
for other systems, such as virtual machines or physical machines whose pools
live on network block devices.
.El
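.Pp
For example, a mirrored pool with a 4KiB sector hint, compression enabled on
the root dataset, and an alternate mount point might be created as follows
(pool and device names are illustrative):
.Bd -literal
# zpool create -o ashift=12 -O compression=lz4 -m /export/tank tank mirror sda sdb
.Ed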
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
If
.Ar device
may be re-added to the pool later on, then consider the
.Nm zpool Cm offline
command instead.
.Op Fl vHf Oo Ar pool Oc | Fl c
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
daemon and used to automate administrative tasks such as replacing a failed
device with a hot spare.
For more information about the subclasses and event payloads that can be
generated, see the
.Xr zfs-events 5
man page.
.Bl -tag -width Ds
.It Fl c
Clear all previous events.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl v
Print the entire payload for each event.
.El
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just partitions, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl a
Exports all pools imported on the system.
.It Fl f
Forcefully unmount all datasets, using the
.Nm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to data corruption.
.El
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Oo Ar pool Oc Ns ...
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
        name          Name of storage pool
        property      Property name
        value         Property value
        source        Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
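.Pp
For example, to print a pool's size and capacity in script-friendly exact
values (the pool name is illustrative):
.Bd -literal
# zpool get -H -p size,capacity tank
.Ed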
.Oo Ar pool Oc Ns ...
Displays the command history of the specified pool(s) or all pools if no pool
is specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which in addition to standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.Op Fl d Ar dir Ns | Ns Ar device
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, as well as
the vdev layout and current health of the device for each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
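.Pp
For example, to list importable pools using stable device names (the search
directory is illustrative):
.Bd -literal
# zpool import -d /dev/disk/by-id
.Ed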
.Op Fl F Oo Fl n Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag encrypted datasets will be left unavailable until the keys
are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl -rewind-to-checkpoint
Rewinds pool to the checkpointed state.
Once the pool is imported with this flag there is no way to undo the rewind.
All changes and data that were written after the checkpoint are lost!
The only exception is when the
.Sy readonly
mounting option is enabled.
In this case, the checkpointed state of the pool is opened and an
administrator can see what the pool would look like if they were to rewind.
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted.
A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option.
Determines whether extreme measures to find a valid txg should take place.
This allows the pool to be rolled back to a txg which is no longer guaranteed
to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable checksum
errors.
For more details about pool recovery mode, see the
.Fl F
option, above.
WARNING: This option can be extremely hazardous to the health of your pool and
should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback.
Implies
.Fl FX .
For more details about pool recovery mode, see the
.Fl X
option, above.
WARNING: This option can be extremely hazardous to the health of your pool and
should only be used as a last resort.
.El
.Op Fl F Oo Fl n Oc Oo Fl t Oc Oo Fl T Oc Oo Fl X Oc
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Ar pool Ns | Ns Ar id
.Op Ar newpool
Imports a specific pool.
A pool can be identified by its name or the numeric identifier.
If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active.
It cannot be determined if this was a failed export, or whether the device is
really in use from another host.
To import a pool in this state, the
.Fl f
option is required.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir Ns | Ns Ar device
Uses
.Ar device
or searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pool.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag encrypted datasets will be left unavailable until the keys
are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl s
Scan using the default search path; the libblkid cache will not be
consulted.
A custom search path may be specified by setting the
ZPOOL_IMPORT_PATH environment variable.
.It Fl X
Used with the
.Fl F
recovery option.
Determines whether extreme measures to find a valid txg should take place.
This allows the pool to be rolled back to a txg which is no longer guaranteed
to be consistent.
Pools imported at an inconsistent txg may contain uncorrectable checksum
errors.
For more details about pool recovery mode, see the
.Fl F
option, above.
WARNING: This option can be extremely hazardous to the health of your pool and
should only be used as a last resort.
.It Fl T
Specify the txg to use for rollback.
Implies
.Fl FX .
For more details about pool recovery mode, see the
.Fl X
option, above.
WARNING: This option can be extremely hazardous to the health of your pool and
should only be used as a last resort.
.It Fl t
Used with
.Ar newpool .
Specifies that
.Ar newpool
is temporary.
Temporary pool names last until export.
Ensures that the original pool name will be used in all label updates and
therefore is retained upon export.
Will also set -o cachefile=none when not explicitly specified.
.El
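.Pp
For example, to import a pool by name, or to import a pool by its numeric
identifier under a new name (names and the identifier are illustrative):
.Bd -literal
# zpool import tank
# zpool import 7412636395140548810 newtank
.Ed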
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
Displays I/O statistics for the given pools/vdevs.
You can pass in a list of pools, a pool and list of vdevs in that pool, or a
list of any vdevs from any pool.
If no items are specified, statistics for every pool in the system are shown.
If
.Ar interval
is specified, the statistics are printed every
.Ar interval
seconds until ^C is pressed.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always the statistics since boot regardless of
whether
.Ar interval
and
.Ar count
are passed.
However, this behavior can be suppressed with the
.Fl y
flag.
Also note that the units of
.Sy K , M , G ...
that are printed in the report are in base 1024.
To get the raw values, use the
.Fl p
flag.
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm iostat
output.
Users can run any script found in their
.Pa ~/.zpool.d
directory or from the system
.Pa /etc/zfs/zpool.d
directory.
Script names containing the slash (/) character are not allowed.
The default search path can be overridden by setting the
ZPOOL_SCRIPTS_PATH environment variable.
A privileged user can run
.Fl c
if they have the ZPOOL_SCRIPTS_AS_ROOT
environment variable set.
If a script requires the use of a privileged command, like
.Xr smartctl 8 ,
then it's recommended you allow the user access to it in
.Pa /etc/sudoers
or add the user to the
.Pa /etc/sudoers.d/zfs
file.
.Pp
If
.Fl c
is passed without a script name, it prints a list of all scripts.
.Fl c
also sets verbose mode
.No \&( Ns Fl v Ns No \&).
.Pp
Script output should be in the form of "name=value".
The column name is set to "name" and the value is set to "value".
Multiple lines can be used to output multiple columns.
The first line of output not in the
"name=value" format is displayed without a column title, and no more
output after that is displayed.
This can be useful for printing error messages.
Blank or NULL values are printed as a '-' to make output AWKable.
.Pp
The following environment variables are set before running each script:
.Bl -tag -width "VDEV_PATH"
.It Sy VDEV_PATH
Full path to the vdev
.El
.Bl -tag -width "VDEV_UPATH"
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*).
For use with device mapper, multipath, or partitioned vdevs.
.El
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
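.Pp
As a sketch, a hypothetical script
.Pa /etc/zfs/zpool.d/serial
could emit a "serial" column using the environment variables above (this
assumes
.Xr smartctl 8
is installed):
.Bd -literal
#!/bin/sh
# Hypothetical example: print the drive serial number in "name=value" form.
echo "serial=$(smartctl -i "$VDEV_UPATH" | awk '/Serial Number/ {print $3}')"
.Ed
.Pp
It could then be invoked with
.Nm zpool Cm iostat Fl c Ar serial .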
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable (exact) values.
Time values are in nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl r
Print request size histograms for the leaf ZIOs.
This includes histograms of individual ZIOs (
.Ar ind )
and aggregate ZIOs (
.Ar agg ) .
These stats can be useful for seeing how well the ZFS IO aggregator is
working.
Do not confuse these request size stats with the block layer requests; it's
possible ZIOs can be broken up before being sent to the disk.
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
.It Fl y
Omit statistics since boot.
Normally the first line of output reports the statistics since boot.
This option suppresses that first line of output.
.It Fl w
Display latency histograms:
.Pp
.Ar total_wait :
Total IO time (queuing + disk IO time).
.Ar disk_wait :
Disk IO time (time reading/writing the disk).
.Ar syncq_wait :
Amount of time IO spent in synchronous priority queues.
Does not include disk time.
.Ar asyncq_wait :
Amount of time IO spent in asynchronous priority queues.
Does not include disk time.
.Ar scrub :
Amount of time IO spent in scrub queue.
Does not include disk time.
.It Fl l
Include average latency statistics:
.Pp
.Ar total_wait :
Average total IO time (queuing + disk IO time).
.Ar disk_wait :
Average disk IO time (time reading/writing the disk).
.Ar syncq_wait :
Average amount of time IO spent in synchronous priority queues.
Does not include disk time.
.Ar asyncq_wait :
Average amount of time IO spent in asynchronous priority queues.
Does not include disk time.
.Ar scrub :
Average queuing time in scrub queue.
Does not include disk time.
.It Fl q
Include active queue statistics.
Each priority queue has both pending (
.Ar pend )
and active (
.Ar activ )
IOs.
Pending IOs are waiting to be issued to the disk, and active IOs have been
issued to disk and are waiting for completion.
These stats are broken out by priority queue:
.Pp
.Ar syncq_read/write :
Current number of entries in synchronous priority
queues.
.Ar asyncq_read/write :
Current number of entries in asynchronous priority queues.
.Ar scrubq_read :
Current number of entries in scrub queue.
.Pp
All queue statistics are instantaneous measurements of the number of
entries in the queues.
If you specify an interval, the measurements will be sampled from the end of
the interval.
.El
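.Pp
For example, to show per-vdev statistics with average latencies every five
seconds (the pool name is illustrative):
.Bd -literal
# zpool iostat -vl tank 5
.Ed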
Removes ZFS label information from the specified
.Ar device .
The
.Ar device
must not be part of an active pool configuration.
.Bl -tag -width Ds
.It Fl f
Treat exported or foreign devices as inactive.
.El
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
Lists the given pools along with a health status and space usage.
If no
.Ar pool Ns s
are specified, all pools in the system are listed.
When given an
.Ar interval ,
the information is printed every
.Ar interval
seconds until ^C is pressed.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
.Bl -tag -width Ds
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl o Ar property
Comma-separated list of properties to display.
See the
.Sx Properties
section for a list of valid properties.
The default list is
.Cm name , size , allocated , free , checkpoint , expandsize , fragmentation ,
.Cm capacity , dedupratio , health , altroot .
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
/dev/disk/ path used to open it.
.It Fl p
Display numbers in parsable
.Pq exact
values.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
.El
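.Pp
For example, to list selected columns for a single pool (the pool name is
illustrative):
.Bd -literal
# zpool list -o name,size,capacity,health tank
.Ed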
.Ar pool Ar device Ns ...
Takes the specified physical device offline.
While the
.Ar device
is offline, no attempt is made to read or write to the device.
This command is not applicable to spares.
.Bl -tag -width Ds
.It Fl f
Force fault.
Instead of offlining the disk, put it into a faulted state.
The fault will persist across imports unless the
.Fl t
flag was specified.
.It Fl t
Temporary.
Upon reboot, the specified physical device reverts to its previous state.
.El
.Ar pool Ar device Ns ...
Brings the specified physical device online.
This command is not applicable to spares.
.Bl -tag -width Ds
.It Fl e
Expand the device to use all available space.
If the device is part of a mirror or raidz then all devices must be expanded
before the new space will become available to the pool.
.El
Generates a new unique identifier for the pool.
You must ensure that all devices in this pool are online and healthy before
performing this action.
Reopen all the vdevs associated with the pool.
.Bl -tag -width Ds
.It Fl n
Do not restart an in-progress scrub operation.
This is not recommended and can result in partially resilvered devices unless
a second scrub is performed.
.El
.Ar pool Ar device Ns ...
Removes the specified device from the pool.
This command supports removing hot spare, cache, log, and both mirrored and
non-redundant primary top-level vdevs, including dedup and special vdevs.
When the primary pool storage includes a top-level raidz vdev, only hot spare,
cache, and log devices can be removed.
.Pp
Removing a top-level vdev reduces the total amount of space in the storage
pool.
The specified device will be evacuated by copying all allocated space from it
to the other devices in the pool.
In this case, the
.Nm zpool Cm remove
command initiates the removal and returns, while the evacuation continues in
the background.
The removal progress can be monitored with
.Nm zpool Cm status .
If an IO error is encountered during the removal process it will be
cancelled.
The
.Sy device_removal
feature flag must be enabled to remove a top-level vdev, see
.Xr zpool-features 5 .
.Pp
A mirrored top-level device (log or data) can be removed by specifying the
top-level mirror for the same.
Non-log devices or data devices that are part of a mirrored configuration can
be removed using the
.Nm zpool Cm detach
command.
.Bl -tag -width Ds
.It Fl n
Do not actually perform the removal ("no-op").
Instead, print the estimated amount of memory that will be used by the
mapping table after the removal completes.
This is nonzero only for top-level vdevs.
.It Fl p
Used in conjunction with the
.Fl n
flag, displays numbers as parsable (exact) values.
.El
Stops and cancels an in-progress removal of a top-level vdev.
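.Pp
For example, to evacuate and remove a mirrored top-level vdev, or to cancel a
removal that is still in progress (pool and vdev names are illustrative):
.Bd -literal
# zpool remove tank mirror-1
# zpool remove -s tank
.Ed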
.Op Fl o Ar property Ns = Ns Ar value
.Ar pool Ar device Op Ar new_device
Replaces
.Ar device
with
.Ar new_device .
This is equivalent to attaching
.Ar new_device ,
waiting for it to resilver, and then detaching
.Ar device .
.Pp
The size of
.Ar new_device
must be greater than or equal to the minimum size of all the devices in a
mirror or raidz configuration.
.Pp
.Ar new_device
is required if the pool is not redundant.
If
.Ar new_device
is not specified, it defaults to
.Ar device .
This form of replacement is useful after an existing disk has failed and has
been physically replaced.
In this case, the new disk may have the same
.Pa /dev
path as the old device, even though it is actually a different disk.
ZFS recognizes this.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
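.Pp
For example, to replace a failed disk with a new one (pool and device names
are illustrative):
.Bd -literal
# zpool replace tank sda sdb
.Ed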
Begins a scrub or resumes a paused scrub.
The scrub examines all data in the specified pools to verify that it checksums
correctly.
For replicated
.Pq mirror or raidz
devices, ZFS automatically repairs any damage discovered during the scrub.
The
.Nm zpool Cm status
command reports the progress of the scrub and summarizes the results of the
scrub upon completion.
.Pp
Scrubbing and resilvering are very similar operations.
The difference is that resilvering only examines data that ZFS knows to be out
of date
.Po
for example, when attaching a new device to a mirror or replacing an existing
device
.Pc ,
whereas scrubbing examines all data to discover silent errors due to hardware
faults or disk failure.
.Pp
Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
one at a time.
If a scrub is paused, the
.Nm zpool Cm scrub
command resumes it.
If a resilver is in progress, ZFS does not allow a scrub to be started until
the resilver completes.
.Bl -tag -width Ds
.It Fl p
Pause scrubbing.
Scrub pause state and progress are periodically synced to disk.
If the system is restarted or pool is exported during a paused scrub,
even after import, scrub will remain paused until it is resumed.
Once resumed the scrub will pick up from the place where it was last
checkpointed to disk.
To resume a paused scrub issue
.Nm zpool Cm scrub
again.
.El
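.Pp
For example, to start, pause, and later resume a scrub (the pool name is
illustrative):
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed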
Starts a resilver.
If an existing resilver is already running it will be restarted from the
beginning.
Any drives that were scheduled for a deferred resilver will be added to the
new one.
.Ar property Ns = Ns Ar value
Sets the given property on the specified pool.
See the
.Sx Properties
section for more information on what properties can be set and acceptable
values.
2097 .Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
2108 must be mirrors and the pool must not be in the process of resilvering.
2109 At the time of the split,
2111 will be a replica of
2114 last device in each mirror is split from
2119 The optional device specification causes the specified device(s) to be
2122 and, should any devices remain unspecified,
2123 the last device in each mirror is used as would be by default.
2126 Display vdev GUIDs instead of the normal device names. These GUIDs
2127 can be used in place of device names for the zpool
2128 detach/offline/remove/replace commands.
2130 Display real paths for vdevs resolving all symbolic links. This can
2131 be used to look up the current block device name regardless of the
2133 path used to open it.
2135 Indicates that this command will request encryption keys for all encrypted
2136 datasets it attempts to mount as it is bringing the new pool online. Note that
2137 if any datasets have a
2141 this command will block waiting for the keys to be entered. Without this flag
2142 encrypted datasets will be left unavailable until the keys are loaded.
2144 Do dry run, do not actually perform the split.
2145 Print out the expected configuration of
2148 Display full paths for vdevs instead of only the last component of
2149 the path. This can be used in conjunction with the
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property for
.Ar newpool .
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Set
.Sy altroot
for
.Ar newpool
to
.Ar root
and automatically import it.
.El
.It Xo
.Nm
.Cm status
.Op Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
.Op Fl DgLpPsvx
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Displays the detailed health status for the given pools.
If no
.Ar pool
is specified, then the status of each pool in the system is displayed.
For more information on pool and device health, see the
.Sx Device Failure and Recovery
section.
.Pp
If a scrub or resilver is in progress, this command reports the percentage done
and the estimated time to completion.
Both of these are only approximate, because the amount of data in the pool and
the other workloads on the system can change.
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm status
output.
See the
.Fl c
option of
.Nm zpool Cm iostat
for complete details.
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl p
Display numbers in parsable (exact) values.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl D
Display a histogram of deduplication statistics, showing the allocated
.Pq physically present on disk
and referenced
.Pq logically referenced in the pool
block counts and sizes by reference count.
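.Pp
For example, the deduplication histogram of a pool named
.Em tank
can be displayed with:
.Bd -literal
# zpool status -D tank
.Ed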
.It Fl s
Display the number of leaf VDEV slow IOs.
This is the number of IOs that didn't complete in
.Sy zio_slow_io_ms
milliseconds
.Pq default 30 seconds .
This does not necessarily mean the IOs failed to complete, just that they took
an unreasonably long amount of time.
This may indicate a problem with the underlying storage.
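.Pp
For example, slow IO counts for a pool named
.Em tank
can be displayed with:
.Bd -literal
# zpool status -s tank
.Ed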
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
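.Pp
For example, to print a date-stamped status of a pool named
.Em tank
every 5 seconds, twice:
.Bd -literal
# zpool status -T d tank 5 2
.Ed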
.It Fl v
Displays verbose data error information, printing out a complete list of all
data errors since the last complete pool scrub.
.It Fl x
Only display status for pools that are exhibiting errors or are otherwise
unavailable.
Warnings about pools not using the latest on-disk format will not be included.
.El
.It Xo
.Nm
.Cm sync
.Oo Ar pool Oc Ns ...
.Xc
This command forces all in-core dirty data to be written to the primary pool
storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
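.Pp
For example, dirty data in a pool named
.Em tank
can be forced out to stable storage with:
.Bd -literal
# zpool sync tank
.Ed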
.It Xo
.Nm
.Cm upgrade
.Xc
Displays pools which do not have all supported features enabled and pools
formatted using a legacy ZFS version number.
These pools can continue to be used, but some features may not be available.
Use
.Nm zpool Cm upgrade Fl a
to enable all features on all pools.
.It Xo
.Nm
.Cm upgrade
.Fl v
.Xc
Displays legacy ZFS versions supported by the current software.
See
.Xr zpool-features 5
for a description of the feature flags supported by the current software.
.It Xo
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Xc
Enables all supported features on the given pool.
Once this is done, the pool will no longer be accessible on systems that do not
support feature flags.
See
.Xr zpool-features 5
for details on compatibility with systems that support feature flags, but do
not support all features enabled on the pool.
.Bl -tag -width Ds
.It Fl a
Enables all supported features on all pools.
.It Fl V Ar version
Upgrade to the specified legacy version.
If the
.Fl V
flag is specified, no features will be enabled on the pool.
This option can only be used to increase the version number up to the last
supported legacy version number.
.El
.El
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -width Ds
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
.El
.Sh EXAMPLES
.Bl -tag -width Ds
.It Sy Example 1 No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks.
.Bd -literal
# zpool create tank raidz sda sdb sdc sdd sde sdf
.Ed
.It Sy Example 2 No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks.
.Bd -literal
# zpool create tank mirror sda sdb mirror sdc sdd
.Ed
.It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
The following command creates an unmirrored pool using two disk partitions.
.Bd -literal
# zpool create tank sda1 sdb2
.Ed
.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Bd -literal
# zpool create tank /path/to/file/a /path/to/file/b
.Ed
.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Em tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
.Bd -literal
# zpool add tank mirror sda sdb
.Ed
.It Sy Example 6 No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Em zion
is faulted due to a missing device.
The results from this command are similar to the following:
.Bd -literal
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
.Ed
.It Sy Example 7 No Destroying a ZFS Storage Pool
The following command destroys the pool
.Em tank
and any datasets contained within.
.Bd -literal
# zpool destroy -f tank
.Ed
.It Sy Example 8 No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Em tank
so that they can be relocated or later imported.
.Bd -literal
# zpool export tank
.Ed
.It Sy Example 9 No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Em tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal
# zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

# zpool import tank
.Ed
.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
the software.
.Bd -literal
# zpool upgrade -a
This system is currently running ZFS version 2.
.Ed
.It Sy Example 11 No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Bd -literal
# zpool create tank mirror sda sdb spare sdc
.Ed
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Bd -literal
# zpool replace tank sda sdd
.Ed
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
.Bd -literal
# zpool remove tank sdc
.Ed
.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
.Bd -literal
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
  sde sdf
.Ed
.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Bd -literal
# zpool add pool cache sdc sdd
.Ed
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
option as follows:
.Bd -literal
# zpool iostat -v pool 5
.Ed
.It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

         NAME        STATE     READ WRITE CKSUM
         tank        ONLINE       0     0     0
           mirror-0  ONLINE       0     0     0
             sda     ONLINE       0     0     0
             sdb     ONLINE       0     0     0
           mirror-1  ONLINE       0     0     0
             sdc     ONLINE       0     0     0
             sdd     ONLINE       0     0     0
         logs
           mirror-2  ONLINE       0     0     0
             sde     ONLINE       0     0     0
             sdf     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Sy mirror-2
is:
.Bd -literal
# zpool remove tank mirror-2
.Ed
.Pp
The command to remove the mirrored data
.Sy mirror-1
is:
.Bd -literal
# zpool remove tank mirror-1
.Ed
.It Sy Example 15 No Displaying expanded space on a device
The following command displays the detailed information for the pool
.Em data .
This pool consists of a single raidz vdev where one of its devices
increased its capacity by 10GB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal
# zpool list -v data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
.Ed
.It Sy Example 16 No Adding output columns
Additional columns can be added to the
.Nm zpool Cm status
and
.Nm zpool Cm iostat
output with the
.Fl c
option.
.Bd -literal
# zpool status -c vendor,model,size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

# zpool iostat -vc slaves
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  slaves
----------  -----  -----  -----  -----  -----  -----  ---------
tank        20.4G  7.23T     26    152  20.7M  21.6M
  mirror    20.4G  7.23T     26    152  20.7M  21.6M
    U1          -      -      0     31  1.46K  20.6M  sdb sdff
    U10         -      -      0      1  3.77K  13.3K  sdas sdgw
    U11         -      -      0      1   288K  13.3K  sdat sdgx
    U12         -      -      0      1  78.4K  13.3K  sdau sdgy
    U13         -      -      0      1   128K  13.3K  sdav sdgz
    U14         -      -      0      1  63.2K  13.3K  sdfk sdg
.Ed
.El
.Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZFS_ABORT"
.It Ev ZFS_ABORT
Cause
.Nm zpool
to dump core on exit for the purposes of running
.Sy ::findleaks .
.El
.Bl -tag -width "ZPOOL_IMPORT_PATH"
.It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm zpool
looks for device nodes and files.
Similar to the
.Fl d
option in
.Nm zpool import .
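.Pp
For example, to search only the by-id device directory during an import:
.Bd -literal
# ZPOOL_IMPORT_PATH=/dev/disk/by-id zpool import
.Ed
.El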
.Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
.It Ev ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev guids by default.
This behavior is identical to the
.Nm zpool status Fl g
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm zpool
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool status Fl L
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
.It Ev ZPOOL_VDEV_NAME_PATH
Cause
.Nm zpool
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool status Fl P
command line option.
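.Pp
For example, the variable can be set for a single invocation.
The value
.Sy 1
is assumed here; any non-empty setting may suffice depending on the
implementation:
.Bd -literal
# ZPOOL_VDEV_NAME_PATH=1 zpool status tank
.Ed
.El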
.Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
.It Ev ZFS_VDEV_DEVID_OPT_OUT
Older ZFS on Linux implementations had issues when attempting to display pool
config VDEV names if a
.Sy devid
NVP value is present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a devid
value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux-based pools.
.Pp
A pool can be stripped of any
.Sy devid
values on import or prevented from adding
them on
.Nm zpool create
or
.Nm zpool add
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.El
.Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
.It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool status/iostat
with the
.Fl c
option.
Normally, only unprivileged users are allowed to run
.Fl c .
.El
.Bl -tag -width "ZPOOL_SCRIPTS_PATH"
.It Ev ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool status/iostat
with the
.Fl c
option.
This is a colon-separated list of directories and overrides the default
.Pa ~/.zpool.d
and
.Pa /etc/zfs/zpool.d
search paths.
.El
.Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
.It Ev ZPOOL_SCRIPTS_ENABLED
Allow a user to run
.Nm zpool status/iostat
with the
.Fl c
option.
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool status/iostat -c .
.El
.Sh INTERFACE STABILITY
.Sy Evolving
.Sh SEE ALSO
.Xr zfs-events 5 ,
.Xr zfs-module-parameters 5 ,
.Xr zpool-features 5 ,
.Xr zed 8 ,
.Xr zfs 8