Distributed Spare (dRAID) Feature
author    Brian Behlendorf <behlendorf1@llnl.gov>
          Fri, 13 Nov 2020 21:51:51 +0000 (13:51 -0800)
committer GitHub <noreply@github.com>
          Fri, 13 Nov 2020 21:51:51 +0000 (13:51 -0800)
commit    b2255edcc0099e62ad46a3dd9d64537663c6aee3
tree      6cfe0d0fd30fb451396551a991d50f4bdc0cf353
parent    a724db03740133c46b9a577b41a6f7221acd3e1f

This patch adds a new top-level vdev type called dRAID, which stands
for Distributed parity RAID.  This pool configuration allows all dRAID
vdevs to participate when rebuilding to a distributed hot spare device.
This can substantially reduce the total time required to restore full
parity to a pool with a failed device.

A dRAID pool can be created using the new top-level `draid` type.
Like `raidz`, the desired redundancy is specified after the type:
`draid[1,2,3]`.  No additional information is required to create the
pool and reasonable default values will be chosen based on the number
of child vdevs in the dRAID vdev.

    zpool create <pool> draid[1,2,3] <vdevs...>
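
For example, a double-parity pool can be created with all defaults; a
minimal sketch, assuming twelve hypothetical disk paths:

```
# draid2 across 12 disks: parity 2, with data width (8), children (12),
# and spares (0) all taking their defaults.
zpool create tank draid2 /dev/disk/by-id/disk{1..12}
```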

Unlike raidz, additional optional dRAID configuration values can be
provided as part of the draid type as colon-separated values. This
allows administrators to fully specify a layout for either performance
or capacity reasons.  The supported options, with a concrete example
after the list, include:

    zpool create <pool> \
        draid[<parity>][:<data>d][:<children>c][:<spares>s] \
        <vdevs...>

    - draid[<parity>]     - Parity level (default 1)
    - draid[:<data>d]     - Data devices per group (default 8)
    - draid[:<children>c] - Expected number of child vdevs
    - draid[:<spares>s]   - Distributed hot spares (default 0)
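
As a concrete sketch, the 68-disk layout shown in the `zpool status`
output below could be requested explicitly (the disk paths here are
hypothetical):

```
# draid2:8d:68c:2s: double parity, 8 data devices per redundancy
# group, 68 expected children, and 2 distributed hot spares.
zpool create slag7 draid2:8d:68c:2s /dev/disk/by-id/disk{1..68}
```

Each redundancy group then spans 8 data + 2 parity = 10 disks; dRAID
does not require the child count to be a multiple of the group width.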

Below is abbreviated `zpool status` output for a 68-disk dRAID pool
with two distributed spares and special allocation classes:

```
  pool: slag7
 state: ONLINE
config:

    NAME                  STATE     READ WRITE CKSUM
    slag7                 ONLINE       0     0     0
      draid2:8d:68c:2s-0  ONLINE       0     0     0
        L0                ONLINE       0     0     0
        L1                ONLINE       0     0     0
        ...
        U25               ONLINE       0     0     0
        U26               ONLINE       0     0     0
        spare-53          ONLINE       0     0     0
          U27             ONLINE       0     0     0
          draid2-0-0      ONLINE       0     0     0
        U28               ONLINE       0     0     0
        U29               ONLINE       0     0     0
        ...
        U42               ONLINE       0     0     0
        U43               ONLINE       0     0     0
    special
      mirror-1            ONLINE       0     0     0
        L5                ONLINE       0     0     0
        U5                ONLINE       0     0     0
      mirror-2            ONLINE       0     0     0
        L6                ONLINE       0     0     0
        U6                ONLINE       0     0     0
    spares
      draid2-0-0          INUSE     currently in use
      draid2-0-1          AVAIL
```

When adding test coverage for the new dRAID vdev type, the following
options were added to the ztest command.  These options are leveraged
by zloop.sh to test a wide range of dRAID configurations.

    -K draid|raidz|random - kind of RAID to test
    -D <value>            - dRAID data drives per group
    -S <value>            - dRAID distributed hot spares
    -R <value>            - RAID parity (raidz or dRAID)
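
For instance, a single hand-run iteration exercising a draid2 layout
might look like the following sketch (-T is the long-standing ztest
run-time option; the dRAID-specific flags are the ones listed above):

```
# Run ztest against a draid2 configuration: 8 data drives per group,
# 2 distributed hot spares, double parity, for a 120 second run.
ztest -K draid -D 8 -S 2 -R 2 -T 120
```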

The zpool_create, zpool_import, redundancy, replacement, and fault
test groups have all been updated to provide test coverage for the
dRAID feature.

Co-authored-by: Isaac Huang <he.huang@intel.com>
Co-authored-by: Mark Maybee <mmaybee@cray.com>
Co-authored-by: Don Brady <don.brady@delphix.com>
Co-authored-by: Matthew Ahrens <mahrens@delphix.com>
Co-authored-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Mark Maybee <mmaybee@cray.com>
Reviewed-by: Matt Ahrens <matt@delphix.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #10102
157 files changed:
cmd/raidz_test/raidz_bench.c
cmd/raidz_test/raidz_test.c
cmd/raidz_test/raidz_test.h
cmd/zdb/zdb.c
cmd/zed/agents/zfs_mod.c
cmd/zed/agents/zfs_retire.c
cmd/zfs/zfs_main.c
cmd/zpool/zpool_main.c
cmd/zpool/zpool_vdev.c
cmd/ztest/ztest.c
configure.ac
include/libzfs.h
include/sys/Makefile.am
include/sys/dsl_scan.h
include/sys/fs/zfs.h
include/sys/spa_impl.h
include/sys/txg.h
include/sys/vdev.h
include/sys/vdev_draid.h [new file with mode: 0644]
include/sys/vdev_impl.h
include/sys/vdev_raidz.h
include/sys/vdev_raidz_impl.h
include/sys/vdev_rebuild.h
include/sys/zio.h
include/zfeature_common.h
lib/libzfs/libzfs_dataset.c
lib/libzfs/libzfs_import.c
lib/libzfs/libzfs_pool.c
lib/libzpool/Makefile.am
man/man1/raidz_test.1
man/man1/ztest.1
man/man5/zfs-module-parameters.5
man/man5/zpool-features.5
man/man8/zpool-create.8
man/man8/zpool-scrub.8
man/man8/zpoolconcepts.8
module/Makefile.bsd
module/os/freebsd/zfs/vdev_file.c
module/os/freebsd/zfs/vdev_geom.c
module/os/linux/zfs/vdev_disk.c
module/os/linux/zfs/vdev_file.c
module/zcommon/zfeature_common.c
module/zcommon/zfs_namecheck.c
module/zfs/Makefile.in
module/zfs/abd.c
module/zfs/dsl_scan.c
module/zfs/metaslab.c
module/zfs/mmp.c
module/zfs/spa.c
module/zfs/spa_misc.c
module/zfs/vdev.c
module/zfs/vdev_draid.c [new file with mode: 0644]
module/zfs/vdev_draid_rand.c [new file with mode: 0644]
module/zfs/vdev_indirect.c
module/zfs/vdev_initialize.c
module/zfs/vdev_label.c
module/zfs/vdev_mirror.c
module/zfs/vdev_missing.c
module/zfs/vdev_queue.c
module/zfs/vdev_raidz.c
module/zfs/vdev_raidz_math.c
module/zfs/vdev_raidz_math_impl.h
module/zfs/vdev_rebuild.c
module/zfs/vdev_removal.c
module/zfs/vdev_root.c
module/zfs/vdev_trim.c
module/zfs/zfs_fm.c
module/zfs/zio.c
module/zfs/zio_inject.c
scripts/Makefile.am
scripts/zfs-helpers.sh
scripts/zloop.sh
tests/runfiles/common.run
tests/test-runner/bin/zts-report.py.in
tests/zfs-tests/cmd/Makefile.am
tests/zfs-tests/cmd/draid/.gitignore [new file with mode: 0644]
tests/zfs-tests/cmd/draid/Makefile.am [new file with mode: 0644]
tests/zfs-tests/cmd/draid/draid.c [new file with mode: 0644]
tests/zfs-tests/include/commands.cfg
tests/zfs-tests/include/libtest.shlib
tests/zfs-tests/include/tunables.cfg
tests/zfs-tests/tests/functional/cli_root/zfs_mount/zfs_mount.kshlib
tests/zfs-tests/tests/functional/cli_root/zpool_add/zpool_add_001_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_create/Makefile.am
tests/zfs-tests/tests/functional/cli_root/zpool_create/draidcfg.gz [new file with mode: 0644]
tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_001_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_005_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_006_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_007_neg.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_009_neg.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_010_neg.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_011_neg.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_draid_001_pos.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_draid_002_pos.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_draid_003_pos.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/cli_root/zpool_create/zpool_create_draid_004_pos.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_001_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_002_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_003_neg.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_004_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_get/zpool_get.cfg
tests/zfs-tests/tests/functional/cli_root/zpool_import/Makefile.am
tests/zfs-tests/tests/functional/cli_root/zpool_import/import_cachefile_device_added.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_import/import_cachefile_device_replaced.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_import/import_cachefile_shared_device.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_import/import_paths_changed.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_import/import_rewind_config_changed.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_import/import_rewind_device_replaced.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_import/setup.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_import/zpool_import.cfg
tests/zfs-tests/tests/functional/cli_root/zpool_import/zpool_import.kshlib
tests/zfs-tests/tests/functional/cli_root/zpool_import/zpool_import_007_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_import/zpool_import_008_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_import/zpool_import_010_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_import/zpool_import_016_pos.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/cli_root/zpool_import/zpool_import_017_pos.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/cli_root/zpool_import/zpool_import_missing_001_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_import/zpool_import_missing_002_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_import/zpool_import_missing_003_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_wait/scan/zpool_wait_replace_cancel.ksh
tests/zfs-tests/tests/functional/fault/auto_offline_001_pos.ksh
tests/zfs-tests/tests/functional/fault/auto_spare_001_pos.ksh
tests/zfs-tests/tests/functional/fault/auto_spare_002_pos.ksh
tests/zfs-tests/tests/functional/fault/auto_spare_ashift.ksh
tests/zfs-tests/tests/functional/fault/auto_spare_multiple.ksh
tests/zfs-tests/tests/functional/fault/auto_spare_shared.ksh
tests/zfs-tests/tests/functional/raidz/Makefile.am
tests/zfs-tests/tests/functional/raidz/raidz_003_pos.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/raidz/raidz_004_pos.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/redundancy/Makefile.am
tests/zfs-tests/tests/functional/redundancy/redundancy.kshlib
tests/zfs-tests/tests/functional/redundancy/redundancy_001_pos.ksh [deleted file]
tests/zfs-tests/tests/functional/redundancy/redundancy_002_pos.ksh [deleted file]
tests/zfs-tests/tests/functional/redundancy/redundancy_003_pos.ksh [deleted file]
tests/zfs-tests/tests/functional/redundancy/redundancy_004_neg.ksh [deleted file]
tests/zfs-tests/tests/functional/redundancy/redundancy_draid1.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/redundancy/redundancy_draid2.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/redundancy/redundancy_draid3.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/redundancy/redundancy_draid_spare1.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/redundancy/redundancy_draid_spare2.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/redundancy/redundancy_draid_spare3.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/redundancy/redundancy_mirror.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/redundancy/redundancy_raidz1.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/redundancy/redundancy_raidz2.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/redundancy/redundancy_raidz3.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/redundancy/redundancy_stripe.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/replacement/attach_rebuild.ksh
tests/zfs-tests/tests/functional/replacement/attach_resilver.ksh
tests/zfs-tests/tests/functional/replacement/detach.ksh
tests/zfs-tests/tests/functional/replacement/rebuild_raidz.ksh
tests/zfs-tests/tests/functional/replacement/replace_rebuild.ksh
tests/zfs-tests/tests/functional/replacement/replace_resilver.ksh
tests/zfs-tests/tests/functional/trim/autotrim_config.ksh
tests/zfs-tests/tests/functional/trim/autotrim_integrity.ksh
tests/zfs-tests/tests/functional/trim/autotrim_trim_integrity.ksh
tests/zfs-tests/tests/functional/trim/trim_config.ksh
tests/zfs-tests/tests/functional/trim/trim_integrity.ksh