git.proxmox.com Git - mirror_zfs.git/commit
commit    d441e85dd754ecc15659322b4d36796cbd3838de
tree      3b5adc51a6bda08c513edd382769cade243bb0ca
parent    2e5dc449c1a65e0b0bf730fd69c9b5804bd57ee8
author    Brian Behlendorf <behlendorf1@llnl.gov>  Mon, 23 Jul 2018 22:40:15 +0000 (15:40 -0700)
committer GitHub <noreply@github.com>  Mon, 23 Jul 2018 22:40:15 +0000 (15:40 -0700)

Add support for autoexpand property

While the autoexpand property may seem like a small feature, it
depends on a significant amount of system infrastructure.  Enough
of that infrastructure is now in place that, with a few modifications
for Linux, it can be supported.

Auto-expand works as follows: when a block device is modified
(resized, closed after being opened read/write, etc.) a change
uevent is generated for udev.  The ZED, which monitors udev events,
passes the change event along to zfs_deliver_dle() if the disk
or partition contains a zfs_member as identified by blkid.

From here the device is matched against all imported pool vdevs
using the vdev_guid, which was read from the label by blkid.  If
a match is found, the ZED reopens the pool vdev.  This reopening
is important because it allows the vdev to be briefly closed so
the disk partition table can be re-read.  Otherwise, it wouldn't
be possible to report the maximum possible expansion size.

Finally, if autoexpand=on, a vdev expansion will be attempted.
After performing some sanity checks on the disk to verify that
it is safe to expand, the primary partition (-part1) will be
expanded and the partition table updated.  The partition is then
re-opened (again) to detect the updated size, which allows the
new capacity to be used.

In order to make all of the above possible the following changes
were required:

* Updated the zpool_expand_001_pos and zpool_expand_003_pos tests.
  These tests now create a pool which is layered on a loopback,
  scsi_debug, and file vdev.  This allows for testing of a non-
  partitioned block device (loopback), a partitioned block device
  (scsi_debug), and a file which does not receive udev change
  events.  This provides better test coverage, and by removing
  the layering on ZFS volumes the issues surrounding layering
  one pool on another are avoided.

* zpool_find_vdev_by_physpath() updated to accept a vdev guid.
  This allows matching by guid rather than path, which is a
  more reliable way for the ZED to reference a vdev.

* Fixed zfs_zevent_wait() signal handling which could result
  in the ZED spinning when a signal was not handled.

* Removed vdev_disk_rrpart() functionality, which can be abandoned
  in favor of the kernel-provided blkdev_reread_part() function.

* Added a rwlock which is held as a writer while a disk is being
  reopened.  This is important to prevent errors from occurring
  for any configuration-related IOs which bypass the SCL_ZIO lock.
  The zpool_reopen_007_pos.ksh test case was added to verify IO
  errors are never observed when reopening.  This is not expected
  to impact IO performance.

Additional fixes, which aren't critical, were discovered and
resolved in the course of developing this functionality:

* Added PHYS_PATH="/dev/zvol/dataset" to the vdev configuration for
  ZFS volumes.  This serves as a unique physical path; while the
  volumes are no longer used in the test cases for other reasons,
  this improvement was still included.

Reviewed by: Richard Elling <Richard.Elling@RichardElling.com>
Signed-off-by: Sara Hartse <sara.hartse@delphix.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #120
Closes #2437
Closes #5771
Closes #7366
Closes #7582
Closes #7629
26 files changed:
cmd/zed/agents/zfs_mod.c
config/kernel-blkdev-get.m4 [deleted file]
config/kernel-blkdev-reread-part.m4 [new file with mode: 0644]
config/kernel-get-gendisk.m4 [deleted file]
config/kernel.m4
include/linux/blkdev_compat.h
include/sys/vdev_disk.h
lib/libzfs/libzfs_import.c
lib/libzfs/libzfs_pool.c
module/zfs/fm.c
module/zfs/vdev.c
module/zfs/vdev_disk.c
tests/runfiles/linux.run
tests/test-runner/bin/zts-report.py
tests/zfs-tests/include/blkdev.shlib
tests/zfs-tests/tests/functional/cli_root/zpool_expand/Makefile.am
tests/zfs-tests/tests/functional/cli_root/zpool_expand/setup.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand.cfg
tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_001_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_002_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_003_neg.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_004_pos.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_expand/zpool_expand_005_pos.ksh [new file with mode: 0755]
tests/zfs-tests/tests/functional/cli_root/zpool_reopen/Makefile.am
tests/zfs-tests/tests/functional/cli_root/zpool_reopen/cleanup.ksh
tests/zfs-tests/tests/functional/cli_root/zpool_reopen/zpool_reopen_007_pos.ksh [new file with mode: 0755]