Use 'zpool reopen' instead of 'zpool scrub' to kick in the spare device:
this is required to avoid spurious failures caused by a race condition
in event processing by the ZFS Event Daemon (ZED):
P1 (zpool scrub)                             P2 (zed)
---                                          ---
zfs_ioc_pool_scan()
 -> dsl_scan()
  -> vdev_reopen()
   -> vdev_set_state(VDEV_STATE_CANT_OPEN)
                                             zfs_ioc_vdev_attach()
                                              -> spa_vdev_attach()
                                               -> dsl_resilver_restart()
                                                -> dsl_sync_task()
                                                 -> dsl_scan_setup_check()
                                                 <- dsl_scan_setup_check(): EBUSY
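The interleaving above can be sketched as a toy model (a minimal Python
simulation; the class and method names are illustrative and are not the real
ZFS implementation): the scrub starts a scan before reopening vdevs, so when
ZED reacts to the fault and tries to attach the spare, the resilver's setup
check sees a scan already in progress and fails with EBUSY. A plain reopen
surfaces the fault without starting a scan, so the spare attach succeeds.

```python
import errno

class PoolModel:
    """Toy model of the race; not the actual ZFS code paths."""
    def __init__(self):
        self.scan_in_progress = False

    def zpool_scrub(self):
        # P1: zfs_ioc_pool_scan() -> dsl_scan() starts a scan, then
        # vdev_reopen() notices the injected fault on the device.
        self.scan_in_progress = True
        return "VDEV_STATE_CANT_OPEN"

    def zpool_reopen(self):
        # Reopening vdevs surfaces the fault without starting a scan.
        return "VDEV_STATE_CANT_OPEN"

    def zed_attach_spare(self):
        # P2: zfs_ioc_vdev_attach() -> dsl_resilver_restart()
        # -> dsl_scan_setup_check() refuses to start a second scan.
        if self.scan_in_progress:
            return errno.EBUSY  # spare never kicks in, test flakes
        return 0

# Racy sequence: scrub wins, ZED's spare attach gets EBUSY.
racy = PoolModel()
racy.zpool_scrub()
print(racy.zed_attach_spare() == errno.EBUSY)  # True

# Fixed sequence: reopen starts no scan, spare attach succeeds.
fixed = PoolModel()
fixed.zpool_reopen()
print(fixed.zed_attach_spare() == 0)           # True
```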
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: loli10K <ezomori.nozomu@gmail.com>
Closes #7247
Closes #7342
# 2.1 Fault a device, verify the spare is kicked in
log_must zinject -d $FAULT_DEV -e nxio -T all -f 100 $TESTPOOL
- log_must zpool scrub $TESTPOOL
+ log_must zpool reopen $TESTPOOL
log_must wait_vdev_state $TESTPOOL $FAULT_DEV "UNAVAIL" 60
log_must wait_vdev_state $TESTPOOL $SPARE_DEV1 "ONLINE" 60
log_must wait_hotspare_state $TESTPOOL $SPARE_DEV1 "INUSE"