From: Stoiko Ivanov
Date: Fri, 19 Feb 2021 12:45:43 +0000 (+0100)
Subject: zfspoolplugin: check if mounted instead of imported
X-Git-Url: https://git.proxmox.com/?p=pve-storage.git;a=commitdiff_plain;h=56a7637adbb76cde8364d8786faa7d8166255dcd

zfspoolplugin: check if mounted instead of imported

This patch addresses an issue we recently saw on a production machine:
* after booting, a ZFS pool failed to get imported (due to an empty
  /etc/zfs/zpool.cache)
* pvestatd/guest-startall eventually tried to import the pool
* the pool was imported, yet the datasets of the pool remained unmounted

A bit of debugging showed that `zpool import <pool>` is not atomic: in
fact it does fork+exec `mount` with appropriate parameters. If an import
ran longer than the hardcoded timeout of 15s, it could happen that the
pool got imported, but the zpool command (and its forks) got terminated
due to timing out.

Reproducing this is straightforward by setting (drastic) bw+iops limits
on a guest's disk (which contains a zpool), e.g.:
`qm set 100 -scsi1 wd:vm-100-disk-1,iops_rd=10,iops_rd_max=20,\
iops_wr=15,iops_wr_max=20,mbps_rd=10,mbps_rd_max=15,mbps_wr=10,\
mbps_wr_max=15`
Afterwards, running `timeout 15 zpool import <pool>` resulted in that
situation in the guest on my machine.

The patch changes the check in activate_storage for the ZFSPoolPlugin
to check whether any dataset below the 'pool' (which can also be a
sub-dataset) is mounted, by parsing /proc/mounts:
* this is cheaper than running `zfs get` or `zpool list`
* it catches a properly imported and mounted pool in case the
  root-dataset has 'canmount' set to off (or noauto), as long as any
  dataset below it is mounted

After trying to import the pool, we also run `zfs mount -a` (in case
the pool was imported, but its datasets were left unmounted).
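As an illustration only (not part of the patch, which is Perl), the
/proc/mounts check described above can be sketched in Python. The
function name and the octal-escape decoding are assumptions standing in
for what PVE::ProcFSTools provides:

```python
import re


def dataset_mounted(dataset, proc_mounts_text):
    """Return True if `dataset`, or any ZFS dataset below it, shows up
    as a mounted zfs filesystem in the given /proc/mounts contents."""
    for line in proc_mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue
        what, _mountpoint, fstype = fields[0], fields[1], fields[2]
        if fstype != 'zfs':
            continue
        # /proc/mounts escapes special characters as octal, e.g. '\040'
        # for a space; decode before comparing against the config value
        what = re.sub(r'\\([0-7]{3})',
                      lambda m: chr(int(m.group(1), 8)), what)
        # match the storage's root dataset or any child dataset below it
        if what == dataset or what.startswith(dataset + '/'):
            return True
    return False
```

Matching any child dataset is what makes the check tolerate a
root-dataset with 'canmount=off': the root itself never appears in
/proc/mounts, but its mounted children do.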
Potential for regression:
* running `zfs mount -a` is problematic if a dataset was manually
  unmounted after booting (without setting 'canmount')
* a pool without any mounted dataset (no mountpoint property set and
  only zvols) will result in repeated calls to `zfs mount -a`

Both of the above seem unlikely and should not occur when using our
tooling.

Signed-off-by: Stoiko Ivanov
---

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index be7b1b9..a524515 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -7,6 +7,7 @@
 use IO::File;
 use Net::IP;
 use POSIX;
+use PVE::ProcFSTools;
 use PVE::RPCEnvironment;
 use PVE::Storage::Plugin;
 use PVE::Tools qw(run_command);
@@ -525,8 +526,26 @@ sub activate_storage {
     my ($class, $storeid, $scfg, $cache) = @_;
 
     # Note: $scfg->{pool} can include dataset <pool>/<dataset>
-    my $pool = $scfg->{pool};
-    $pool =~ s!/.*$!!;
+    my $dataset = $scfg->{pool};
+    my $pool = ($dataset =~ s!/.*$!!r);
+
+    my $dataset_mounted = sub {
+        my $mounted = 0;
+        my $dataset_dec = PVE::ProcFSTools::decode_mount($dataset);
+        my $mounts = eval { PVE::ProcFSTools::parse_proc_mounts() };
+        warn "$@\n" if $@;
+        foreach my $mp (@$mounts) {
+            my ($what, $dir, $fs) = @$mp;
+            next if $fs ne 'zfs';
+            # check for root-dataset of storage or any child-dataset.
+            # root-dataset could have 'canmount=off'. If any child is mounted
+            # heuristically assume that `zfs mount -a` was successful
+            next if $what !~ m!^$dataset_dec(?:/|$)!;
+            $mounted = 1;
+            last;
+        }
+        return $mounted;
+    };
 
     my $pool_imported = sub {
         my @param = ('-o', 'name', '-H', $pool);
@@ -536,7 +555,7 @@ sub activate_storage {
         return defined($res) && $res =~ m/$pool/;
     };
 
-    if (!$pool_imported->()) {
+    if (!$dataset_mounted->()) {
         # import can only be done if not yet imported!
         my @param = ('-d', '/dev/disk/by-id/', '-o', 'cachefile=none', $pool);
         eval { $class->zfs_request($scfg, undef, 'zpool_import', @param) };
@@ -544,6 +563,8 @@ sub activate_storage {
             # just could've raced with another import, so recheck if it is imported
             die "could not activate storage '$storeid', $err\n" if !$pool_imported->();
         }
+        eval { $class->zfs_request($scfg, undef, 'mount', '-a') };
+        die "could not activate storage '$storeid', $@\n" if $@;
     }
     return 1;
 }
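For clarity, the resulting control flow of activate_storage can be
rendered as a small Python sketch with injected callbacks. All names
here are illustrative, not the PVE API; the point is the ordering:
skip everything when a dataset is already mounted, tolerate an import
race, and always finish with `zfs mount -a`:

```python
def activate_storage(dataset, dataset_mounted, pool_imported, run_cmd):
    """Sketch of the patched activation logic (illustrative names).

    dataset_mounted / pool_imported are zero-argument predicates,
    run_cmd executes an external command and raises on failure."""
    # the configured 'pool' can include a dataset; the pool name is
    # everything up to the first '/'
    pool = dataset.split('/', 1)[0]
    if not dataset_mounted():
        try:
            # import can only be done if not yet imported
            run_cmd(['zpool', 'import', '-d', '/dev/disk/by-id/',
                     '-o', 'cachefile=none', pool])
        except Exception as err:
            # could have raced with another import; recheck before dying
            if not pool_imported():
                raise RuntimeError(f"could not activate storage: {err}")
        # mount datasets even if a timed-out import left them unmounted
        run_cmd(['zfs', 'mount', '-a'])
    return True
```

Note that `zfs mount -a` runs on every activation where no dataset is
mounted yet, which is exactly the "repeated calls" regression caveat
mentioned above for zvol-only pools.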