Get the replicatable volumes from the snapshot config rather than the
current config, and filter them further to those that will actually be
rolled back.
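The selection described above can be sketched as follows. This is a
Python illustration only; the actual implementation is Perl, and every
name here (volumes_to_prepare, is_replicatable, and the argument shapes)
is hypothetical:

```python
def volumes_to_prepare(snapshot_config, rolled_back_volumes, is_replicatable):
    """Pick the replicatable volumes that the rollback will affect.

    snapshot_config: mapping of volume ID -> volume description, taken
        from the snapshot's config rather than the current config.
    rolled_back_volumes: set of volume IDs that will actually be rolled back.
    is_replicatable: predicate deciding whether a volume can be replicated.
    """
    # Eligible volumes come from the snapshot config; sorting keeps the
    # order deterministic rather than dependent on hash iteration.
    eligible = [vid for vid, vol in sorted(snapshot_config.items())
                if is_replicatable(vol)]
    # Filter further to the volumes the rollback will actually touch.
    return [vid for vid in eligible if vid in rolled_back_volumes]
```

For example, a volume present in the snapshot config but excluded from
the rollback set is skipped, so its replication snapshots are left alone:

```python
snap_cfg = {"scsi0": {"replicate": True}, "scsi1": {"replicate": True}}
volumes_to_prepare(snap_cfg, {"scsi0"}, lambda v: v["replicate"])
# -> ["scsi0"]
```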
Previously, a volume that only had replication snapshots (e.g. because
it was added after the snapshot was taken, or the vmstate volume) would
lose them during rollback. On the next replication run, such a volume
would lead to an error, because replication would attempt a full sync,
but the target volume still exists.
This is not a complete fix. It is still possible to run into problems:
- by removing the last (non-replication) snapshots after a rollback
before replication can run once.
- by creating a snapshot and performing a rollback before replication
  can run once.
The list of volumes is not required to be sorted for prepare(), but it
is now sorted in the order foreach_volume() iterates, so it is
deterministic rather than random.