Starting the upgrade
====================
.. note::
   A `Staggered Upgrade`_ of the mons/mgrs may be necessary to gain access
   to this feature.

   By default, cephadm reduces `max_mds` to `1`. This can be disruptive for
   large-scale CephFS deployments, because the cluster cannot quickly reduce
   the number of active MDS daemons to `1`, and a single active MDS cannot
   easily handle the load of all clients, even for a short time. To upgrade
   the MDS daemons without reducing `max_mds`, set the `fail_fs` option to
   `true` (the default is `false`) before initiating the upgrade:

   .. prompt:: bash #

      ceph config set mgr mgr/orchestrator/fail_fs true

   This would:

   #. Fail the CephFS filesystems, bringing the active MDS daemon(s) to the
      `up:standby` state.

   #. Upgrade the MDS daemons safely.

   #. Bring the CephFS filesystems back up, moving the active MDS daemon(s)
      from `up:standby` to `up:active`.

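Before initiating the upgrade, it can help to confirm that the setting took
effect. A minimal check, using the standard ``ceph config get`` command (the
expected output is the value set above):

.. prompt:: bash #

   ceph config get mgr mgr/orchestrator/fail_fs

This should print ``true``. During the upgrade, ``ceph fs status`` can be
used to watch the MDS daemons pass through ``up:standby`` and return to
``up:active``.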
Before you use cephadm to upgrade Ceph, verify that all hosts are currently
online and that your cluster is healthy by running the following command:

.. prompt:: bash #