->= 12.1.2
----------
-* When running 'df' on a CephFS filesystem comprising exactly one data pool,
- the result now reflects the file storage space used and available in that
- data pool (fuse client only).
-* Added new commands "pg force-recovery" and "pg-force-backfill". Use them
- to boost recovery or backfill priority of specified pgs, so they're
- recovered/backfilled before any other. Note that these commands don't
- interrupt ongoing recovery/backfill, but merely queue specified pgs
- before others so they're recovered/backfilled as soon as possible.
- New commands "pg cancel-force-recovery" and "pg cancel-force-backfill"
- restore default recovery/backfill priority of previously forced pgs.
-
-
-12.2.1
-------
-
-* Clusters will need to upgrade to 12.2.1 before upgrading to any
- Mimic 13.y.z version (either a development release or an eventual
- stable Mimic release).
-
-- *CephFS*:
-
- * Limiting MDS cache via a memory limit is now supported using the new
- mds_cache_memory_limit config option (1GB by default). A cache reservation
- can also be specified using mds_cache_reservation as a percentage of the
- limit (5% by default). Limits by inode count are still supported using
- mds_cache_size. Setting mds_cache_size to 0 (the default) disables the
- inode limit.
-
-* The maximum number of PGs per OSD before the monitor issues a
- warning has been reduced from 300 to 200 PGs. 200 is still twice
- the generally recommended target of 100 PGs per OSD. This limit can
- be adjusted via the ``mon_max_pg_per_osd`` option on the
- monitors. The older ``mon_pg_warn_max_per_osd`` option has been removed.
-
-* Creating pools or adjusting pg_num will now fail if the change would
- make the number of PGs per OSD exceed the configured
- ``mon_max_pg_per_osd`` limit. The option can be adjusted if it
- is really necessary to create a pool with more PGs.
+>= 15.2.4
+---------
+
+* Cephadm: This release includes a number of small usability improvements
+  and bug fixes:
+
+  * Grafana, when deployed by Cephadm, now binds to all network interfaces.
+  * ``cephadm check-host`` now prints all detected problems at once.
+  * Cephadm now calls ``ceph dashboard set-grafana-api-ssl-verify false``
+    when generating an SSL certificate for Grafana.
+  * The Alertmanager is now correctly pointed to the Ceph Dashboard.
+  * ``cephadm adopt`` now supports adopting an Alertmanager.
+  * ``ceph orch ps`` now supports filtering by service name (see the example
+    after this list).
+  * ``ceph orch host ls`` now marks hosts as offline if they are not
+    accessible.
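+
+  As an illustration of the new ``ceph orch ps`` filtering, the daemons of a
+  single service can be listed directly. The flag spelling and the service
+  name ``nfs.mynfs`` below are assumptions, not part of this changelog::
+
+    # list only the daemons that belong to one service (flag name assumed)
+    ceph orch ps --service_name nfs.mynfs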
+
+* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS with
+  a service id of ``mynfs`` that uses the RADOS pool ``nfs-ganesha`` and the
+  namespace ``nfs-ns``::
+
+ ceph orch apply nfs mynfs nfs-ganesha nfs-ns
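+
+  The backing RADOS pool may need to be created beforehand; a minimal
+  sketch, assuming the pool ``nfs-ganesha`` does not exist yet::
+
+    # create the backing pool first, then deploy the NFS service into it
+    ceph osd pool create nfs-ganesha
+    ceph orch apply nfs mynfs nfs-ganesha nfs-ns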
+
+* Cephadm: ``ceph orch ls --export`` now returns all service specifications in
+  a YAML representation that is consumable by ``ceph orch apply``. In addition,
+  the commands ``orch ps`` and ``orch ls`` now support ``--format yaml`` and
+  ``--format json-pretty``.
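+
+  This makes an export-and-reapply round trip possible. The file name
+  ``specs.yaml`` is arbitrary, and feeding the exported file back through
+  ``ceph orch apply -i`` is assumed to work here::
+
+    # dump all service specs to a file, then apply them back
+    ceph orch ls --export > specs.yaml
+    ceph orch apply -i specs.yaml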
+
+* Cephadm: ``ceph orch apply osd`` supports a ``--preview`` flag that prints a
+  preview of the OSD specification before deploying OSDs. This makes it
+  possible to verify that the specification is correct before applying it.
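+
+  For instance, with a hypothetical OSD specification stored in
+  ``osd_spec.yaml``::
+
+    # show which OSDs would be created, without actually deploying them
+    ceph orch apply osd -i osd_spec.yaml --preview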
+
+* RGW: The ``radosgw-admin`` sub-commands dealing with orphans --
+  ``radosgw-admin orphans find``, ``radosgw-admin orphans finish``, and
+  ``radosgw-admin orphans list-jobs`` -- have been deprecated. They have
+  not been actively maintained, and they store intermediate results on
+  the cluster, which could fill a nearly-full cluster. They have been
+  replaced by a tool, currently considered experimental,
+  ``rgw-orphan-list``.
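+
+  A minimal invocation sketch, assuming the RGW data pool is named
+  ``default.rgw.buckets.data``; unlike the deprecated sub-commands, the tool
+  writes its results locally instead of storing them on the cluster::
+
+    # scan the given data pool for RADOS objects no longer referenced by RGW
+    rgw-orphan-list default.rgw.buckets.data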
+
+* RBD: The name of the rbd pool object that is used to store
+  rbd trash purge schedules has been changed from "rbd_trash_trash_purge_schedule"
+  to "rbd_trash_purge_schedule". Users that have already started using the
+  ``rbd trash purge schedule`` functionality and have per-pool or per-namespace
+  schedules configured should copy the "rbd_trash_trash_purge_schedule"
+  object to "rbd_trash_purge_schedule" before the upgrade and remove
+  "rbd_trash_trash_purge_schedule" using the following commands in every RBD
+  pool and namespace where a trash purge schedule was previously
+  configured::
+
+ rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
+ rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule
+
+ or use any other convenient way to restore the schedule after the
+ upgrade.
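+
+  One such way is to simply re-create the schedule once the upgrade is
+  complete; the 24-hour interval below is only an illustrative value::
+
+    # re-add a daily trash purge schedule (interval chosen as an example)
+    rbd trash purge schedule add -p <pool-name> [--namespace <namespace>] 24h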