->=15.2.4
+>=15.2.5
--------
-* Cephadm: There were a lot of small usability improvements and bug fixes:
+* CephFS: Automatic static subtree partitioning policies may now be configured
+ using the new distributed and random ephemeral pinning extended attributes on
+ directories. See the documentation for more information:
+ https://docs.ceph.com/docs/master/cephfs/multimds/
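+
+ As a brief sketch (the mount point and directory names here are only
+ illustrative), the new policies are set via extended attributes on a
+ mounted file system, per the linked documentation::
+
+   setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/home
+   setfattr -n ceph.dir.pin.random -v 0.01 /mnt/cephfs/tmp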
- * Grafana when deployed by Cephadm now binds to all network interfaces.
- * ``cephadm check-host`` now prints all detected problems at once.
- * Cephadm now calls ``ceph dashboard set-grafana-api-ssl-verify false``
- when generating an SSL certificate for Grafana.
- * The Alertmanager is now correctly pointed to the Ceph Dashboard.
- * ``cephadm adopt`` now supports adopting an Alertmanager.
- * ``ceph orch ps`` now supports filtering by service name.
- * ``ceph orch host ls`` now marks hosts as offline if they are not
- accessible.
+* Monitors now have a config option ``mon_osd_warn_num_repaired``, 10 by
+ default. If any OSD has repaired more than this many I/O errors in stored
+ data, an ``OSD_TOO_MANY_REPAIRS`` health warning is generated.
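+
+ For example, to raise the threshold to a hypothetical value of 50 (any
+ value suiting the cluster works)::
+
+   ceph config set mon mon_osd_warn_num_repaired 50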
-* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS
- with a service id of mynfs that will use the RADOS pool nfs-ganesha and
- namespace nfs-ns::
-
- ceph orch apply nfs mynfs nfs-ganesha nfs-ns
-
-* Cephadm: ``ceph orch ls --export`` now returns all service specifications
- in a yaml representation that is consumable by ``ceph orch apply``. In
- addition, the commands ``orch ps`` and ``orch ls`` now support
- ``--format yaml`` and ``--format json-pretty``.
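-
- As a sketch of one possible round trip (the file name is illustrative)::
-
-   ceph orch ls --export > specs.yaml
-   ceph orch apply -i specs.yaml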
-
-* Cephadm: ``ceph orch apply osd`` supports a ``--preview`` flag that prints
- a preview of the OSD specification before deploying OSDs. This makes it
- possible to verify that the specification is correct before applying it.
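-
- One possible workflow, assuming an OSD specification in a file named
- osd_spec.yaml (the file name is illustrative)::
-
-   ceph orch apply osd -i osd_spec.yaml --preview
-   ceph orch apply osd -i osd_spec.yaml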
-
-* RGW: The ``radosgw-admin`` sub-commands dealing with orphans --
- ``radosgw-admin orphans find``, ``radosgw-admin orphans finish``,
- and ``radosgw-admin orphans list-jobs`` -- have been deprecated. They have
- not been actively maintained and they store intermediate results on
- the cluster, which could fill a nearly-full cluster. They have been
- replaced by a tool, currently considered experimental,
- ``rgw-orphan-list``.
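-
- The replacement tool is run against an RGW data pool, for example
- (assuming the default pool name)::
-
-   rgw-orphan-list default.rgw.buckets.data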
-
-* RBD: The name of the rbd pool object that is used to store the
- rbd trash purge schedule has changed from "rbd_trash_trash_purge_schedule"
- to "rbd_trash_purge_schedule". Users that have already started using the
- ``rbd trash purge schedule`` functionality and have per-pool or per-namespace
- schedules configured should copy the "rbd_trash_trash_purge_schedule"
- object to "rbd_trash_purge_schedule" before the upgrade and remove
- "rbd_trash_trash_purge_schedule" using the following commands in every RBD
- pool and namespace where a trash purge schedule was previously
- configured::
-
- rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
- rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule
-
- or use any other convenient way to restore the schedule after the
- upgrade.
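-
- After the upgrade, the copy can be verified with, e.g.::
-
-   rados -p <pool-name> [-N namespace] ls | grep purge_schedule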
+* Now, when the noscrub and/or nodeep-scrub flags are set globally or per
+ pool, scheduled scrubs of the disabled type will be aborted. All
+ user-initiated scrubs are NOT interrupted.
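+
+ For reference, the flags in question are set globally or per pool with
+ commands like the following (the pool name is illustrative)::
+
+   ceph osd set noscrub
+   ceph osd pool set mypool nodeep-scrub 1
+
+ whereas a user-initiated scrub such as ``ceph pg deep-scrub <pgid>`` still
+ runs to completion.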