->=14.2.1
+>=15.2.4
+--------
--------
-* Ceph now packages python bindings for python3.6 instead of
- python3.4, because EPEL7 recently switched from python3.4 to
-  python3.6 as the native python3. See the `announcement <https://lists.fedoraproject.org/archives/list/epel-announce@lists.fedoraproject.org/message/EGUMKAIMPK2UD5VSHXM53BH2MBDGDWMO/>`_
- for more details on the background of this change.
+* Cephadm: There were a lot of small usability improvements and bug fixes:
-* Nautilus-based librbd clients can now open images on Jewel clusters.
+  * Grafana, when deployed by Cephadm, now binds to all network interfaces.
+ * ``cephadm check-host`` now prints all detected problems at once.
+ * Cephadm now calls ``ceph dashboard set-grafana-api-ssl-verify false``
+ when generating an SSL certificate for Grafana.
+  * The Alertmanager is now correctly pointed to the Ceph Dashboard.
+  * ``cephadm adopt`` now supports adopting an Alertmanager.
+  * ``ceph orch ps`` now supports filtering by service name.
+  * ``ceph orch host ls`` now marks hosts as offline if they are not
+ accessible.
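+  * For example, filtering ``ceph orch ps`` output to a single service
+    (the ``--service_name`` flag spelling here is illustrative; check
+    ``ceph orch ps --help`` on your version)::
+
+      ceph orch ps --service_name alertmanager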
-14.2.2
-------
+* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS with
+  a service id of mynfs that will use the RADOS pool nfs-ganesha and the
+  namespace nfs-ns::
-* The no{up,down,in,out} related commands have been revamped.
-  There are now two ways to set the no{up,down,in,out} flags:
- the old 'ceph osd [un]set <flag>' command, which sets cluster-wide flags;
- and the new 'ceph osd [un]set-group <flags> <who>' command,
- which sets flags in batch at the granularity of any crush node,
- or device class.
+ ceph orch apply nfs mynfs nfs-ganesha nfs-ns
-* RGW: radosgw-admin introduces two subcommands that allow
-  managing expire-stale objects that might be left behind after a
- bucket reshard in earlier versions of RGW. One subcommand lists such
- objects and the other deletes them. Read the troubleshooting section
- of the dynamic resharding docs for details.
+* Cephadm: ``ceph orch ls --export`` now returns all service specifications in
+ yaml representation that is consumable by ``ceph orch apply``. In addition,
+ the commands ``orch ps`` and ``orch ls`` now support ``--format yaml`` and
+ ``--format json-pretty``.
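+
+  For example, a sketch of a backup-and-restore round trip built on these
+  commands (``specs.yaml`` is an arbitrary file name)::
+
+    ceph orch ls --export > specs.yaml
+    ceph orch apply -i specs.yaml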
+
+* Cephadm: ``ceph orch apply osd`` supports a ``--preview`` flag that prints a preview of
+ the OSD specification before deploying OSDs. This makes it possible to
+  verify that the specification is correct before applying it.
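+
+  For example (assuming a drive group specification saved in
+  ``osd_spec.yaml``; the file name is illustrative)::
+
+    ceph orch apply osd -i osd_spec.yaml --preview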
+
+* RGW: The ``radosgw-admin`` sub-commands dealing with orphans --
+  ``radosgw-admin orphans find``, ``radosgw-admin orphans finish``,
+  ``radosgw-admin orphans list-jobs`` -- have been deprecated. They have
+ not been actively maintained and they store intermediate results on
+ the cluster, which could fill a nearly-full cluster. They have been
+ replaced by a tool, currently considered experimental,
+ ``rgw-orphan-list``.
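+
+  For example, to scan for orphaned objects in an RGW data pool (the pool
+  name below is the default and may differ in your deployment)::
+
+    rgw-orphan-list default.rgw.buckets.data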
+
+* RBD: The name of the rbd pool object that is used to store
+ rbd trash purge schedule is changed from "rbd_trash_trash_purge_schedule"
+ to "rbd_trash_purge_schedule". Users that have already started using
+ ``rbd trash purge schedule`` functionality and have per pool or namespace
+  schedules configured should copy the "rbd_trash_trash_purge_schedule"
+  object to "rbd_trash_purge_schedule" before the upgrade and remove
+  "rbd_trash_trash_purge_schedule" using the following commands in every RBD
+ pool and namespace where a trash purge schedule was previously
+ configured::
+
+ rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
+ rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule
+
+ or use any other convenient way to restore the schedule after the
+ upgrade.