-14.2.2
-------
-
-* The no{up,down,in,out} related commands have been revamped.
- There are now two ways to set the no{up,down,in,out} flags:
- the old 'ceph osd [un]set <flag>' command, which sets cluster-wide flags;
- and the new 'ceph osd [un]set-group <flags> <who>' command,
- which sets flags in batch at the granularity of any CRUSH node
- or device class, as sketched below.
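-
- For example, assuming an OSD numbered 0 and a CRUSH host bucket
- named host1 (both hypothetical names), flags can be set and cleared
- in batch with::
-
- # set noup and noout on a single OSD
- ceph osd set-group noup,noout osd.0
- # set noout on all OSDs under the CRUSH host bucket host1
- ceph osd set-group noout host1
- # clear the flags again
- ceph osd unset-group noup,noout osd.0
- ceph osd unset-group noout host1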
-
-* RGW: radosgw-admin introduces two subcommands for managing
- expire-stale objects that might be left behind after a bucket
- reshard in earlier versions of RGW. One subcommand lists such
- objects and the other deletes them; see the sketch below. Read the
- troubleshooting section of the dynamic resharding docs for details.
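-
- A minimal sketch of both invocations, assuming the subcommand names
- documented in the dynamic resharding troubleshooting section and a
- placeholder bucket name::
-
- # list stale expire-tagged objects left behind by an earlier reshard
- radosgw-admin objects expire-stale list --bucket <bucket-name>
- # remove them once the listing has been reviewed
- radosgw-admin objects expire-stale rm --bucket <bucket-name>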
-
-14.2.5
-------
-
-* The telemetry module now has a 'device' channel, enabled by default, that
- will report anonymized hard disk and SSD health metrics to telemetry.ceph.com
- in order to build and improve device failure prediction algorithms. Because
- the content of telemetry reports has changed, you will need to re-opt-in
- with::
-
- ceph telemetry on
-
- You can first view exactly what information will be reported with::
-
- ceph telemetry show
- ceph telemetry show device # specifically show the device channel
-
- If you are not comfortable sharing device metrics, you can disable that
- channel first before re-opting-in::
-
- ceph config set mgr mgr/telemetry/channel_device false
- ceph telemetry on
-
-* The telemetry module now reports more information about CephFS file systems,
- including:
-
- - how many MDS daemons (in total and per file system)
- - which features are (or have been) enabled
- - how many data pools
- - approximate file system age (year + month of creation)
- - how many files, bytes, and snapshots
- - how much metadata is being cached
-
- We have also added:
-
- - which Ceph release the monitors are running
- - whether msgr v1 or v2 addresses are used for the monitors
- - whether IPv4 or IPv6 addresses are used for the monitors
- - whether RADOS cache tiering is enabled (and which mode)
- - whether pools are replicated or erasure coded, and
- which erasure code profile plugin and parameters are in use
- - how many hosts are in the cluster, and how many hosts have each type of daemon
- - whether a separate OSD cluster network is being used
- - how many RBD pools and images are in the cluster, and how many pools have RBD mirroring enabled
- - how many RGW daemons, zones, and zonegroups are present; which RGW frontends are in use
- - aggregate stats about the CRUSH map, like which algorithms are used, how big buckets are, how many rules are defined, and what tunables are in use
-
- If you had telemetry enabled, you will need to re-opt-in with::
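-
- ceph telemetry on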