>=17.0.0
+* The `ceph-mgr-modules-core` debian package no longer recommends
+  `ceph-mgr-rook`, because the latter depends on `python3-numpy`, which cannot
+  be imported multiple times in different Python sub-interpreters when the
+  installed `python3-numpy` is older than 1.19. Since `apt-get` installs
+  `Recommends` packages by default, `ceph-mgr-rook` was previously always
+  installed along with the `ceph-mgr` debian package as an indirect dependency.
+  If your workflow depends on this behavior, install `ceph-mgr-rook` explicitly.
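+
+  For example, on an apt-based system (a sketch; adjust to your environment)::
+
+    # install ceph-mgr without any Recommends, so ceph-mgr-rook is not pulled in
+    apt-get install --no-install-recommends ceph-mgr
+
+    # or keep the old behavior by installing the rook module explicitly
+    apt-get install ceph-mgr-rook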
+
* A new library is available, libcephsqlite. It provides a SQLite Virtual File
  System (VFS) on top of RADOS. The database and journals are striped over
  RADOS across multiple objects for virtually unlimited scaling and throughput.
  The library is expected to be most useful for applications that previously
  stored state in RADOS omap, especially without striping, which limits
  scalability.
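
  An illustrative sketch of opening a database through the VFS (``mypool`` and
  ``mydb.db`` are hypothetical names; check the libcephsqlite documentation for
  the exact extension path and URI syntax)::

    sqlite3 -cmd '.load libcephsqlite.so' \
            -cmd '.open file:///mypool:/mydb.db?vfs=ceph'
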
+* MDS upgrades no longer require stopping all standby MDS daemons before
+ upgrading the sole active MDS for a file system.
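+
+  A sketch of the simplified upgrade flow (``cephfs`` is a hypothetical file
+  system name; standby daemons may keep running throughout)::
+
+    ceph fs set cephfs max_mds 1    # still reduce to a single active MDS
+    # upgrade and restart the active MDS daemon here
+    ceph fs set cephfs max_mds 2    # restore the original max_mds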
+
+* RGW: It is now possible to specify SSL options and ciphers for the beast
+  frontend. The default SSL options are "no_sslv2:no_sslv3:no_tlsv1:no_tlsv1_1".
+  To revert to the old behavior, add 'ssl_options=' (empty) to the
+  ``rgw frontends`` configuration.
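+
+  For example (``/etc/ceph/rgw.pem`` is a hypothetical certificate path; the
+  empty ``ssl_options=`` restores the old behavior)::
+
+    rgw frontends = beast ssl_port=443 ssl_certificate=/etc/ceph/rgw.pem ssl_options=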
+
+* fs: A file system can be created with a specific ID ("fscid"). This is useful
+  in certain recovery scenarios, e.g., when the monitor database has been lost
+  and rebuilt, and the restored file system is expected to have the same ID as
+  before.
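+
+  For example, recreating a file system with its previous ID (pool names and
+  the ID are hypothetical; specifying an fscid is assumed to require
+  ``--force``)::
+
+    ceph fs new cephfs cephfs_metadata cephfs_data --fscid 27 --force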
+
+>=16.2.6
+--------
+
+* MGR: The pg_autoscaler has a new default 'scale-down' profile which provides
+  better out-of-the-box performance for new pools in newly created clusters.
+  Existing clusters will retain the old behavior, now called the 'scale-up'
+  profile. For more details, see:
+
+ https://docs.ceph.com/en/latest/rados/operations/placement-groups/
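+
+  A sketch of opting an existing cluster into the new behavior (assuming the
+  ``autoscale-profile`` subcommand described in the documentation linked above;
+  switch back with ``scale-up``)::
+
+    ceph osd pool set autoscale-profile scale-down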
+
>=16.0.0
--------
deprecated and will be removed in a future release. Please use
``nfs cluster rm`` and ``nfs export rm`` instead.
-* mgr-pg_autoscaler: Autoscaler will now start out by scaling each
- pool to have a full complements of pgs from the start and will only
- decrease it when other pools need more pgs due to increased usage.
- This improves out of the box performance of Ceph by allowing more PGs
- to be created for a given pool.
-
* CephFS: Disabling allow_standby_replay on a file system will also stop all
standby-replay daemons for that file system.
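
  For example, disabling standby-replay (``cephfs`` is a hypothetical file
  system name; its standby-replay daemons stop and return to the normal
  standby pool)::

    ceph fs set cephfs allow_standby_replay false
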
CentOS 7.6 and later. To enable older clients, set ``cephx_require_version``
and ``cephx_service_require_version`` config options to 1.
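
  For example, to allow such older clients (a sketch using the two options
  named above)::

    ceph config set mon cephx_require_version 1
    ceph config set mon cephx_service_require_version 1
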
+* rgw: The Civetweb frontend is now deprecated and will be removed in Quincy.
+
>=15.0.0
--------