>=17.2.4
--------

* CephFS: The 'AT_NO_ATTR_SYNC' macro is deprecated; please use the standard
  'AT_STATX_DONT_SYNC' macro instead. The 'AT_NO_ATTR_SYNC' macro will be
  removed in the future.

* OSD: The issue of high CPU utilization during recovery/backfill operations
  has been fixed. For more details, see: https://tracker.ceph.com/issues/56530.

>=17.2.1
--------

13 * The "BlueStore zero block detection" feature (first introduced to Quincy in
14 https://github.com/ceph/ceph/pull/43337) has been turned off by default with a
15 new global configuration called `bluestore_zero_block_detection`. This feature,
16 intended for large-scale synthetic testing, does not interact well with some RBD
17 and CephFS features. Any side effects experienced in previous Quincy versions
18 would no longer occur, provided that the configuration remains set to false.
19 Relevant tracker: https://tracker.ceph.com/issues/55521
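  To confirm the feature stays disabled (the ``osd`` target below is only an
  illustration), the option can be inspected and pinned with::

    ceph config get osd bluestore_zero_block_detection
    ceph config set global bluestore_zero_block_detection false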

* telemetry: Added new Rook metrics to the 'basic' channel to report Rook's
  version, Kubernetes version, node metrics, etc.
  See a sample report with `ceph telemetry preview`.
  Opt-in with `ceph telemetry on`.

  For more details, see:

  https://docs.ceph.com/en/latest/mgr/telemetry/

>=17.0.0
--------

* Filestore has been deprecated in Quincy, considering that BlueStore has been
  the default objectstore for quite some time.

* A critical bug in the OMAP format upgrade has been fixed. It could cause data
  corruption (improperly formatted OMAP keys) after a pre-Pacific cluster
  upgrade if the bluestore-quick-fix-on-mount parameter is set to true or the
  ceph-bluestore-tool's quick-fix/repair commands are invoked.
  Relevant tracker: https://tracker.ceph.com/issues/53062

* The `ceph-mgr-modules-core` debian package does not recommend `ceph-mgr-rook`
  anymore, because the latter depends on `python3-numpy`, which cannot be
  imported multiple times in different Python sub-interpreters if the version
  of `python3-numpy` is older than 1.19. Since `apt-get` installs the
  `Recommends` packages by default, `ceph-mgr-rook` was always installed along
  with the `ceph-mgr` debian package as an indirect dependency. If your
  workflow depends on this behavior, you might want to install `ceph-mgr-rook`
  separately.

* The "kvs" Ceph object class is not packaged anymore. The "kvs" object class
  offers a distributed flat b-tree key-value store implemented on top of the
  omap of librados objects. Because we don't have existing internal users of
  this object class, it is not packaged anymore.

* A new library is available, libcephsqlite. It provides a SQLite Virtual File
  System (VFS) on top of RADOS. The database and journals are striped over
  RADOS across multiple objects for virtually unlimited scaling and throughput
  limited only by the SQLite client. Applications using SQLite may switch to
  the Ceph VFS with minimal changes, usually just by specifying the alternate
  VFS. We expect the library to be most impactful and useful for applications
  that were storing state in RADOS omap, especially without striping, which
  limits scalability.
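  As a rough illustration only (the pool name ``mypool`` and database name
  ``test.db`` are placeholders; see the libcephsqlite documentation for the
  authoritative URI syntax), opening a database stored in RADOS might look
  like::

    ceph osd pool create mypool
    sqlite3 -cmd '.load libcephsqlite.so' -cmd '.open file:///mypool:/test.db?vfs=ceph'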

* The ``device_health_metrics`` pool has been renamed ``.mgr``. It is now
  used as a common store for all ``ceph-mgr`` modules.

* fs: A file system can be created with a specific ID ("fscid"). This is useful
  in certain recovery scenarios, e.g., monitor database lost and rebuilt, and
  the restored file system is expected to have the same ID as before.

* fs: A file system can be renamed using the `fs rename` command. Any cephx
  credentials authorized for the old file system name will need to be
  reauthorized to the new file system name. Since the operations of the clients
  using these re-authorized IDs may be disrupted, this command requires the
  "--yes-i-really-mean-it" flag. Also, mirroring is expected to be disabled
  on the file system.

* fs: A FS volume can be renamed using the `fs volume rename` command. Any cephx
  credentials authorized for the old volume name will need to be reauthorized to
  the new volume name. Since the operations of the clients using these re-authorized
  IDs may be disrupted, this command requires the "--yes-i-really-mean-it" flag. Also,
  mirroring is expected to be disabled on the file system.
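  For example (the file system and volume names are illustrative)::

    ceph fs rename myfs myfs_new --yes-i-really-mean-it
    ceph fs volume rename myvol myvol_new --yes-i-really-mean-it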

* MDS upgrades no longer require stopping all standby MDS daemons before
  upgrading the sole active MDS for a file system.

* RGW: RGW now supports rate limiting by user and/or by bucket.
  With this feature it is possible to limit the number of operations and/or
  the number of bytes per minute delivered for a user and/or a bucket.
  This feature allows the admin to limit only READ operations, only WRITE
  operations, or both.
  The rate limiting configuration can also be applied to all users and all
  buckets by using the global configuration.
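  For illustration (the uid and values are placeholders, and the exact flag
  names are documented with the rate limit feature), a per-user limit might be
  configured with::

    radosgw-admin ratelimit set --ratelimit-scope=user --uid=testuser --max-read-ops=1024
    radosgw-admin ratelimit enable --ratelimit-scope=user --uid=testuser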

* RGW: `radosgw-admin realm delete` has been renamed to `radosgw-admin realm rm`.
  This is consistent with the help message.

* OSD: Ceph now uses mclock_scheduler for BlueStore OSDs as its default osd_op_queue
  to provide QoS. The 'mclock_scheduler' is not supported for Filestore OSDs.
  Therefore, the default 'osd_op_queue' is set to 'wpq' for Filestore OSDs
  and is enforced even if the user attempts to change it. For more details on
  configuring mclock, see:

  https://docs.ceph.com/en/latest/rados/configuration/mclock-config-ref/
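
  For example, the active queue and the mclock profile (one of the documented
  built-in profiles is shown) can be inspected or changed with::

    ceph config get osd osd_op_queue
    ceph config set osd osd_mclock_profile high_client_ops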

* CephFS: Failure to replay the journal by a standby-replay daemon will now
  cause the rank to be marked damaged.

* RGW: S3 bucket notification events now contain an `eTag` key instead of `etag`,
  and eventName values no longer carry the `s3:` prefix, fixing deviations from
  the message format observed on AWS.

* RGW: It is now possible to specify SSL options and ciphers for the beast
  frontend. The default SSL options are set to
  "no_sslv2:no_sslv3:no_tlsv1:no_tlsv1_1". If you want to revert to the old
  behavior, add 'ssl_options=' (empty) to the ``rgw frontends`` configuration.
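  For example (the certificate path and port are illustrative)::

    rgw frontends = beast ssl_port=443 ssl_certificate=/etc/ceph/rgw.pem ssl_options=no_sslv2:no_sslv3:no_tlsv1:no_tlsv1_1
    rgw frontends = beast ssl_port=443 ssl_certificate=/etc/ceph/rgw.pem ssl_options=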

* RGW: The behavior for Multipart Upload was modified so that only the
  CompleteMultipartUpload notification is sent at the end of the multipart
  upload. The POST notification at the beginning of the upload and the PUT
  notifications that were sent on each part are no longer sent.

* MGR: The pg_autoscaler has a new 'scale-down' profile which provides more
  performance from the start for new pools. However, the module keeps its
  old behavior by default, now called the 'scale-up' profile.
  For more details, see:

  https://docs.ceph.com/en/latest/rados/operations/placement-groups/

* MGR: The pg_autoscaler can now be turned `on` and `off` globally
  with the `noautoscale` flag. By default this flag is unset and
  the default pg_autoscale mode remains the same.
  For more details, see:

  https://docs.ceph.com/en/latest/rados/operations/placement-groups/
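
  For example, the flag can be set, unset, and queried with::

    ceph osd pool set noautoscale
    ceph osd pool unset noautoscale
    ceph osd pool get noautoscale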

* The ``ceph pg dump`` command now prints three additional columns:
  `LAST_SCRUB_DURATION` shows the duration (in seconds) of the last completed scrub;
  `SCRUB_SCHEDULING` conveys whether a PG is scheduled to be scrubbed at a specified
  time, queued for scrubbing, or being scrubbed;
  `OBJECTS_SCRUBBED` shows the number of objects scrubbed in a PG after scrub begins.

* A health warning will now be reported if the ``require-osd-release`` flag is not
  set to the appropriate release after a cluster upgrade.
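  For example, after completing an upgrade to Quincy::

    ceph osd require-osd-release quincy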

* LevelDB support has been removed. ``WITH_LEVELDB`` is no longer a supported
  build option.

* MON/MGR: Pools can now be created with the `--bulk` flag. Any pools created
  with `bulk` will use a profile of the `pg_autoscaler` that provides more
  performance from the start. However, any pools created without the `--bulk`
  flag will keep their old behavior by default. For more details, see:

  https://docs.ceph.com/en/latest/rados/operations/placement-groups/
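
  For example (the pool name is illustrative), a pool can be created with the
  flag, or the flag can be toggled later::

    ceph osd pool create mypool --bulk
    ceph osd pool set mypool bulk true
    ceph osd pool get mypool bulk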

* Cephadm: ``osd_memory_target_autotune`` will be enabled by default, which will set
  ``mgr/cephadm/autotune_memory_target_ratio`` to ``0.7`` of total RAM. This will be
  unsuitable for hyperconverged infrastructures. For hyperconverged Ceph, please refer
  to the documentation or set ``mgr/cephadm/autotune_memory_target_ratio`` to ``0.2``.
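  For example, for a hyperconverged deployment::

    ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2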

* telemetry: Improved the opt-in flow so that users can keep sharing the same
  data, even when new data collections are available. A new 'perf' channel
  that collects various performance metrics is now available to opt in to with:
  `ceph telemetry on`
  `ceph telemetry enable channel perf`
  See a sample report with `ceph telemetry preview`.
  For more details, see:

  https://docs.ceph.com/en/latest/mgr/telemetry/

* MGR: The progress module disables the pg recovery event by default
  since the event is expensive and has disrupted other services when
  OSDs are being marked in/out of the cluster. However,
  the user may still enable this event anytime. For more details, see:

  https://docs.ceph.com/en/latest/mgr/progress/
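
  If the progress module exposes the event through an ``allow_pg_recovery_event``
  module option (an assumption here; see the linked documentation for the exact
  name), it could be re-enabled with something like::

    ceph config set mgr mgr/progress/allow_pg_recovery_event true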

>=16.0.0
--------

* mgr/nfs: The ``nfs`` module has been moved out of the volumes plugin. Before
  using the ``ceph nfs`` commands, the ``nfs`` mgr module must be enabled.
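  For example::

    ceph mgr module enable nfs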

* volumes/nfs: The ``cephfs`` cluster type has been removed from the
  ``nfs cluster create`` subcommand. Clusters deployed by cephadm can
  support an NFS export of both ``rgw`` and ``cephfs`` from a single
  NFS cluster instance.

* The ``nfs cluster update`` command has been removed. You can modify
  the placement of an existing NFS service (and/or its associated
  ingress service) using ``orch ls --export`` and ``orch apply -i
  ...``.
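  For example (the service name is illustrative), export the current spec,
  edit it, and re-apply it::

    ceph orch ls nfs --export > nfs.yaml
    ceph orch apply -i nfs.yaml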

* The ``orch apply nfs`` command no longer requires a pool or
  namespace argument. We strongly encourage users to use the defaults
  so that the ``nfs cluster ls`` and related commands will work
  properly.

* The ``nfs cluster delete`` and ``nfs export delete`` commands are
  deprecated and will be removed in a future release. Please use
  ``nfs cluster rm`` and ``nfs export rm`` instead.

* The ``nfs export create`` CLI arguments have changed, with the
  *fsname* or *bucket-name* argument position moving to the right of
  the *cluster-id* and *pseudo-path*. Consider transitioning to
  using named arguments instead of positional arguments (e.g., ``ceph
  nfs export create cephfs --cluster-id mycluster --pseudo-path /foo
  --fsname myfs`` instead of ``ceph nfs export create cephfs
  mycluster /foo myfs``) to ensure correct behavior with any
  version.

* mgr-pg_autoscaler: The autoscaler will now start out by scaling each
  pool to have a full complement of PGs from the start and will only
  decrease it when other pools need more PGs due to increased usage.
  This improves the out-of-the-box performance of Ceph by allowing more
  PGs to be created for a given pool.

* CephFS: Disabling allow_standby_replay on a file system will also stop all
  standby-replay daemons for that file system.
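  For example (the file system name is illustrative)::

    ceph fs set myfs allow_standby_replay false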

* A new bluestore_rocksdb_options_annex config parameter complements
  bluestore_rocksdb_options and allows setting RocksDB options without
  repeating the existing defaults.
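  For example (the RocksDB option shown is only an illustration of the syntax)::

    ceph config set osd bluestore_rocksdb_options_annex "compaction_readahead_size=2097152"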

* The MDS in Pacific makes backwards-incompatible changes to the on-RADOS
  metadata structures, which prevent a downgrade to older releases
  (to Octopus and older).

* $pid expansion in config paths like `admin_socket` will now properly expand
  to the daemon pid for commands like `ceph-mds` or `ceph-osd`. Previously only
  `ceph-fuse`/`rbd-nbd` expanded `$pid` with the actual daemon pid.

* The allowable options for some "radosgw-admin" commands have been changed.

  * "mdlog-list", "datalog-list", "sync-error-list" no longer accept
    start and end dates, but do accept a single optional start marker.
  * "mdlog-trim", "datalog-trim", "sync-error-trim" only accept a
    single marker giving the end of the trimmed range.
  * Similarly, the date ranges and marker ranges have been removed on
    the RESTful DATALog and MDLog list and trim operations.

* ceph-volume: The ``lvm batch`` subcommand received a major rewrite. This
  closes a number of bugs and improves usability in terms of size specification
  and calculation, as well as idempotency behaviour and the disk replacement
  process. Please refer to
  https://docs.ceph.com/en/latest/ceph-volume/lvm/batch/ for more detailed
  information.
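  For example (the device names are illustrative), the planned layout can be
  reviewed without making changes::

    ceph-volume lvm batch --report /dev/sdb /dev/sdc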

* Configuration variables for permitted scrub times have changed. The legal
  values for ``osd_scrub_begin_hour`` and ``osd_scrub_end_hour`` are ``0`` -
  ``23``. The use of 24 is now illegal. Specifying ``0`` for both values
  causes every hour to be allowed. The legal values for
  ``osd_scrub_begin_week_day`` and ``osd_scrub_end_week_day`` are ``0`` -
  ``6``. The use of ``7`` is now illegal. Specifying ``0`` for both values
  causes every day of the week to be allowed.
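  For example, to allow scrubbing at any hour on any day of the week::

    ceph config set osd osd_scrub_begin_hour 0
    ceph config set osd osd_scrub_end_hour 0
    ceph config set osd osd_scrub_begin_week_day 0
    ceph config set osd osd_scrub_end_week_day 0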

* Support for multiple file systems in a single Ceph cluster is now stable.
  New Ceph clusters enable support for multiple file systems by default.
  Existing clusters must still set the "enable_multiple" flag on the fs.
  See the CephFS documentation for more information.
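  On an existing cluster, for example::

    ceph fs flag set enable_multiple true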

* volume/nfs: The "ganesha-" prefix has been removed from the cluster id and the
  nfs-ganesha common config object to ensure a consistent namespace across
  different orchestrator backends. Delete any existing nfs-ganesha clusters prior
  to upgrading and redeploy new clusters after upgrading to Pacific.

* A new health check, DAEMON_OLD_VERSION, warns if different versions of
  Ceph are running on daemons. It generates a health error if multiple
  versions are detected. This condition must exist for over
  ``mon_warn_older_version_delay`` (set to 1 week by default) in order for the
  health condition to be triggered. This allows most upgrades to proceed
  without falsely seeing the warning. If the upgrade is paused for an extended
  time period, health mute can be used like this: "ceph health mute
  DAEMON_OLD_VERSION --sticky". In this case, after the upgrade has finished,
  use "ceph health unmute DAEMON_OLD_VERSION".

* MGR: The progress module can now be turned on/off, using these commands:
  ``ceph progress on`` and ``ceph progress off``.

* The ceph_volume_client.py library used for manipulating legacy "volumes" in
  CephFS is removed. All remaining users should use the "fs volume" interface
  exposed by the ceph-mgr:
  https://docs.ceph.com/en/latest/cephfs/fs-volumes/

* An AWS-compliant API: "GetTopicAttributes" was added to replace the existing
  "GetTopic" API. The new API should be used to fetch information about topics
  used for bucket notifications.

* librbd: The shared, read-only parent cache's config option
  ``immutable_object_cache_watermark`` has now been updated to properly reflect
  the upper cache utilization before space is reclaimed. The default
  ``immutable_object_cache_watermark`` is now ``0.9``. If the capacity reaches
  90%, the daemon will start deleting the cold cache.

* OSD: the option ``osd_fast_shutdown_notify_mon`` has been introduced to allow
  the OSD to notify the monitor that it is shutting down even if ``osd_fast_shutdown``
  is enabled. This helps with the monitor logs on larger clusters, which may
  otherwise accumulate many 'osd.X reported immediately failed by osd.Y' messages
  and confuse tools.

* rgw/kms/vault: the transit logic has been revamped to better use
  the transit engine in vault. To take advantage of this new
  functionality, configuration changes are required. See the current
  documentation (radosgw/vault) for more details.

* Scrubs are more aggressive in trying to schedule as many simultaneous PG
  scrubs as possible within the osd_max_scrubs limit. It is possible that
  increasing osd_scrub_sleep may be necessary to maintain client responsiveness.

* Version 2 of the cephx authentication protocol (``CEPHX_V2`` feature bit) is
  now required by default. It was introduced in 2018, adding replay attack
  protection for authorizers and making msgr v1 message signatures stronger
  (CVE-2018-1128 and CVE-2018-1129). Support is present in Jewel 10.2.11,
  Luminous 12.2.6, Mimic 13.2.1, Nautilus 14.2.0 and later; upstream kernels
  4.9.150, 4.14.86, 4.19 and later; various distribution kernels, in particular
  CentOS 7.6 and later. To enable older clients, set ``cephx_require_version``
  and ``cephx_service_require_version`` config options to 1.

>=15.0.0
--------

* MON: The cluster log now logs health detail every ``mon_health_to_clog_interval``,
  which has been changed from 1hr to 10min. Logging of health detail will be
  skipped if there is no change in health summary since last known.

* The ``ceph df`` command now lists the number of pgs in each pool.

* Monitors now have config option ``mon_allow_pool_size_one``, which is disabled
  by default. However, if enabled, users now have to pass the
  ``--yes-i-really-mean-it`` flag to ``osd pool set size 1`` if they are really
  sure about configuring a pool of size 1.
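  For example (the pool name is illustrative)::

    ceph config set global mon_allow_pool_size_one true
    ceph osd pool set mypool size 1 --yes-i-really-mean-it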

* librbd now inherits the stripe unit and count from its parent image upon creation.
  This can be overridden by specifying different stripe settings during clone creation.

* The balancer is now on by default in upmap mode. Since upmap mode requires
  ``require_min_compat_client`` luminous, new clusters will only support luminous
  and newer clients by default. Existing clusters can enable upmap support by running
  ``ceph osd set-require-min-compat-client luminous``. It is still possible to turn
  the balancer off using the ``ceph balancer off`` command. In earlier versions,
  the balancer was included in the ``always_on_modules`` list, but needed to be
  turned on explicitly using the ``ceph balancer on`` command.

* MGR: the "cloud" mode of the diskprediction module is not supported anymore
  and the ``ceph-mgr-diskprediction-cloud`` manager module has been removed. This
  is because the external cloud service run by ProphetStor is no longer accessible
  and there is no immediate replacement for it at this time. The "local" prediction
  mode will continue to be supported.

* Cephadm: There were a lot of small usability improvements and bug fixes:

  * Grafana when deployed by Cephadm now binds to all network interfaces.
  * ``cephadm check-host`` now prints all detected problems at once.
  * Cephadm now calls ``ceph dashboard set-grafana-api-ssl-verify false``
    when generating an SSL certificate for Grafana.
  * The Alertmanager is now correctly pointed to the Ceph Dashboard.
  * ``cephadm adopt`` now supports adopting an Alertmanager.
  * ``ceph orch ps`` now supports filtering by service name.
  * ``ceph orch host ls`` now marks hosts as offline, if they are not
    accessible.

* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS with
  a service id of mynfs, that will use the RADOS pool nfs-ganesha and namespace
  nfs-ns::

    ceph orch apply nfs mynfs nfs-ganesha nfs-ns

* Cephadm: ``ceph orch ls --export`` now returns all service specifications in
  yaml representation that is consumable by ``ceph orch apply``. In addition,
  the commands ``orch ps`` and ``orch ls`` now support ``--format yaml`` and
  ``--format json-pretty``.

* CephFS: Automatic static subtree partitioning policies may now be configured
  using the new distributed and random ephemeral pinning extended attributes on
  directories. See the documentation for more information:
  https://docs.ceph.com/docs/master/cephfs/multimds/
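
  For example (the directory path is illustrative), distributed ephemeral
  pinning can be enabled on a directory with::

    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/home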

* Cephadm: ``ceph orch apply osd`` supports a ``--preview`` flag that prints a preview of
  the OSD specification before deploying OSDs. This makes it possible to
  verify that the specification is correct, before applying it.

* RGW: The ``radosgw-admin`` sub-commands dealing with orphans --
  ``radosgw-admin orphans find``, ``radosgw-admin orphans finish``, and
  ``radosgw-admin orphans list-jobs`` -- have been deprecated. They have
  not been actively maintained and they store intermediate results on
  the cluster, which could fill a nearly-full cluster. They have been
  replaced by a tool, currently considered experimental,
  ``rgw-orphan-list``.

* RBD: The name of the rbd pool object that is used to store
  rbd trash purge schedule is changed from "rbd_trash_trash_purge_schedule"
  to "rbd_trash_purge_schedule". Users that have already started using
  ``rbd trash purge schedule`` functionality and have per pool or namespace
  schedules configured should copy the "rbd_trash_trash_purge_schedule"
  object to "rbd_trash_purge_schedule" before the upgrade and remove
  "rbd_trash_trash_purge_schedule" using the following commands in every RBD
  pool and namespace where a trash purge schedule was previously
  configured::

    rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
    rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule

  or use any other convenient way to restore the schedule after the
  upgrade.

* librbd: The shared, read-only parent cache has been moved to a separate librbd
  plugin. If the parent cache was previously in-use, you must also instruct
  librbd to load the plugin by adding the following to your configuration::

    rbd_plugins = parent_cache

* Monitors now have a config option ``mon_osd_warn_num_repaired``, 10 by default.
  If any OSD has repaired more than this many I/O errors in stored data, an
  ``OSD_TOO_MANY_REPAIRS`` health warning is generated.

* Introduce commands that manipulate required client features of a file system::

    ceph fs required_client_features <fs name> add <feature>
    ceph fs required_client_features <fs name> rm <feature>
    ceph fs feature ls

* OSD: A new configuration option ``osd_compact_on_start`` has been added which triggers
  an OSD compaction on start. Setting this option to ``true`` and restarting an OSD
  will result in an offline compaction of the OSD prior to booting.
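  For example::

    ceph config set osd osd_compact_on_start true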

* OSD: the option named ``bdev_nvme_retry_count`` has been removed, because
  SPDK v20.07 provides no easy access to bdev_nvme options and the option was
  hardly used.

* Now, when noscrub and/or nodeep-scrub flags are set globally or per pool,
  scheduled scrubs of the type disabled will be aborted. All user initiated
  scrubs are NOT interrupted.

* Alpine build related script, documentation and test have been removed since
  the most updated APKBUILD script of Ceph is already included by Alpine Linux's
  aports repository.

* fs: Names of new FSs, volumes, subvolumes and subvolume groups can only
  contain alphanumeric and ``-``, ``_`` and ``.`` characters. Some commands
  or CephX credentials may not work with old FSs with non-conformant names.

* It is now possible to specify the initial monitor to contact for Ceph tools
  and daemons using the ``mon_host_override`` config option or
  ``--mon-host-override <ip>`` command-line switch. This generally should only
  be used for debugging and only affects initial communication with Ceph's
  monitor cluster.

* `blacklist` has been replaced with `blocklist` throughout. The following commands have changed:

  - ``ceph osd blacklist ...`` are now ``ceph osd blocklist ...``
  - ``ceph <tell|daemon> osd.<NNN> dump_blacklist`` is now ``ceph <tell|daemon> osd.<NNN> dump_blocklist``

* The following config options have changed:

  - ``mon osd blacklist default expire`` is now ``mon osd blocklist default expire``
  - ``mon mds blacklist interval`` is now ``mon mds blocklist interval``
  - ``mon mgr blacklist interval`` is now ``mon mgr blocklist interval``
  - ``rbd blacklist on break lock`` is now ``rbd blocklist on break lock``
  - ``rbd blacklist expire seconds`` is now ``rbd blocklist expire seconds``
  - ``mds session blacklist on timeout`` is now ``mds session blocklist on timeout``
  - ``mds session blacklist on evict`` is now ``mds session blocklist on evict``

* CephFS: Compatibility code for the old on-disk format of snapshots has been
  removed. The current on-disk format of snapshots was introduced by the Mimic
  release. If there are any snapshots created by a Ceph release older than
  Mimic, either delete them all or scrub the whole filesystem before upgrading:

    ceph daemon <mds of rank 0> scrub_path / force recursive repair
    ceph daemon <mds of rank 0> scrub_path '~mdsdir' force recursive repair

* CephFS: Scrub is supported in a multiple active MDS setup. MDS rank 0 handles
  scrub commands and forwards scrubs to other MDSs if necessary.

* The following librados API calls have changed:

  - ``rados_blacklist_add`` is now ``rados_blocklist_add``; the former will issue a deprecation warning and be removed in a future release.
  - ``rados.blacklist_add`` is now ``rados.blocklist_add`` in the C++ API.

* The JSON output for the following commands now shows ``blocklist`` instead of ``blacklist``:

  - ``ceph osd dump``
  - ``ceph <tell|daemon> osd.<N> dump_blocklist``

* caps: MON and MDS caps can now be used to restrict a client's ability to view
  and operate on specific Ceph file systems. The FS can be specified using
  ``fsname`` in caps. This also affects the subcommand ``fs authorize``; the caps
  produced by it will be specific to the FS name passed in its arguments.

* fs: root_squash flag can be set in MDS caps. It disallows file system
  operations that need write access for clients with uid=0 or gid=0. This
  feature should prevent accidents such as an inadvertent `sudo rm -rf /<path>`.

* fs: "fs authorize" now sets the MON cap to "allow <perm> fsname=<fsname>"
  instead of setting it to "allow r" all the time.
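  For example (the file system and client names are illustrative)::

    ceph fs authorize myfs client.foo / rw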

* ``ceph pg #.# list_unfound`` output has been enhanced to provide
  might_have_unfound information which indicates which OSDs may
  contain the unfound objects.

* The ``ceph orch apply rgw`` syntax and behavior have changed. RGW
  services can now be arbitrarily named (the name is no longer forced to be
  `realm.zone`). The ``--rgw-realm=...`` and ``--rgw-zone=...``
  arguments are now optional, which means that if they are omitted, a
  vanilla single-cluster RGW will be deployed. When the realm and
  zone are provided, the user is now responsible for setting up the
  multisite configuration beforehand; cephadm no longer attempts to
  create missing realms or zones.
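  For example (the service name, realm, zone, and placement are illustrative)::

    ceph orch apply rgw myrgw --placement=2
    ceph orch apply rgw myrgw --rgw-realm=myrealm --rgw-zone=myzone --placement=2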

* The ``min_size`` and ``max_size`` CRUSH rule properties have been removed. Older
  CRUSH maps will still compile but Ceph will issue a warning that these fields are
  ignored.

* The cephadm NFS support has been simplified to no longer allow the
  pool and namespace where configuration is stored to be customized.
  As a result, the ``ceph orch apply nfs`` command no longer has
  ``--pool`` or ``--namespace`` arguments.

  Existing cephadm NFS deployments (from an earlier version of Pacific or
  from Octopus) will be automatically migrated when the cluster is
  upgraded. Note that the NFS ganesha daemons will be redeployed and
  it is possible that their IPs will change.

* RGW now requires a secure connection to the monitor by default
  (``auth_client_required=cephx`` and ``ms_mon_client_mode=secure``).
  If you have cephx authentication disabled on your cluster, you may
  need to adjust these settings for RGW.