>=17.2.1
--------

* The "BlueStore zero block detection" feature (first introduced to Quincy in
  https://github.com/ceph/ceph/pull/43337) has been turned off by default with a
  new global configuration called `bluestore_zero_block_detection`. This feature,
  intended for large-scale synthetic testing, does not interact well with some RBD
  and CephFS features. Any side effects experienced in previous Quincy versions
  will no longer occur, provided that the configuration remains set to false.
  Relevant tracker: https://tracker.ceph.com/issues/55521

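  If the previous behavior is needed, the current value can be checked and the
  feature re-enabled with the standard config commands, for example (a sketch;
  only the option name above is taken from this note)::

    ceph config get osd bluestore_zero_block_detection
    ceph config set global bluestore_zero_block_detection true
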
* telemetry: Added new Rook metrics to the 'basic' channel to report Rook's
  version, Kubernetes version, node metrics, etc.
  See a sample report with `ceph telemetry preview`.
  Opt-in with `ceph telemetry on`.

  For more details, see:

  https://docs.ceph.com/en/latest/mgr/telemetry/

>=17.0.0
--------

* Filestore has been deprecated in Quincy, considering that BlueStore has been
  the default objectstore for quite some time.

* A critical bug in the OMAP format upgrade has been fixed. It could cause data
  corruption (improperly formatted OMAP keys) after a pre-Pacific cluster upgrade
  if the bluestore-quick-fix-on-mount parameter is set to true or
  ceph-bluestore-tool's quick-fix/repair commands are invoked.
  Relevant tracker: https://tracker.ceph.com/issues/53062

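  For reference, the repair command mentioned above is typically invoked against
  a stopped OSD, roughly as follows (a sketch; the OSD id and data path are
  placeholders)::

    systemctl stop ceph-osd@0
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0
    systemctl start ceph-osd@0
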
* The `ceph-mgr-modules-core` debian package no longer recommends `ceph-mgr-rook`,
  because the latter depends on `python3-numpy`, which cannot be imported multiple
  times in different Python sub-interpreters if the `python3-numpy` version is
  older than 1.19. Since `apt-get` installs `Recommends` packages by default,
  `ceph-mgr-rook` was always installed along with the `ceph-mgr` debian package
  as an indirect dependency. If your workflow depends on this behavior, install
  `ceph-mgr-rook` separately.

* The "kvs" Ceph object class is no longer packaged. It offered a distributed
  flat b-tree key-value store implemented on top of librados objects' omap.
  Because there are no existing internal users of this object class, it has been
  dropped from packaging.

* A new library is available, libcephsqlite. It provides a SQLite Virtual File
  System (VFS) on top of RADOS. The database and journals are striped over
  RADOS across multiple objects for virtually unlimited scaling and throughput
  limited only by the SQLite client. Applications using SQLite may switch to
  the Ceph VFS with minimal changes, usually just by specifying the alternate
  VFS. We expect the library to be most impactful and useful for applications
  that were storing state in RADOS omap, especially without striping, which
  limits scalability.

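  For illustration, opening a database through the new VFS from the sqlite3
  shell looks roughly like this (a sketch; the pool, namespace and database
  names are placeholders, and the exact URI format is described in the
  libcephsqlite documentation)::

    sqlite3 -cmd '.load libcephsqlite.so'
    sqlite> .open file:///mypool:mynamespace/mydb.db?vfs=ceph
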
* The ``device_health_metrics`` pool has been renamed ``.mgr``. It is now
  used as a common store for all ``ceph-mgr`` modules.

* fs: A file system can be created with a specific ID ("fscid"). This is useful
  in certain recovery scenarios, e.g., monitor database lost and rebuilt, and
  the restored file system is expected to have the same ID as before.

* fs: A file system can be renamed using the `fs rename` command. Any cephx
  credentials authorized for the old file system name will need to be
  reauthorized to the new file system name. Since the operations of the clients
  using these re-authorized IDs may be disrupted, this command requires the
  "--yes-i-really-mean-it" flag. Also, mirroring is expected to be disabled
  on the file system.

* fs: A FS volume can be renamed using the `fs volume rename` command. Any cephx
  credentials authorized for the old volume name will need to be reauthorized to
  the new volume name. Since the operations of the clients using these
  re-authorized IDs may be disrupted, this command requires the
  "--yes-i-really-mean-it" flag. Also, mirroring is expected to be disabled on
  the file system.

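  For example, the rename commands described in the two notes above take the
  forms (a sketch; the file system and volume names are placeholders)::

    ceph fs rename myfs myfs_new --yes-i-really-mean-it
    ceph fs volume rename myvol myvol_new --yes-i-really-mean-it
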
* MDS upgrades no longer require stopping all standby MDS daemons before
  upgrading the sole active MDS for a file system.

* RGW: RGW now supports rate limiting by user and/or by bucket. This feature
  limits the total operations and/or bytes per minute that a user or bucket can
  perform, and allows the admin to limit READ and/or WRITE operations
  independently. The rate limiting configuration can also be applied to all
  users and all buckets via a global configuration.

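  For illustration, per-user limits are managed with the ``radosgw-admin
  ratelimit`` subcommands, roughly as follows (a sketch; the uid and the limit
  values are placeholders, and the full option set is in the RGW documentation)::

    radosgw-admin ratelimit set --ratelimit-scope=user --uid=testuser --max-read-ops=1024 --max-write-ops=256
    radosgw-admin ratelimit enable --ratelimit-scope=user --uid=testuser
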
* RGW: `radosgw-admin realm delete` has been renamed to `radosgw-admin realm rm`.
  This is consistent with the help message.

* OSD: Ceph now uses mclock_scheduler as the default osd_op_queue for bluestore
  OSDs to provide QoS. The 'mclock_scheduler' is not supported for filestore
  OSDs; therefore, the default 'osd_op_queue' is set to 'wpq' for filestore OSDs
  and is enforced even if the user attempts to change it. For more details on
  configuring mclock, see:

  https://docs.ceph.com/en/latest/rados/configuration/mclock-config-ref/

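  For example, the active mclock profile can be changed with the config
  interface (a sketch; the profile name is one of the values listed in the
  documentation above)::

    ceph config set osd osd_mclock_profile high_client_ops
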
* CephFS: Failure to replay the journal by a standby-replay daemon will now
  cause the rank to be marked damaged.

* RGW: S3 bucket notification events now contain an `eTag` key instead of `etag`,
  and eventName values no longer carry the `s3:` prefix, fixing deviations from
  the message format observed on AWS.

* RGW: It is now possible to specify ssl options and ciphers for the beast
  frontend. The default ssl options setting is
  "no_sslv2:no_sslv3:no_tlsv1:no_tlsv1_1". If you want to return to the old
  behavior, add 'ssl_options=' (empty) to the ``rgw frontends`` configuration.

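  For example, in ceph.conf (a sketch; the section name, certificate and key
  paths are placeholders)::

    [client.rgw.myhost]
    rgw frontends = beast ssl_port=443 ssl_certificate=/etc/ceph/rgw.crt ssl_private_key=/etc/ceph/rgw.key ssl_options=no_sslv2:no_sslv3:no_tlsv1:no_tlsv1_1
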
* RGW: The behavior for Multipart Upload was modified so that only a
  CompleteMultipartUpload notification is sent at the end of the multipart
  upload. The POST notification at the beginning of the upload and the PUT
  notifications that were sent for each part are no longer sent.

* MGR: The pg_autoscaler has a new 'scale-down' profile which provides more
  performance from the start for new pools. However, the module will continue
  to use its old behavior by default, now called the 'scale-up' profile.
  For more details, see:

  https://docs.ceph.com/en/latest/rados/operations/placement-groups/

* MGR: The pg_autoscaler can now be turned `on` and `off` globally
  with the `noautoscale` flag. By default this flag is unset and
  the default pg_autoscale mode remains the same.
  For more details, see:

  https://docs.ceph.com/en/latest/rados/operations/placement-groups/

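  For example, the flag interface described above is used as follows (a sketch)::

    ceph osd pool set noautoscale
    ceph osd pool unset noautoscale
    ceph osd pool get noautoscale
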
* The ``ceph pg dump`` command now prints three additional columns:
  `LAST_SCRUB_DURATION` shows the duration (in seconds) of the last completed scrub;
  `SCRUB_SCHEDULING` conveys whether a PG is scheduled to be scrubbed at a specified
  time, queued for scrubbing, or being scrubbed;
  `OBJECTS_SCRUBBED` shows the number of objects scrubbed in a PG after scrub begins.

* A health warning will now be reported if the ``require-osd-release`` flag is not
  set to the appropriate release after a cluster upgrade.

* LevelDB support has been removed. ``WITH_LEVELDB`` is no longer a supported
  build option.

* MON/MGR: Pools can now be created with the `--bulk` flag. Any pools created
  with `--bulk` will use a profile of the `pg_autoscaler` that provides more
  performance from the start. However, any pools created without the `--bulk`
  flag will continue to use the old behavior by default. For more details, see:

  https://docs.ceph.com/en/latest/rados/operations/placement-groups/
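
  For example (a sketch; the pool name is a placeholder)::

    ceph osd pool create mypool --bulk
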
* Cephadm: ``osd_memory_target_autotune`` will be enabled by default, which will
  set ``mgr/cephadm/autotune_memory_target_ratio`` to ``0.7`` of total RAM. This
  is unsuitable for hyperconverged infrastructures. For hyperconverged Ceph,
  please refer to the documentation or set
  ``mgr/cephadm/autotune_memory_target_ratio`` to ``0.2``.

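  For example, the ratio can be lowered for hyperconverged deployments with
  (a sketch of the config command for the option named above)::

    ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2
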
* telemetry: Improved the opt-in flow so that users can keep sharing the same
  data, even when new data collections are available. A new 'perf' channel
  that collects various performance metrics is now available to opt in to with:
  `ceph telemetry on`
  `ceph telemetry enable channel perf`
  See a sample report with `ceph telemetry preview`.
  For more details, see:

  https://docs.ceph.com/en/latest/mgr/telemetry/

* MGR: The progress module disables the pg recovery event by default,
  since the event is expensive and has interrupted other services when
  OSDs are being marked in/out of the cluster. However,
  the user may still enable this event at any time. For more details, see:

  https://docs.ceph.com/en/latest/mgr/progress/

>=16.0.0
--------
* mgr/nfs: The ``nfs`` module has been moved out of the volumes plugin. The
  ``nfs`` mgr module must be enabled prior to using the ``ceph nfs`` commands.

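  For example, the module is enabled with::

    ceph mgr module enable nfs
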
* volumes/nfs: The ``cephfs`` cluster type has been removed from the
  ``nfs cluster create`` subcommand. Clusters deployed by cephadm can
  support an NFS export of both ``rgw`` and ``cephfs`` from a single
  NFS cluster instance.

* The ``nfs cluster update`` command has been removed. You can modify
  the placement of an existing NFS service (and/or its associated
  ingress service) using ``orch ls --export`` and ``orch apply -i
  ...``.

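  For example, the placement can be edited and re-applied roughly as follows
  (a sketch; the exported file name is a placeholder)::

    ceph orch ls nfs --export > nfs.yaml
    # edit the placement section of nfs.yaml, then:
    ceph orch apply -i nfs.yaml
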
* The ``orch apply nfs`` command no longer requires a pool or
  namespace argument. We strongly encourage users to use the defaults
  so that the ``nfs cluster ls`` and related commands will work
  properly.

* The ``nfs cluster delete`` and ``nfs export delete`` commands are
  deprecated and will be removed in a future release. Please use
  ``nfs cluster rm`` and ``nfs export rm`` instead.

* The ``nfs export create`` CLI arguments have changed, with the
  *fsname* or *bucket-name* argument position moving to the right of
  the *cluster-id* and *pseudo-path*. Consider transitioning to
  using named arguments instead of positional arguments (e.g., ``ceph
  nfs export create cephfs --cluster-id mycluster --pseudo-path /foo
  --fsname myfs`` instead of ``ceph nfs export create cephfs
  mycluster /foo myfs``) to ensure correct behavior with any
  version.

* mgr/pg_autoscaler: The autoscaler will now start out by scaling each
  pool to have a full complement of PGs from the start, and will only
  decrease it when other pools need more PGs due to increased usage.
  This improves the out-of-the-box performance of Ceph by allowing more PGs
  to be created for a given pool.

* CephFS: Disabling allow_standby_replay on a file system will also stop all
  standby-replay daemons for that file system.

* New bluestore_rocksdb_options_annex config parameter. Complements
  bluestore_rocksdb_options and allows setting rocksdb options without repeating
  the existing defaults.
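  For example, in ceph.conf (a sketch; the rocksdb option shown is only an
  illustration)::

    [osd]
    bluestore_rocksdb_options_annex = compaction_readahead_size=2097152
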
* The MDS in Pacific makes backwards-incompatible changes to the on-RADOS
  metadata structures, which prevent a downgrade to older releases
  (Octopus and older).

* $pid expansion in config paths like `admin_socket` will now properly expand
  to the daemon pid for commands like `ceph-mds` or `ceph-osd`. Previously only
  `ceph-fuse`/`rbd-nbd` expanded `$pid` with the actual daemon pid.

* The allowable options for some "radosgw-admin" commands have been changed.

  * "mdlog-list", "datalog-list", "sync-error-list" no longer accept
    start and end dates, but do accept a single optional start marker.
  * "mdlog-trim", "datalog-trim", "sync-error-trim" only accept a
    single marker giving the end of the trimmed range.
  * Similarly, the date ranges and marker ranges have been removed from
    the RESTful DATALog and MDLog list and trim operations.

* ceph-volume: The ``lvm batch`` subcommand received a major rewrite. This
  closed a number of bugs and improved usability in terms of size specification
  and calculation, as well as idempotency behaviour and the disk replacement
  process. Please refer to
  https://docs.ceph.com/en/latest/ceph-volume/lvm/batch/ for more detailed
  information.

* Configuration variables for permitted scrub times have changed. The legal
  values for ``osd_scrub_begin_hour`` and ``osd_scrub_end_hour`` are ``0`` -
  ``23``. The use of 24 is now illegal. Specifying ``0`` for both values
  causes every hour to be allowed. The legal values for
  ``osd_scrub_begin_week_day`` and ``osd_scrub_end_week_day`` are ``0`` -
  ``6``. The use of ``7`` is now illegal. Specifying ``0`` for both values
  causes every day of the week to be allowed.

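  For example, to restrict scrubbing to a nightly window (a sketch; the hours
  are placeholders)::

    ceph config set osd osd_scrub_begin_hour 22
    ceph config set osd osd_scrub_end_hour 6
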
* Support for multiple file systems in a single Ceph cluster is now stable.
  New Ceph clusters enable support for multiple file systems by default.
  Existing clusters must still set the "enable_multiple" flag on the fs.
  See the CephFS documentation for more information.

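  For example, on an existing cluster (a sketch; some releases may additionally
  require the ``--yes-i-really-mean-it`` flag)::

    ceph fs flag set enable_multiple true
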
* volume/nfs: The "ganesha-" prefix from the cluster id and nfs-ganesha common
  config object has been removed to ensure a consistent namespace across different
  orchestrator backends. Delete any existing nfs-ganesha clusters prior
  to upgrading and redeploy new clusters after upgrading to Pacific.

* A new health check, DAEMON_OLD_VERSION, warns if different versions of
  Ceph are running on daemons. It generates a health error if multiple
  versions are detected. This condition must exist for more than
  ``mon_warn_older_version_delay`` (set to 1 week by default) in order for the
  health condition to be triggered. This allows most upgrades to proceed
  without falsely seeing the warning. If an upgrade is paused for an extended
  time period, health mute can be used: "ceph health mute
  DAEMON_OLD_VERSION --sticky". In this case, after the upgrade has finished, use
  "ceph health unmute DAEMON_OLD_VERSION".

* MGR: The progress module can now be turned on/off using these commands:
  ``ceph progress on`` and ``ceph progress off``.

* The ceph_volume_client.py library used for manipulating legacy "volumes" in
  CephFS has been removed. All remaining users should use the "fs volume" interface
  exposed by the ceph-mgr:
  https://docs.ceph.com/en/latest/cephfs/fs-volumes/

* An AWS-compliant API, "GetTopicAttributes", was added to replace the existing
  "GetTopic" API. The new API should be used to fetch information about topics
  used for bucket notifications.

* librbd: The shared, read-only parent cache's config option
  ``immutable_object_cache_watermark`` has been updated to properly reflect
  the upper cache utilization before space is reclaimed. The default
  ``immutable_object_cache_watermark`` is now ``0.9``. If the capacity reaches
  90%, the daemon will delete cold cache entries.

* OSD: the option ``osd_fast_shutdown_notify_mon`` has been introduced to allow
  the OSD to notify the monitor that it is shutting down even if ``osd_fast_shutdown``
  is enabled. This helps with the monitor logs on larger clusters, which may otherwise
  get many 'osd.X reported immediately failed by osd.Y' messages and confuse tools.

* rgw/kms/vault: the transit logic has been revamped to better use
  the transit engine in vault. To take advantage of this new
  functionality, configuration changes are required. See the current
  documentation (radosgw/vault) for more details.

* Scrubs are more aggressive in trying to find as many simultaneous PGs as
  possible within the osd_max_scrubs limitation. It is possible that increasing
  osd_scrub_sleep may be necessary to maintain client responsiveness.

* Version 2 of the cephx authentication protocol (``CEPHX_V2`` feature bit) is
  now required by default. It was introduced in 2018, adding replay attack
  protection for authorizers and making msgr v1 message signatures stronger
  (CVE-2018-1128 and CVE-2018-1129). Support is present in Jewel 10.2.11,
  Luminous 12.2.6, Mimic 13.2.1, Nautilus 14.2.0 and later; upstream kernels
  4.9.150, 4.14.86, 4.19 and later; various distribution kernels, in particular
  CentOS 7.6 and later. To enable older clients, set ``cephx_require_version``
  and ``cephx_service_require_version`` config options to 1.

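  For example, older clients can be allowed again with (a sketch; only the
  option names above are taken from this note)::

    ceph config set global cephx_require_version 1
    ceph config set global cephx_service_require_version 1
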
>=15.0.0
--------

* MON: The cluster log now logs health detail every ``mon_health_to_clog_interval``,
  which has been changed from 1hr to 10min. Logging of health detail will be
  skipped if there is no change in the health summary since it was last logged.

* The ``ceph df`` command now lists the number of PGs in each pool.

* Monitors now have a config option ``mon_allow_pool_size_one``, which is disabled
  by default. However, if enabled, users now have to pass the
  ``--yes-i-really-mean-it`` flag to ``osd pool set size 1``, if they are really
  sure of configuring pool size 1.

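  For example (a sketch; the pool name is a placeholder)::

    ceph config set mon mon_allow_pool_size_one true
    ceph osd pool set mypool size 1 --yes-i-really-mean-it
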
* librbd now inherits the stripe unit and count from its parent image upon creation.
  This can be overridden by specifying different stripe settings during clone creation.

* The balancer is now on by default in upmap mode. Since upmap mode requires
  ``require_min_compat_client`` luminous, new clusters will only support luminous
  and newer clients by default. Existing clusters can enable upmap support by running
  ``ceph osd set-require-min-compat-client luminous``. It is still possible to turn
  the balancer off using the ``ceph balancer off`` command. In earlier versions,
  the balancer was included in the ``always_on_modules`` list, but needed to be
  turned on explicitly using the ``ceph balancer on`` command.

* MGR: the "cloud" mode of the diskprediction module is not supported anymore
  and the ``ceph-mgr-diskprediction-cloud`` manager module has been removed. This
  is because the external cloud service run by ProphetStor is no longer accessible
  and there is no immediate replacement for it at this time. The "local" prediction
  mode will continue to be supported.

* Cephadm: There were a lot of small usability improvements and bug fixes:

  * Grafana, when deployed by Cephadm, now binds to all network interfaces.
  * ``cephadm check-host`` now prints all detected problems at once.
  * Cephadm now calls ``ceph dashboard set-grafana-api-ssl-verify false``
    when generating an SSL certificate for Grafana.
  * The Alertmanager is now correctly pointed to the Ceph Dashboard.
  * ``cephadm adopt`` now supports adopting an Alertmanager.
  * ``ceph orch ps`` now supports filtering by service name.
  * ``ceph orch host ls`` now marks hosts as offline if they are not
    accessible.

* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS with
  a service id of mynfs, that will use the RADOS pool nfs-ganesha and namespace
  nfs-ns::

    ceph orch apply nfs mynfs nfs-ganesha nfs-ns

* Cephadm: ``ceph orch ls --export`` now returns all service specifications in
  yaml representation that is consumable by ``ceph orch apply``. In addition,
  the commands ``orch ps`` and ``orch ls`` now support ``--format yaml`` and
  ``--format json-pretty``.

* CephFS: Automatic static subtree partitioning policies may now be configured
  using the new distributed and random ephemeral pinning extended attributes on
  directories. See the documentation for more information:
  https://docs.ceph.com/docs/master/cephfs/multimds/

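  For example, distributed ephemeral pinning can be enabled on a directory via
  an extended attribute (a sketch; the mount path is a placeholder)::

    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/mydir
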
* Cephadm: ``ceph orch apply osd`` supports a ``--preview`` flag that prints a preview of
  the OSD specification before deploying OSDs. This makes it possible to
  verify that the specification is correct before applying it.

* RGW: The ``radosgw-admin`` sub-commands dealing with orphans --
  ``radosgw-admin orphans find``, ``radosgw-admin orphans finish``, and
  ``radosgw-admin orphans list-jobs`` -- have been deprecated. They have
  not been actively maintained and they store intermediate results on
  the cluster, which could fill a nearly-full cluster. They have been
  replaced by a tool, currently considered experimental,
  ``rgw-orphan-list``.

* RBD: The name of the rbd pool object that is used to store the
  rbd trash purge schedule has changed from "rbd_trash_trash_purge_schedule"
  to "rbd_trash_purge_schedule". Users that have already started using the
  ``rbd trash purge schedule`` functionality and have per-pool or per-namespace
  schedules configured should copy the "rbd_trash_trash_purge_schedule"
  object to "rbd_trash_purge_schedule" before the upgrade and remove
  "rbd_trash_trash_purge_schedule" using the following commands in every RBD
  pool and namespace where a trash purge schedule was previously
  configured::

    rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
    rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule

  or use any other convenient way to restore the schedule after the
  upgrade.

* librbd: The shared, read-only parent cache has been moved to a separate librbd
  plugin. If the parent cache was previously in use, you must also instruct
  librbd to load the plugin by adding the following to your configuration::

    rbd_plugins = parent_cache

* Monitors now have a config option ``mon_osd_warn_num_repaired``, 10 by default.
  If any OSD has repaired more than this many I/O errors in stored data, an
  ``OSD_TOO_MANY_REPAIRS`` health warning is generated.

* Commands have been introduced that manipulate the required client features of
  a file system::

    ceph fs required_client_features <fs name> add <feature>
    ceph fs required_client_features <fs name> rm <feature>
    ceph fs feature ls

* OSD: A new configuration option ``osd_compact_on_start`` has been added which triggers
  an OSD compaction on start. Setting this option to ``true`` and restarting an OSD
  will result in an offline compaction of the OSD prior to booting.

* OSD: the option named ``bdev_nvme_retry_count`` has been removed, because
  SPDK v20.07 provides no easy access to bdev_nvme options and the option
  was hardly used.

* Now, when the noscrub and/or nodeep-scrub flags are set globally or per pool,
  scheduled scrubs of the disabled type will be aborted. All user-initiated
  scrubs are NOT interrupted.

* The Alpine build-related script, documentation and tests have been removed, since
  the most up-to-date APKBUILD script for Ceph is already included in Alpine Linux's
  aports repository.

* fs: Names of new FSs, volumes, subvolumes and subvolume groups can only
  contain alphanumeric and ``-``, ``_`` and ``.`` characters. Some commands
  or CephX credentials may not work with old FSs with non-conformant names.

* It is now possible to specify the initial monitor to contact for Ceph tools
  and daemons using the ``mon_host_override`` config option or
  ``--mon-host-override <ip>`` command-line switch. This generally should only
  be used for debugging and only affects initial communication with Ceph's
  monitor cluster.

* `blacklist` has been replaced with `blocklist` throughout. The following commands have changed:

  - ``ceph osd blacklist ...`` are now ``ceph osd blocklist ...``
  - ``ceph <tell|daemon> osd.<NNN> dump_blacklist`` is now ``ceph <tell|daemon> osd.<NNN> dump_blocklist``

* The following config options have changed:

  - ``mon osd blacklist default expire`` is now ``mon osd blocklist default expire``
  - ``mon mds blacklist interval`` is now ``mon mds blocklist interval``
  - ``mon mgr blacklist interval`` is now ``mon mgr blocklist interval``
  - ``rbd blacklist on break lock`` is now ``rbd blocklist on break lock``
  - ``rbd blacklist expire seconds`` is now ``rbd blocklist expire seconds``
  - ``mds session blacklist on timeout`` is now ``mds session blocklist on timeout``
  - ``mds session blacklist on evict`` is now ``mds session blocklist on evict``

* CephFS: Compatibility code for the old on-disk snapshot format has been removed.
  The current on-disk snapshot format was introduced by the Mimic release. If there
  are any snapshots created by a Ceph release older than Mimic, either delete them
  all or scrub the whole filesystem before upgrading::

    ceph daemon <mds of rank 0> scrub_path / force recursive repair
    ceph daemon <mds of rank 0> scrub_path '~mdsdir' force recursive repair

* CephFS: Scrub is supported in multiple active MDS setups. MDS rank 0 handles
  scrub commands and forwards scrubs to other MDS daemons if necessary.

* The following librados API calls have changed:

  - ``rados_blacklist_add`` is now ``rados_blocklist_add``; the former will issue a deprecation warning and be removed in a future release.
  - ``rados.blacklist_add`` is now ``rados.blocklist_add`` in the C++ API.

* The JSON output for the following commands now shows ``blocklist`` instead of ``blacklist``:

  - ``ceph osd dump``
  - ``ceph <tell|daemon> osd.<N> dump_blocklist``

* caps: MON and MDS caps can now be used to restrict a client's ability to view
  and operate on specific Ceph file systems. The FS can be specified using
  ``fsname`` in caps. This also affects the ``fs authorize`` subcommand: the caps
  it produces will be specific to the FS name passed in its arguments.

* fs: The root_squash flag can be set in MDS caps. It disallows file system
  operations that need write access for clients with uid=0 or gid=0. This
  feature should prevent accidents such as an inadvertent `sudo rm -rf /<path>`.

* fs: "fs authorize" now sets the MON cap to "allow <perm> fsname=<fsname>"
  instead of setting it to "allow r" all the time.

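  For illustration, authorizing a client for a single file system now produces
  FS-specific caps, roughly as follows (a sketch; the file system and client
  names are placeholders, and the exact cap strings may differ)::

    ceph fs authorize myfs client.foo / rw
    # resulting caps include, for example:
    #   caps mds = "allow rw fsname=myfs"
    #   caps mon = "allow r fsname=myfs"
    #   caps osd = "allow rw tag cephfs data=myfs"
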
* ``ceph pg #.# list_unfound`` output has been enhanced to provide
  might_have_unfound information which indicates which OSDs may
  contain the unfound objects.

* The ``ceph orch apply rgw`` syntax and behavior have changed. RGW
  services can now be arbitrarily named (the name is no longer forced to be
  `realm.zone`). The ``--rgw-realm=...`` and ``--rgw-zone=...``
  arguments are now optional, which means that if they are omitted, a
  vanilla single-cluster RGW will be deployed. When the realm and
  zone are provided, the user is now responsible for setting up the
  multisite configuration beforehand--cephadm no longer attempts to
  create missing realms or zones.

* The ``min_size`` and ``max_size`` CRUSH rule properties have been removed. Older
  CRUSH maps will still compile but Ceph will issue a warning that these fields are
  ignored.
* The cephadm NFS support has been simplified to no longer allow the
  pool and namespace where configuration is stored to be customized.
  As a result, the ``ceph orch apply nfs`` command no longer has
  ``--pool`` or ``--namespace`` arguments.

  Existing cephadm NFS deployments (from an earlier version of Pacific or
  from Octopus) will be automatically migrated when the cluster is
  upgraded. Note that the NFS ganesha daemons will be redeployed and
  it is possible that their IPs will change.

* RGW now requires a secure connection to the monitor by default
  (``auth_client_required=cephx`` and ``ms_mon_client_mode=secure``).
  If you have cephx authentication disabled on your cluster, you may
  need to adjust these settings for RGW.