>=17.0.0
--------

* Filestore has been deprecated in Quincy, considering that BlueStore has been
  the default objectstore for quite some time.

* A critical bug in the OMAP format upgrade has been fixed. It could cause data
  corruption (improperly formatted OMAP keys) after a pre-Pacific cluster
  upgrade if the bluestore-quick-fix-on-mount parameter is set to true or
  ceph-bluestore-tool's quick-fix/repair commands are invoked.
  Relevant tracker: https://tracker.ceph.com/issues/53062

* The `ceph-mgr-modules-core` debian package no longer recommends `ceph-mgr-rook`,
  because the latter depends on `python3-numpy`, which cannot be imported in
  multiple Python sub-interpreters if the version of `python3-numpy` is older
  than 1.19. Since `apt-get` installs `Recommends` packages by default,
  `ceph-mgr-rook` was always installed along with the `ceph-mgr` debian package
  as an indirect dependency. If your workflow depends on this behavior, you
  might want to install `ceph-mgr-rook` separately.

20* the "kvs" Ceph object class is not packaged anymore. "kvs" Ceph object class
21 offers a distributed flat b-tree key-value store implemented on top of librados
22 objects omap. Because we don't have existing internal users of this object
23 class, it is not packaged anymore.
24
f67539c2
TL
* A new library, libcephsqlite, is available. It provides a SQLite Virtual File
  System (VFS) on top of RADOS. The database and journals are striped over
  RADOS across multiple objects, for virtually unlimited scaling and throughput
  limited only by the SQLite client. Applications using SQLite can switch to
  the Ceph VFS with minimal changes, usually just by specifying the alternate
  VFS. We expect the library to be most useful for applications that were
  storing state in RADOS omap, especially without striping, which limits
  scalability.
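
  As an illustration, a minimal sketch of opening a database through the Ceph
  VFS with the sqlite3 CLI (the pool name ``mypool`` is an assumption; the pool
  must exist and the client must have a usable ceph.conf and keyring)::

    # load the VFS plugin, then open a database striped across RADOS objects
    sqlite3 -cmd '.load libcephsqlite.so' -cmd '.open file:///mypool:/mydb.db?vfs=ceph'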

* The ``device_health_metrics`` pool has been renamed ``.mgr``. It is now
  used as a common store for all ``ceph-mgr`` modules.

* fs: A file system can be created with a specific ID ("fscid"). This is useful
  in certain recovery scenarios, e.g., when the monitor database has been lost
  and rebuilt, and the restored file system is expected to have the same ID as
  before.
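
  A sketch with placeholder names and ID (the pools must already exist; the
  exact argument form may differ, see the CephFS recovery documentation)::

    ceph fs new myfs myfs_metadata myfs_data --fscid 27 --force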

* fs: A file system can be renamed using the `fs rename` command. Any cephx
  credentials authorized for the old file system name will need to be
  reauthorized to the new file system name. Since the operations of the clients
  using these re-authorized IDs may be disrupted, this command requires the
  "--yes-i-really-mean-it" flag. Also, mirroring is expected to be disabled
  on the file system.
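
  A sketch with placeholder file system names::

    ceph fs rename myfs mynewfs --yes-i-really-mean-it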

* fs: A FS volume can be renamed using the `fs volume rename` command. Any
  cephx credentials authorized for the old volume name will need to be
  reauthorized to the new volume name. Since the operations of the clients
  using these re-authorized IDs may be disrupted, this command requires the
  "--yes-i-really-mean-it" flag. Also, mirroring is expected to be disabled on
  the file system.
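
  A sketch with placeholder volume names::

    ceph fs volume rename myvol mynewvol --yes-i-really-mean-it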

* MDS upgrades no longer require stopping all standby MDS daemons before
  upgrading the sole active MDS for a file system.

* RGW: RGW now supports rate limiting by user and/or by bucket. With this
  feature it is possible to cap the total number of operations and/or bytes
  per minute delivered for a user or a bucket. The admin can limit READ
  operations, WRITE operations, or both. The rate limiting configuration can
  also be applied to all users and all buckets via the global configuration.
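
  As an illustrative sketch, limiting and enabling rate limiting for a single
  user (the uid and values are placeholders)::

    radosgw-admin ratelimit set --ratelimit-scope=user --uid=johndoe --max-read-ops=1024
    radosgw-admin ratelimit enable --ratelimit-scope=user --uid=johndoe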

* RGW: `radosgw-admin realm delete` has been renamed to `radosgw-admin realm rm`.
  This is consistent with the help message.

* OSD: Ceph now uses mclock_scheduler as the default osd_op_queue for BlueStore
  OSDs to provide QoS. The 'mclock_scheduler' is not supported for Filestore
  OSDs. Therefore, the default 'osd_op_queue' is set to 'wpq' for Filestore
  OSDs and is enforced even if the user attempts to change it. For more details
  on configuring mclock, see:

  https://docs.ceph.com/en/latest/rados/configuration/mclock-config-ref/
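
  As an illustration, the active mclock profile can be changed at runtime, for
  example::

    ceph config set osd osd_mclock_profile high_client_ops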

* CephFS: Failure to replay the journal by a standby-replay daemon will now
  cause the rank to be marked damaged.

* RGW: S3 bucket notification events now contain an `eTag` key instead of
  `etag`, and eventName values no longer carry the `s3:` prefix, fixing
  deviations from the message format observed on AWS.

* RGW: It is now possible to specify SSL options and ciphers for the beast
  frontend. The default SSL options setting is
  "no_sslv2:no_sslv3:no_tlsv1:no_tlsv1_1". If you want to revert to the old
  behavior, add 'ssl_options=' (empty) to the ``rgw frontends`` configuration.
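
  A sketch of restoring the old behavior (the certificate path is a
  placeholder)::

    ceph config set client.rgw rgw_frontends "beast ssl_port=443 ssl_certificate=/etc/ceph/rgw.pem ssl_options="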

* RGW: The behavior of Multipart Upload was modified so that only a
  CompleteMultipartUpload notification is sent at the end of the multipart
  upload. The POST notification at the beginning of the upload and the PUT
  notifications that were sent for each part are no longer sent.

* MGR: The pg_autoscaler has a new 'scale-down' profile which provides more
  performance from the start for new pools. However, the module will continue
  to use its old behavior by default, now called the 'scale-up' profile.
  For more details, see:

  https://docs.ceph.com/en/latest/rados/operations/placement-groups/

* MGR: The pg_autoscaler can now be turned `on` and `off` globally
  with the `noautoscale` flag. By default this flag is unset and
  the default pg_autoscale mode remains the same.
  For more details, see:

  https://docs.ceph.com/en/latest/rados/operations/placement-groups/
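
  A sketch of toggling and inspecting the flag::

    ceph osd pool set noautoscale
    ceph osd pool unset noautoscale
    ceph osd pool get noautoscale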

* The ``ceph pg dump`` command now prints three additional columns:
  `LAST_SCRUB_DURATION` shows the duration (in seconds) of the last completed
  scrub; `SCRUB_SCHEDULING` conveys whether a PG is scheduled to be scrubbed at
  a specified time, queued for scrubbing, or being scrubbed;
  `OBJECTS_SCRUBBED` shows the number of objects scrubbed in a PG after scrub
  begins.

* A health warning is now reported if the ``require-osd-release`` flag is not
  set to the appropriate release after a cluster upgrade.

* LevelDB support has been removed. ``WITH_LEVELDB`` is no longer a supported
  build option.

* MON/MGR: Pools can now be created with the `--bulk` flag. Any pool created
  with `--bulk` will use a profile of the `pg_autoscaler` that provides more
  performance from the start. However, any pool created without the `--bulk`
  flag will continue to use the old behavior by default. For more details, see:

  https://docs.ceph.com/en/latest/rados/operations/placement-groups/
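
  A sketch with a placeholder pool name::

    ceph osd pool create mybulkpool --bulk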

* Cephadm: ``osd_memory_target_autotune`` will be enabled by default, which
  will set ``mgr/cephadm/autotune_memory_target_ratio`` to ``0.7`` of total
  RAM. This will be unsuitable for hyperconverged infrastructures. For
  hyperconverged Ceph, please refer to the documentation or set
  ``mgr/cephadm/autotune_memory_target_ratio`` to ``0.2``.
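
  For example, to lower the ratio for a hyperconverged deployment::

    ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2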

* telemetry: Improved the opt-in flow so that users can keep sharing the same
  data, even when new data collections are available. A new 'perf' channel
  that collects various performance metrics is now available to opt in to with:
  `ceph telemetry on`
  `ceph telemetry enable channel perf`
  See a sample report with `ceph telemetry preview`.
  For more details, see:

  https://docs.ceph.com/en/latest/mgr/telemetry/

* MGR: The progress module now disables the pg recovery event by default,
  since the event is expensive and has interrupted other services when
  OSDs are being marked in/out of the cluster. However, the user may still
  enable this event at any time. For more details, see:

  https://docs.ceph.com/en/latest/mgr/progress/
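
  A sketch of re-enabling the event, assuming the module option is named
  ``allow_pg_recovery_event``::

    ceph config set mgr mgr/progress/allow_pg_recovery_event true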

>=16.0.0
--------

* mgr/nfs: The ``nfs`` module has been moved out of the volumes plugin. Before
  using the ``ceph nfs`` commands, the ``nfs`` mgr module must be enabled.
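
  If the module is not yet enabled, it can be turned on with::

    ceph mgr module enable nfs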

* volumes/nfs: The ``cephfs`` cluster type has been removed from the
  ``nfs cluster create`` subcommand. Clusters deployed by cephadm can
  support an NFS export of both ``rgw`` and ``cephfs`` from a single
  NFS cluster instance.

* The ``nfs cluster update`` command has been removed. You can modify
  the placement of an existing NFS service (and/or its associated
  ingress service) using ``orch ls --export`` and ``orch apply -i
  ...``.

* The ``orch apply nfs`` command no longer requires a pool or
  namespace argument. We strongly encourage users to use the defaults
  so that the ``nfs cluster ls`` and related commands will work
  properly.

* The ``nfs cluster delete`` and ``nfs export delete`` commands are
  deprecated and will be removed in a future release. Please use
  ``nfs cluster rm`` and ``nfs export rm`` instead.

* The ``nfs export create`` CLI arguments have changed, with the
  *fsname* or *bucket-name* argument position moving to the right of
  the *cluster-id* and *pseudo-path*. Consider transitioning to
  using named arguments instead of positional arguments (e.g., ``ceph
  nfs export create cephfs --cluster-id mycluster --pseudo-path /foo
  --fsname myfs`` instead of ``ceph nfs export create cephfs
  mycluster /foo myfs``) to ensure correct behavior with any
  version.

* mgr-pg_autoscaler: The autoscaler will now start out by scaling each
  pool to have a full complement of pgs from the start and will only
  decrease it when other pools need more pgs due to increased usage.
  This improves the out-of-the-box performance of Ceph by allowing more
  PGs to be created for a given pool.

* CephFS: Disabling allow_standby_replay on a file system will also stop all
  standby-replay daemons for that file system.

* New bluestore_rocksdb_options_annex config parameter. It complements
  bluestore_rocksdb_options and allows setting rocksdb options without
  repeating the existing defaults.
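
  A sketch, using a placeholder RocksDB option::

    ceph config set osd bluestore_rocksdb_options_annex "compaction_readahead_size=2097152"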

* The MDS in Pacific makes backwards-incompatible changes to the on-RADOS
  metadata structures, which prevent a downgrade to older releases
  (Octopus and older).

* $pid expansion in config paths like `admin_socket` will now properly expand
  to the daemon pid for commands like `ceph-mds` or `ceph-osd`. Previously only
  `ceph-fuse`/`rbd-nbd` expanded `$pid` with the actual daemon pid.

* The allowable options for some "radosgw-admin" commands have been changed.

  * "mdlog-list", "datalog-list", "sync-error-list" no longer accept
    start and end dates, but do accept a single optional start marker.
  * "mdlog-trim", "datalog-trim", "sync-error-trim" only accept a
    single marker giving the end of the trimmed range.
  * Similarly, the date ranges and marker ranges have been removed on
    the RESTful DATALog and MDLog list and trim operations.

* ceph-volume: The ``lvm batch`` subcommand received a major rewrite. This
  closed a number of bugs and improves usability in terms of size specification
  and calculation, as well as idempotency behaviour and disk replacement
  process. Please refer to
  https://docs.ceph.com/en/latest/ceph-volume/lvm/batch/ for more detailed
  information.

* Configuration variables for permitted scrub times have changed. The legal
  values for ``osd_scrub_begin_hour`` and ``osd_scrub_end_hour`` are ``0`` -
  ``23``. The use of 24 is now illegal. Specifying ``0`` for both values
  causes every hour to be allowed. The legal values for
  ``osd_scrub_begin_week_day`` and ``osd_scrub_end_week_day`` are ``0`` -
  ``6``. The use of ``7`` is now illegal. Specifying ``0`` for both values
  causes every day of the week to be allowed.

* Support for multiple file systems in a single Ceph cluster is now stable.
  New Ceph clusters enable support for multiple file systems by default.
  Existing clusters must still set the "enable_multiple" flag on the fs.
  See the CephFS documentation for more information.
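
  A sketch of setting the flag on an existing cluster::

    ceph fs flag set enable_multiple true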

* volume/nfs: The "ganesha-" prefix from cluster id and nfs-ganesha common
  config object was removed to ensure a consistent namespace across different
  orchestrator backends. Delete any existing nfs-ganesha clusters prior
  to upgrading and redeploy new clusters after upgrading to Pacific.

* A new health check, DAEMON_OLD_VERSION, warns if different versions of
  Ceph are running on daemons. It generates a health error if multiple
  versions are detected. This condition must exist for over
  ``mon_warn_older_version_delay`` (set to 1 week by default) in order for the
  health condition to be triggered. This allows most upgrades to proceed
  without falsely seeing the warning. If the upgrade is paused for an extended
  time period, health mute can be used like this: "ceph health mute
  DAEMON_OLD_VERSION --sticky". In this case, after the upgrade has finished,
  use "ceph health unmute DAEMON_OLD_VERSION".

* MGR: The progress module can now be turned on/off using these commands:
  ``ceph progress on`` and ``ceph progress off``.

* The ceph_volume_client.py library used for manipulating legacy "volumes" in
  CephFS is removed. All remaining users should use the "fs volume" interface
  exposed by the ceph-mgr:
  https://docs.ceph.com/en/latest/cephfs/fs-volumes/

* An AWS-compliant API, "GetTopicAttributes", was added to replace the existing
  "GetTopic" API. The new API should be used to fetch information about topics
  used for bucket notifications.

* librbd: The shared, read-only parent cache's config option
  ``immutable_object_cache_watermark`` has been updated to properly reflect
  the upper cache utilization before space is reclaimed. The default
  ``immutable_object_cache_watermark`` is now ``0.9``. If the capacity reaches
  90%, the daemon will evict cold cache entries.

* OSD: the option ``osd_fast_shutdown_notify_mon`` has been introduced to allow
  the OSD to notify the monitor that it is shutting down even if
  ``osd_fast_shutdown`` is enabled. This helps with the monitor logs on larger
  clusters, which may otherwise get many 'osd.X reported immediately failed by
  osd.Y' messages and confuse tools.

* rgw/kms/vault: the transit logic has been revamped to better use
  the transit engine in vault. To take advantage of this new
  functionality, configuration changes are required. See the current
  documentation (radosgw/vault) for more details.

* Scrubs are more aggressive in trying to find more simultaneous possible PGs
  within the osd_max_scrubs limitation. It is possible that increasing
  osd_scrub_sleep may be necessary to maintain client responsiveness.

* Version 2 of the cephx authentication protocol (``CEPHX_V2`` feature bit) is
  now required by default. It was introduced in 2018, adding replay attack
  protection for authorizers and making msgr v1 message signatures stronger
  (CVE-2018-1128 and CVE-2018-1129). Support is present in Jewel 10.2.11,
  Luminous 12.2.6, Mimic 13.2.1, Nautilus 14.2.0 and later; upstream kernels
  4.9.150, 4.14.86, 4.19 and later; and various distribution kernels, in
  particular CentOS 7.6 and later. To enable older clients, set
  ``cephx_require_version`` and ``cephx_service_require_version`` config
  options to 1.
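
  A sketch of allowing older clients::

    ceph config set global cephx_require_version 1
    ceph config set global cephx_service_require_version 1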

>=15.0.0
--------

* MON: The cluster log now logs health detail every ``mon_health_to_clog_interval``,
  which has been changed from 1hr to 10min. Logging of health detail will be
  skipped if there is no change in health summary since last known.

* The ``ceph df`` command now lists the number of pgs in each pool.

* Monitors now have the config option ``mon_allow_pool_size_one``, which is
  disabled by default. However, if enabled, users now have to pass the
  ``--yes-i-really-mean-it`` flag to ``osd pool set size 1`` if they are really
  sure about configuring pool size 1.
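
  A sketch with a placeholder pool name::

    ceph config set global mon_allow_pool_size_one true
    ceph osd pool set mypool size 1 --yes-i-really-mean-it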

* librbd now inherits the stripe unit and count from its parent image upon
  creation. This can be overridden by specifying different stripe settings
  during clone creation.

* The balancer is now on by default in upmap mode. Since upmap mode requires
  ``require_min_compat_client`` luminous, new clusters will only support
  luminous and newer clients by default. Existing clusters can enable upmap
  support by running ``ceph osd set-require-min-compat-client luminous``. It is
  still possible to turn the balancer off using the ``ceph balancer off``
  command. In earlier versions, the balancer was included in the
  ``always_on_modules`` list, but needed to be turned on explicitly using the
  ``ceph balancer on`` command.

307* MGR: the "cloud" mode of the diskprediction module is not supported anymore
308 and the ``ceph-mgr-diskprediction-cloud`` manager module has been removed. This
309 is because the external cloud service run by ProphetStor is no longer accessible
310 and there is no immediate replacement for it at this time. The "local" prediction
311 mode will continue to be supported.
312
* Cephadm: There were a lot of small usability improvements and bug fixes:

  * Grafana when deployed by Cephadm now binds to all network interfaces.
  * ``cephadm check-host`` now prints all detected problems at once.
  * Cephadm now calls ``ceph dashboard set-grafana-api-ssl-verify false``
    when generating an SSL certificate for Grafana.
  * The Alertmanager is now correctly pointed to the Ceph Dashboard.
  * ``cephadm adopt`` now supports adopting an Alertmanager.
  * ``ceph orch ps`` now supports filtering by service name.
  * ``ceph orch host ls`` now marks hosts as offline if they are not
    accessible.

* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS with
  a service id of mynfs, that will use the RADOS pool nfs-ganesha and namespace
  nfs-ns::

    ceph orch apply nfs mynfs nfs-ganesha nfs-ns

* Cephadm: ``ceph orch ls --export`` now returns all service specifications in
  yaml representation that is consumable by ``ceph orch apply``. In addition,
  the commands ``orch ps`` and ``orch ls`` now support ``--format yaml`` and
  ``--format json-pretty``.

* CephFS: Automatic static subtree partitioning policies may now be configured
  using the new distributed and random ephemeral pinning extended attributes on
  directories. See the documentation for more information:
  https://docs.ceph.com/docs/master/cephfs/multimds/
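
  A sketch of enabling distributed ephemeral pinning on a directory of a
  mounted file system (the mount path is a placeholder)::

    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/home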

* Cephadm: ``ceph orch apply osd`` supports a ``--preview`` flag that prints a
  preview of the OSD specification before deploying OSDs. This makes it
  possible to verify that the specification is correct before applying it.

* RGW: The ``radosgw-admin`` sub-commands dealing with orphans --
  ``radosgw-admin orphans find``, ``radosgw-admin orphans finish``, and
  ``radosgw-admin orphans list-jobs`` -- have been deprecated. They have
  not been actively maintained and they store intermediate results on
  the cluster, which could fill a nearly-full cluster. They have been
  replaced by a tool, currently considered experimental,
  ``rgw-orphan-list``.

* RBD: The name of the rbd pool object that is used to store
  rbd trash purge schedule is changed from "rbd_trash_trash_purge_schedule"
  to "rbd_trash_purge_schedule". Users that have already started using
  ``rbd trash purge schedule`` functionality and have per pool or namespace
  schedules configured should copy the "rbd_trash_trash_purge_schedule"
  object to "rbd_trash_purge_schedule" before the upgrade and remove
  "rbd_trash_trash_purge_schedule" using the following commands in every RBD
  pool and namespace where a trash purge schedule was previously
  configured::

    rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
    rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule

  or use any other convenient way to restore the schedule after the
  upgrade.

* librbd: The shared, read-only parent cache has been moved to a separate
  librbd plugin. If the parent cache was previously in use, you must also
  instruct librbd to load the plugin by adding the following to your
  configuration::

    rbd_plugins = parent_cache

* Monitors now have a config option ``mon_osd_warn_num_repaired``, 10 by
  default. If any OSD has repaired more than this many I/O errors in stored
  data, an ``OSD_TOO_MANY_REPAIRS`` health warning is generated.

* Introduce commands that manipulate required client features of a file
  system::

    ceph fs required_client_features <fs name> add <feature>
    ceph fs required_client_features <fs name> rm <feature>
    ceph fs feature ls

* OSD: A new configuration option ``osd_compact_on_start`` has been added,
  which triggers an OSD compaction on start. Setting this option to ``true``
  and restarting an OSD will result in an offline compaction of the OSD prior
  to booting.
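
  A sketch of enabling it for all OSDs::

    ceph config set osd osd_compact_on_start true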

* OSD: the option named ``bdev_nvme_retry_count`` has been removed, because
  in SPDK v20.07 there is no easy access to bdev_nvme options and this option
  was hardly used.

* Now, when the noscrub and/or nodeep-scrub flags are set globally or per pool,
  scheduled scrubs of the disabled type will be aborted. All user-initiated
  scrubs are NOT interrupted.

* Alpine build-related scripts, documentation and tests have been removed,
  since the most up-to-date APKBUILD script for Ceph is already included in
  Alpine Linux's aports repository.

* fs: Names of new FSs, volumes, subvolumes and subvolume groups can only
  contain alphanumeric and ``-``, ``_`` and ``.`` characters. Some commands
  or CephX credentials may not work with old FSs with non-conformant names.

* It is now possible to specify the initial monitor to contact for Ceph tools
  and daemons using the ``mon_host_override`` config option or
  ``--mon-host-override <ip>`` command-line switch. This generally should only
  be used for debugging and only affects initial communication with Ceph's
  monitor cluster.

* `blacklist` has been replaced with `blocklist` throughout. The following
  commands have changed:

  - ``ceph osd blacklist ...`` are now ``ceph osd blocklist ...``
  - ``ceph <tell|daemon> osd.<NNN> dump_blacklist`` is now ``ceph <tell|daemon> osd.<NNN> dump_blocklist``

* The following config options have changed:

  - ``mon osd blacklist default expire`` is now ``mon osd blocklist default expire``
  - ``mon mds blacklist interval`` is now ``mon mds blocklist interval``
  - ``mon mgr blacklist interval`` is now ``mon mgr blocklist interval``
  - ``rbd blacklist on break lock`` is now ``rbd blocklist on break lock``
  - ``rbd blacklist expire seconds`` is now ``rbd blocklist expire seconds``
  - ``mds session blacklist on timeout`` is now ``mds session blocklist on timeout``
  - ``mds session blacklist on evict`` is now ``mds session blocklist on evict``

* CephFS: Compatibility code for the old on-disk format of snapshots has been
  removed. The current on-disk format of snapshots was introduced by the Mimic
  release. If there are any snapshots created by a Ceph release older than
  Mimic, before upgrading either delete them all or scrub the whole
  filesystem::

    ceph daemon <mds of rank 0> scrub_path / force recursive repair
    ceph daemon <mds of rank 0> scrub_path '~mdsdir' force recursive repair

* CephFS: Scrub is supported in a multiple active MDS setup. MDS rank 0 handles
  scrub commands and forwards scrub to other MDS ranks if necessary.

* The following librados API calls have changed:

  - ``rados_blacklist_add`` is now ``rados_blocklist_add``; the former will
    issue a deprecation warning and be removed in a future release.
  - ``rados.blacklist_add`` is now ``rados.blocklist_add`` in the C++ API.

* The JSON output for the following commands now shows ``blocklist`` instead
  of ``blacklist``:

  - ``ceph osd dump``
  - ``ceph <tell|daemon> osd.<N> dump_blocklist``

* caps: MON and MDS caps can now be used to restrict a client's ability to view
  and operate on specific Ceph file systems. The FS can be specified using
  ``fsname`` in caps. This also affects the subcommand ``fs authorize``: the
  caps produced by it will be specific to the FS name passed in its arguments.
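
  A sketch with placeholder names; the resulting caps are restricted to the
  named file system::

    ceph fs authorize myfs client.alice / rw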

* fs: The root_squash flag can be set in MDS caps. It disallows file system
  operations that need write access for clients with uid=0 or gid=0. This
  feature should prevent accidents such as an inadvertent `sudo rm -rf /<path>`.

456* fs: "fs authorize" now sets MON cap to "allow <perm> fsname=<fsname>"
457 instead of setting it to "allow r" all the time.
458
* ``ceph pg #.# list_unfound`` output has been enhanced to provide
  might_have_unfound information, which indicates which OSDs may
  contain the unfound objects.

* The ``ceph orch apply rgw`` syntax and behavior have changed. RGW
  services can now be arbitrarily named (the name is no longer forced to be
  `realm.zone`). The ``--rgw-realm=...`` and ``--rgw-zone=...``
  arguments are now optional, which means that if they are omitted, a
  vanilla single-cluster RGW will be deployed. When the realm and
  zone are provided, the user is now responsible for setting up the
  multisite configuration beforehand--cephadm no longer attempts to
  create missing realms or zones.

* The ``min_size`` and ``max_size`` CRUSH rule properties have been removed.
  Older CRUSH maps will still compile, but Ceph will issue a warning that these
  fields are ignored.

* The cephadm NFS support has been simplified to no longer allow the
  pool and namespace where configuration is stored to be customized.
  As a result, the ``ceph orch apply nfs`` command no longer has
  ``--pool`` or ``--namespace`` arguments.

  Existing cephadm NFS deployments (from an earlier version of Pacific or
  from Octopus) will be automatically migrated when the cluster is
  upgraded. Note that the NFS ganesha daemons will be redeployed and
  it is possible that their IPs will change.

* RGW now requires a secure connection to the monitor by default
  (``auth_client_required=cephx`` and ``ms_mon_client_mode=secure``).
  If you have cephx authentication disabled on your cluster, you may
  need to adjust these settings for RGW.