>= 12.0.0
---------
* The "journaler allow split entries" config setting has been removed.
* The 'apply' mode of cephfs-journal-tool has been removed.
* Added a new configuration option, "public bind addr", to support dynamic
  environments like Kubernetes. When set, the Ceph MON daemon can bind
  locally to one IP address and advertise a different IP address ("public
  addr") on the network.
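
  For illustration, a hypothetical ceph.conf sketch (the addresses are
  made up)::

    [mon.a]
    public bind addr = 10.0.0.10    # local address the daemon binds to
    public addr = 192.168.1.10      # address advertised on the network
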
* RGW: bucket index resharding now uses the reshard namespace in upgrade
  scenarios as well. This is a change in behaviour from RC1, where a new
  pool was created for resharding.

12.0.0
------

* When a public network is specified but no cluster network is, the
  public network specification will be used for the cluster network as
  well. In older versions this would lead to cluster services being
  bound to 0.0.0.0:<port>, thus making the cluster services even more
  publicly available than the public services. When only a cluster
  network is specified, the public services will still bind to 0.0.0.0.
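
  A sketch of the relevant ceph.conf settings (the subnet is illustrative)::

    [global]
    public network = 192.168.1.0/24
    # With no "cluster network" set, cluster services now also bind
    # within 192.168.1.0/24 rather than to 0.0.0.0.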

* Some variants of the omap_get_keys and omap_get_vals librados
  functions have been deprecated in favor of omap_get_vals2 and
  omap_get_keys2. The new methods include an output argument
  indicating whether there are additional keys left to fetch.
  Previously this had to be inferred from the requested key count vs
  the number of keys returned, but this breaks with new OSD-side
  limits on the number of keys or bytes that can be returned by a
  single omap request. These limits were introduced in kraken but
  are effectively disabled by default (by setting a very large limit
  of 1 GB) because users of the newly deprecated interface cannot
  tell whether they should fetch more keys or not. In the case of
  the standalone calls in the C++ interface
  (IoCtx::omap_get_{keys,vals}), librados has been updated to loop on
  the client side to provide a correct result via multiple calls to
  the OSD. In the case of the methods used for building
  multi-operation transactions, however, client-side looping is not
  practical, and the methods have been deprecated. Note that use of
  either the IoCtx methods on older librados versions or the
  deprecated methods on any version of librados will lead to
  incomplete results if/when the new OSD limits are enabled.
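
  For illustration, a sketch of paging through an object's omap keys with
  the new interface (the object name, page size, and surrounding setup are
  hypothetical)::

    #include <rados/librados.hpp>
    #include <iostream>
    #include <set>
    #include <string>

    // Enumerate all omap keys of "myobject", fetching at most 512
    // entries per OSD request and using the new "pmore" output
    // argument to decide whether another request is needed.
    void dump_omap_keys(librados::IoCtx& io)
    {
      std::string start_after;  // "" means start at the beginning
      bool more = true;
      while (more) {
        std::set<std::string> keys;
        int r = io.omap_get_keys2("myobject", start_after, 512, &keys, &more);
        if (r < 0 || keys.empty())
          break;  // error, or nothing left to fetch
        for (const auto& k : keys) {
          std::cout << k << "\n";
          start_after = k;  // resume after the last key seen
        }
      }
    }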

* In previous versions, if a client sent an op to the wrong OSD, the OSD
  would reply with ENXIO. The rationale here is that the client or OSD is
  clearly buggy and we want to surface the error as clearly as possible.
  We now only send the ENXIO reply if the osd_enxio_on_misdirected_op option
  is enabled (it's off by default). This means that a VM using librbd that
  previously would have gotten an EIO and gone read-only will now see a
  blocked/hung IO instead.
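
  To restore the previous behavior, the option can be enabled, e.g. in
  ceph.conf::

    [osd]
    osd enxio on misdirected op = true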

* When configuring ceph-fuse mounts in /etc/fstab, a new syntax is
  available that uses "ceph.<arg>=<val>" in the options column, instead
  of putting configuration in the device column. The old style syntax
  still works. See the documentation page "Mount CephFS in your
  file systems table" for details.
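
  A sketch of the new style (the client id is illustrative)::

    # DEVICE  PATH       TYPE       OPTIONS                         DUMP  PASS
    none      /mnt/ceph  fuse.ceph  ceph.id=admin,_netdev,defaults  0     0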

12.0.1
------

* The original librados rados_objects_list_open (C) and objects_begin
  (C++) object listing API, deprecated in Hammer, has finally been
  removed. Users of this interface must update their software to use
  either the rados_nobjects_list_open (C) and nobjects_begin (C++) API or
  the new rados_object_list_begin (C) and object_list_begin (C++) API
  before updating the client-side librados library to Luminous.

  Object enumeration (via any API) with the latest librados version
  and pre-Hammer OSDs is no longer supported. Note that no in-tree
  Ceph services rely on object enumeration via the deprecated APIs, so
  only external librados users might be affected.

  The newest (and recommended) rados_object_list_begin (C) and
  object_list_begin (C++) API is only usable on clusters with the
  SORTBITWISE flag enabled (Jewel and later). (Note that this flag is
  required to be set before upgrading beyond Jewel.)
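
  For illustration, a sketch of listing objects with the supported
  iterator API (the surrounding setup is hypothetical)::

    #include <rados/librados.hpp>
    #include <iostream>

    // Print the name of every object in the pool behind "io" using
    // the nobjects_begin()/nobjects_end() iterator API.
    void list_objects(librados::IoCtx& io)
    {
      for (librados::NObjectIterator it = io.nobjects_begin();
           it != io.nobjects_end(); ++it) {
        std::cout << it->get_oid() << "\n";
      }
    }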

* The rados copy-get-classic operation has been removed since it has not been
  used by the OSD since before hammer. It is unlikely any librados user is
  using this operation explicitly since there is also the more modern copy-get.

* The RGW API for getting an object's torrent has changed its parameter from
  'get_torrent' to 'torrent' so that it is compatible with Amazon S3. A
  request for an object's torrent now looks like 'GET /ObjectName?torrent'.

* The configuration option "osd pool erasure code stripe width" has
  been replaced by "osd pool erasure code stripe unit", and given the
  ability to be overridden by the erasure code profile setting
  "stripe_unit". For more details see "Erasure Code Profiles" in the
  documentation.
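
  For example, a profile with an explicit stripe_unit (the profile name
  and values are illustrative)::

    ceph osd erasure-code-profile set myprofile k=4 m=2 stripe_unit=4K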

* rbd and cephfs can use erasure coding with bluestore. This may be
  enabled by setting 'allow_ec_overwrites' to 'true' for a pool. Since
  this relies on bluestore's checksumming to do deep scrubbing,
  enabling this on a pool stored on filestore is not allowed.
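
  For example (the pool name is illustrative)::

    ceph osd pool set my_ec_pool allow_ec_overwrites true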

* The 'rados df' JSON output now prints numeric values as numbers instead of
  strings.

* There was a bug introduced in Jewel (#19119) that broke the mapping behavior
  when an "out" OSD that still existed in the CRUSH map was removed with 'osd rm'.
  This could result in 'misdirected op' and other errors. The bug is now fixed,
  but the fix itself introduces the same risk because the behavior may vary between
  clients and OSDs. To avoid problems, please ensure that all OSDs are removed
  from the CRUSH map before deleting them. That is, be sure to do::

    ceph osd crush rm osd.123

  before::

    ceph osd rm osd.123

12.0.2
------

* CephFS clients without the 'p' flag in their authentication capability
  string will no longer be able to set quotas or any layout fields. This
  flag previously only restricted modification of the pool and namespace
  fields in layouts.
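
  For example, a cap including the 'p' flag can be granted with (the
  filesystem and client names are illustrative)::

    ceph fs authorize cephfs client.foo / rwp
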
* CephFS directory fragmentation (large directory support) is enabled
  by default on new filesystems. To enable it on existing filesystems
  use "ceph fs set <fs_name> allow_dirfrags".
* CephFS will generate a health warning if you have fewer standby daemons
  than it thinks you wanted. By default this will be 1 if you ever had
  a standby, and 0 if you did not. You can customize this using
  ``ceph fs set <fs> standby_count_wanted <number>``. Setting it
  to zero will effectively disable the health check.
* The "ceph mds tell ..." command has been removed. It is superseded
  by "ceph tell mds.<id> ...".

12.1.0
------

* The ``mon_osd_max_op_age`` option has been renamed to
  ``mon_osd_warn_op_age`` (default: 32 seconds), to indicate we
  generate a warning at this age. There is also a new
  ``mon_osd_err_op_age_ratio`` that is expressed as a multiple of
  ``mon_osd_warn_op_age`` (default: 128, for roughly an hour) to
  control when an error is generated.

* The default maximum size for a single RADOS object has been reduced from
  100GB to 128MB. The 100GB limit was impractical, while the 128MB limit
  is a bit high but not unreasonable. If you have an application written
  directly to librados that is using objects larger than 128MB you may
  need to adjust ``osd_max_object_size``.
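
  For example, such an application could raise the limit again via
  ceph.conf (the value is illustrative)::

    [osd]
    osd max object size = 1073741824  # 1 GB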

* The semantics of the 'rados ls' and librados object listing
  operations have always been a bit confusing in that "whiteout"
  objects (which logically don't exist and will return ENOENT if you
  try to access them) are included in the results. Previously
  whiteouts only occurred in cache tier pools. In luminous, logically
  deleted but snapshotted objects now result in a whiteout object, and
  as a result they will appear in 'rados ls' results, even though
  trying to read such an object will result in ENOENT. The 'rados
  listsnaps' operation can be used in such a case to enumerate which
  snapshots are present.

  This may seem a bit strange, but is less strange than having a
  deleted-but-snapshotted object not appear at all and be completely
  hidden from librados's ability to enumerate objects. Future
  versions of Ceph will likely include an alternative object
  enumeration interface that makes it more natural and efficient to
  enumerate all objects along with their snapshot and clone metadata.
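
  For example, 'rados listsnaps' can show which snapshots of such an
  object are present (pool and object names are illustrative)::

    rados -p mypool listsnaps myobject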

* The deprecated 'crush_ruleset' property has finally been removed; please use
  'crush_rule' instead for the 'osd pool get ...' and 'osd pool set ...' commands.
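
  For example (the pool name is illustrative)::

    ceph osd pool get mypool crush_rule
    ceph osd pool set mypool crush_rule replicated_rule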

* The 'osd pool default crush replicated ruleset' option has been
  removed and replaced by the 'osd pool default crush rule' option.
  By default it is -1, which means the mon will pick the first rule of
  type 'replicated' in the CRUSH map for replicated pools. Erasure
  coded pools have rules that are automatically created for them if they are
  not specified at pool creation time.

* The `status` ceph-mgr module is enabled by default, and initially provides two
  commands: `ceph tell mgr osd status` and `ceph tell mgr fs status`. These
  are high level colorized views to complement the existing CLI.

12.1.1
------

* choose_args encoding has been changed to make it architecture-independent.
  If you deployed Luminous dev releases or the 12.1.0 rc release and made use
  of the CRUSH choose_args feature, you need to remove all choose_args
  mappings from your CRUSH map before starting the upgrade.
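
  A sketch of one way to do this (the file names are illustrative)::

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit crush.txt to delete any choose_args sections, then:
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new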

* The 'ceph health' structured output (JSON or XML) no longer contains
  a 'timechecks' section describing the time sync status. This
  information is now available via the 'ceph time-sync-status'
  command.

* Certain extra fields in the 'ceph health' structured output that
  used to appear if the mons were low on disk space (which duplicated
  the information in the normal health warning messages) are now gone.

* The "ceph -w" output no longer contains audit log entries by default.
  Add "--watch-channel=audit" or "--watch-channel=*" to see them.

12.1.2
------

* New "ceph -w" behavior: the "ceph -w" output no longer contains I/O rates,
  available space, pg info, etc., because these are no longer logged to the
  central log (which is what "ceph -w" shows). The same information can be
  obtained by running "ceph pg stat"; alternatively, I/O rates per pool can
  be determined using "ceph osd pool stats". Although these commands do not
  self-update like "ceph -w" did, they do have the ability to return formatted
  output by providing a "--format=<format>" option.
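
  For example::

    ceph pg stat --format=json-pretty
    ceph osd pool stats --format=json-pretty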

* Pools are now expected to be associated with the application using them.
  Upon completing the upgrade to Luminous, the cluster will attempt to
  associate existing pools with known applications (i.e. CephFS, RBD, and
  RGW). In-use pools that are not associated with an application will
  generate a health warning. Any unassociated pools can be manually
  associated using the new "ceph osd pool application enable" command. For
  more details see "Associate Pool to Application" in the documentation.

* ceph-mgr now has a Zabbix plugin. Using zabbix_sender, it sends trapper
  events containing high-level information about the Ceph cluster to a
  Zabbix server. This makes it easy to monitor a Ceph cluster's status and
  send out notifications in case of a malfunction.
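
  A sketch of enabling it (the server name is illustrative)::

    ceph mgr module enable zabbix
    ceph zabbix config-set zabbix_host zabbix.example.com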

* The 'mon_warn_osd_usage_min_max_delta' config option has been
  removed and the associated health warning has been disabled because
  it does not address clusters undergoing recovery or CRUSH rules that do
  not target all devices in the cluster.

* Specifying user authorization capabilities for RBD clients has been
  simplified. The general syntax for using RBD capability profiles is
  "mon 'profile rbd' osd 'profile rbd[-read-only][ pool={pool-name}[, ...]]'".
  For more details see "User Management" in the documentation.
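
  For example, creating a user limited to one pool (the names are
  illustrative)::

    ceph auth get-or-create client.vms mon 'profile rbd' osd 'profile rbd pool=vms'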

* ``ceph config-key put`` has been deprecated in favor of ``ceph config-key set``.