1 ==========================
2 Monitor Config Reference
3 ==========================
4
5 Understanding how to configure a :term:`Ceph Monitor` is an important part of
6 building a reliable :term:`Ceph Storage Cluster`. **All Ceph Storage Clusters
7 have at least one monitor**. A monitor configuration usually remains fairly
8 consistent, but you can add, remove or replace a monitor in a cluster. See
9 `Adding/Removing a Monitor`_ and `Add/Remove a Monitor (ceph-deploy)`_ for
10 details.
11
12
13 .. index:: Ceph Monitor; Paxos
14
15 Background
16 ==========
17
18 Ceph Monitors maintain a "master copy" of the :term:`cluster map`, which means a
19 :term:`Ceph Client` can determine the location of all Ceph Monitors, Ceph OSD
20 Daemons, and Ceph Metadata Servers just by connecting to one Ceph Monitor and
21 retrieving a current cluster map. Before Ceph Clients can read from or write to
22 Ceph OSD Daemons or Ceph Metadata Servers, they must connect to a Ceph Monitor
23 first. With a current copy of the cluster map and the CRUSH algorithm, a Ceph
24 Client can compute the location for any object. The ability to compute object
25 locations allows a Ceph Client to talk directly to Ceph OSD Daemons, which is a
26 very important aspect of Ceph's high scalability and performance. See
27 `Scalability and High Availability`_ for additional details.
28
29 The primary role of the Ceph Monitor is to maintain a master copy of the cluster
30 map. Ceph Monitors also provide authentication and logging services. Ceph
31 Monitors write all changes in the monitor services to a single Paxos instance,
32 and Paxos writes the changes to a key/value store for strong consistency. Ceph
33 Monitors can query the most recent version of the cluster map during sync
34 operations. Ceph Monitors leverage the key/value store's snapshots and iterators
35 (using leveldb) to perform store-wide synchronization.
36
37 .. ditaa::
38
39 /-------------\ /-------------\
40 | Monitor | Write Changes | Paxos |
41 | cCCC +-------------->+ cCCC |
42 | | | |
43 +-------------+ \------+------/
44 | Auth | |
45 +-------------+ | Write Changes
46 | Log | |
47 +-------------+ v
48 | Monitor Map | /------+------\
49 +-------------+ | Key / Value |
50 | OSD Map | | Store |
51 +-------------+ | cCCC |
52 | PG Map | \------+------/
53 +-------------+ ^
54 | MDS Map | | Read Changes
55 +-------------+ |
56 | cCCC |*---------------------+
57 \-------------/
58
59
.. deprecated:: 0.58
61
62 In Ceph versions 0.58 and earlier, Ceph Monitors use a Paxos instance for
63 each service and store the map as a file.
64
65 .. index:: Ceph Monitor; cluster map
66
67 Cluster Maps
68 ------------
69
The cluster map is a composite of maps, including the monitor map, the OSD map,
the placement group map and the metadata server map. The cluster map tracks a
number of important things: which processes are ``in`` the Ceph Storage Cluster;
which of those processes are ``up`` and running or ``down``; whether the
placement groups are ``active`` or ``inactive``, and ``clean`` or in some other
state; and other details that reflect the current state of the cluster, such as
the total amount of storage space and the amount of storage used.
78
79 When there is a significant change in the state of the cluster--e.g., a Ceph OSD
80 Daemon goes down, a placement group falls into a degraded state, etc.--the
81 cluster map gets updated to reflect the current state of the cluster.
82 Additionally, the Ceph Monitor also maintains a history of the prior states of
83 the cluster. The monitor map, OSD map, placement group map and metadata server
84 map each maintain a history of their map versions. We call each version an
85 "epoch."
86
87 When operating your Ceph Storage Cluster, keeping track of these states is an
88 important part of your system administration duties. See `Monitoring a Cluster`_
89 and `Monitoring OSDs and PGs`_ for additional details.
90
91 .. index:: high availability; quorum
92
93 Monitor Quorum
94 --------------
95
The Configuring Ceph section provides a trivial `Ceph configuration file`_ that
configures one monitor in the test cluster. A cluster will run fine with a
single monitor; however, **a single monitor is a single point of failure**. To
ensure high availability in a production Ceph Storage Cluster, you should run
Ceph with multiple monitors so that the failure of a single monitor **WILL NOT**
bring down your entire cluster.
102
When a Ceph Storage Cluster runs multiple Ceph Monitors for high availability,
Ceph Monitors use `Paxos`_ to establish consensus about the master cluster map.
A consensus requires a majority of monitors running to establish a quorum for
consensus about the cluster map (e.g., 1 out of 1; 2 out of 3; 3 out of 5;
4 out of 6; etc.).
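
For example, a minimal sketch of a three-monitor deployment; the monitor IDs and
addresses below are illustrative:

.. code-block:: ini

        [global]
        mon host = 10.0.0.2,10.0.0.3,10.0.0.4

        [mon]
        mon initial members = a,b,c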
108
``mon force quorum join``

:Description: Force the monitor to join the quorum, even if it has previously
              been removed from the map.
:Type: Boolean
:Default: ``False``
114
115 .. index:: Ceph Monitor; consistency
116
117 Consistency
118 -----------
119
120 When you add monitor settings to your Ceph configuration file, you need to be
121 aware of some of the architectural aspects of Ceph Monitors. **Ceph imposes
122 strict consistency requirements** for a Ceph monitor when discovering another
Ceph Monitor within the cluster. Whereas Ceph Clients and other Ceph daemons
124 use the Ceph configuration file to discover monitors, monitors discover each
125 other using the monitor map (monmap), not the Ceph configuration file.
126
127 A Ceph Monitor always refers to the local copy of the monmap when discovering
128 other Ceph Monitors in the Ceph Storage Cluster. Using the monmap instead of the
129 Ceph configuration file avoids errors that could break the cluster (e.g., typos
130 in ``ceph.conf`` when specifying a monitor address or port). Since monitors use
131 monmaps for discovery and they share monmaps with clients and other Ceph
132 daemons, **the monmap provides monitors with a strict guarantee that their
133 consensus is valid.**
134
135 Strict consistency also applies to updates to the monmap. As with any other
136 updates on the Ceph Monitor, changes to the monmap always run through a
137 distributed consensus algorithm called `Paxos`_. The Ceph Monitors must agree on
138 each update to the monmap, such as adding or removing a Ceph Monitor, to ensure
139 that each monitor in the quorum has the same version of the monmap. Updates to
140 the monmap are incremental so that Ceph Monitors have the latest agreed upon
141 version, and a set of previous versions. Maintaining a history enables a Ceph
142 Monitor that has an older version of the monmap to catch up with the current
143 state of the Ceph Storage Cluster.
144
145 If Ceph Monitors discovered each other through the Ceph configuration file
146 instead of through the monmap, it would introduce additional risks because the
147 Ceph configuration files are not updated and distributed automatically. Ceph
148 Monitors might inadvertently use an older Ceph configuration file, fail to
149 recognize a Ceph Monitor, fall out of a quorum, or develop a situation where
150 `Paxos`_ is not able to determine the current state of the system accurately.
151
152
153 .. index:: Ceph Monitor; bootstrapping monitors
154
155 Bootstrapping Monitors
156 ----------------------
157
In most configuration and deployment cases, tools that deploy Ceph may help
bootstrap the Ceph Monitors by generating a monitor map for you (e.g.,
``ceph-deploy``). A Ceph Monitor requires a few explicit settings:
162
163 - **Filesystem ID**: The ``fsid`` is the unique identifier for your
164 object store. Since you can run multiple clusters on the same
165 hardware, you must specify the unique ID of the object store when
166 bootstrapping a monitor. Deployment tools usually do this for you
167 (e.g., ``ceph-deploy`` can call a tool like ``uuidgen``), but you
168 may specify the ``fsid`` manually too.
169
170 - **Monitor ID**: A monitor ID is a unique ID assigned to each monitor within
171 the cluster. It is an alphanumeric value, and by convention the identifier
172 usually follows an alphabetical increment (e.g., ``a``, ``b``, etc.). This
173 can be set in a Ceph configuration file (e.g., ``[mon.a]``, ``[mon.b]``, etc.),
174 by a deployment tool, or using the ``ceph`` commandline.
175
176 - **Keys**: The monitor must have secret keys. A deployment tool such as
177 ``ceph-deploy`` usually does this for you, but you may
178 perform this step manually too. See `Monitor Keyrings`_ for details.
179
180 For additional details on bootstrapping, see `Bootstrapping a Monitor`_.
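
Putting these pieces together, a hand-written bootstrap fragment might look like
the following sketch. The ``fsid`` value, monitor ID, hostname, and address are
illustrative; the secret keys live in a keyring file rather than in ``ceph.conf``.

.. code-block:: ini

        [global]
        fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993

        [mon.a]
        host = hostname1
        mon addr = 10.0.0.10:6789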
181
182 .. index:: Ceph Monitor; configuring monitors
183
184 Configuring Monitors
185 ====================
186
187 To apply configuration settings to the entire cluster, enter the configuration
188 settings under ``[global]``. To apply configuration settings to all monitors in
189 your cluster, enter the configuration settings under ``[mon]``. To apply
190 configuration settings to specific monitors, specify the monitor instance
191 (e.g., ``[mon.a]``). By convention, monitor instance names use alpha notation.
192
193 .. code-block:: ini
194
195 [global]
196
197 [mon]
198
199 [mon.a]
200
201 [mon.b]
202
203 [mon.c]
204
205
206 Minimum Configuration
207 ---------------------
208
209 The bare minimum monitor settings for a Ceph monitor via the Ceph configuration
210 file include a hostname and a monitor address for each monitor. You can configure
211 these under ``[mon]`` or under the entry for a specific monitor.
212
213 .. code-block:: ini
214
215 [global]
216 mon host = 10.0.0.2,10.0.0.3,10.0.0.4
217
218 .. code-block:: ini
219
220 [mon.a]
221 host = hostname1
222 mon addr = 10.0.0.10:6789
223
224 See the `Network Configuration Reference`_ for details.
225
226 .. note:: This minimum configuration for monitors assumes that a deployment
227 tool generates the ``fsid`` and the ``mon.`` key for you.
228
229 Once you deploy a Ceph cluster, you **SHOULD NOT** change the IP address of
230 the monitors. However, if you decide to change the monitor's IP address, you
231 must follow a specific procedure. See `Changing a Monitor's IP Address`_ for
232 details.
233
234 Monitors can also be found by clients using DNS SRV records. See `Monitor lookup through DNS`_ for details.
235
236 Cluster ID
237 ----------
238
239 Each Ceph Storage Cluster has a unique identifier (``fsid``). If specified, it
240 usually appears under the ``[global]`` section of the configuration file.
241 Deployment tools usually generate the ``fsid`` and store it in the monitor map,
242 so the value may not appear in a configuration file. The ``fsid`` makes it
243 possible to run daemons for multiple clusters on the same hardware.
244
245 ``fsid``
246
247 :Description: The cluster ID. One per cluster.
248 :Type: UUID
249 :Required: Yes.
250 :Default: N/A. May be generated by a deployment tool if not specified.
251
252 .. note:: Do not set this value if you use a deployment tool that does
253 it for you.
254
255
256 .. index:: Ceph Monitor; initial members
257
258 Initial Members
259 ---------------
260
261 We recommend running a production Ceph Storage Cluster with at least three Ceph
262 Monitors to ensure high availability. When you run multiple monitors, you may
263 specify the initial monitors that must be members of the cluster in order to
264 establish a quorum. This may reduce the time it takes for your cluster to come
265 online.
266
267 .. code-block:: ini
268
269 [mon]
270 mon initial members = a,b,c
271
272
273 ``mon initial members``
274
275 :Description: The IDs of initial monitors in a cluster during startup. If
276 specified, Ceph requires an odd number of monitors to form an
277 initial quorum (e.g., 3).
278
279 :Type: String
280 :Default: None
281
282 .. note:: A *majority* of monitors in your cluster must be able to reach
283 each other in order to establish a quorum. You can decrease the initial
284 number of monitors to establish a quorum with this setting.
285
286 .. index:: Ceph Monitor; data path
287
288 Data
289 ----
290
291 Ceph provides a default path where Ceph Monitors store data. For optimal
292 performance in a production Ceph Storage Cluster, we recommend running Ceph
Monitors on separate hosts and drives from Ceph OSD Daemons. Because leveldb
uses ``mmap()`` for writing the data, Ceph Monitors flush their data from memory
to disk very often, which can interfere with Ceph OSD Daemon workloads if the
data store is co-located with the OSD Daemons.
297
298 In Ceph versions 0.58 and earlier, Ceph Monitors store their data in files. This
299 approach allows users to inspect monitor data with common tools like ``ls``
300 and ``cat``. However, it doesn't provide strong consistency.
301
In Ceph versions 0.59 and later, Ceph Monitors store their data as key/value
pairs. Ceph Monitors require `ACID`_ transactions. Using a key/value data store
prevents recovering Ceph Monitors from running corrupted versions through Paxos,
and it enables multiple modification operations in a single atomic batch, among
other advantages.
307
308 Generally, we do not recommend changing the default data location. If you modify
309 the default location, we recommend that you make it uniform across Ceph Monitors
310 by setting it in the ``[mon]`` section of the configuration file.
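
For example, a minimal sketch that keeps a non-default location uniform across
all monitors; the path below is hypothetical:

.. code-block:: ini

        [mon]
        mon data = /srv/ceph/mon/$cluster-$id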
311
312
313 ``mon data``
314
315 :Description: The monitor's data location.
316 :Type: String
317 :Default: ``/var/lib/ceph/mon/$cluster-$id``
318
319
``mon data size warn``

:Description: Issue a ``HEALTH_WARN`` in cluster log when the monitor's data
              store grows larger than this size (15GB by default).
:Type: Integer
:Default: ``15*1024*1024*1024``
326
327
``mon data avail warn``

:Description: Issue a ``HEALTH_WARN`` in cluster log when the available disk
              space of the monitor's data store is less than or equal to this
              percentage.
:Type: Integer
:Default: 30
335
336
``mon data avail crit``

:Description: Issue a ``HEALTH_ERR`` in cluster log when the available disk
              space of the monitor's data store is less than or equal to this
              percentage.
:Type: Integer
:Default: 5
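
For instance, to be warned earlier about a growing monitor store or low disk
space, you might tighten these thresholds. The values below are illustrative,
not recommendations:

.. code-block:: ini

        [mon]
        # 10 GB expressed in bytes (illustrative)
        mon data size warn = 10737418240
        mon data avail warn = 40
        mon data avail crit = 10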
344
345
346 ``mon warn on cache pools without hit sets``
347
348 :Description: Issue a ``HEALTH_WARN`` in cluster log if a cache pool does not
349 have the ``hit_set_type`` value configured.
350 See :ref:`hit_set_type <hit_set_type>` for more
351 details.
352 :Type: Boolean
353 :Default: True
354
355
356 ``mon warn on crush straw calc version zero``
357
358 :Description: Issue a ``HEALTH_WARN`` in cluster log if the CRUSH's
359 ``straw_calc_version`` is zero. See
360 :ref:`CRUSH map tunables <crush-map-tunables>` for
361 details.
362 :Type: Boolean
363 :Default: True
364
365
``mon warn on legacy crush tunables``

:Description: Issue a ``HEALTH_WARN`` in cluster log if CRUSH tunables are too
              old (older than ``mon_crush_min_required_version``).
:Type: Boolean
:Default: True
372
373
374 ``mon crush min required version``
375
376 :Description: The minimum tunable profile version required by the cluster.
377 See
378 :ref:`CRUSH map tunables <crush-map-tunables>` for
379 details.
380 :Type: String
381 :Default: ``firefly``
382
383
``mon warn on osd down out interval zero``

:Description: Issue a ``HEALTH_WARN`` in cluster log if
              ``mon osd down out interval`` is zero. Having that option set to
              zero on the leader acts much like the ``noout`` flag. It is hard
              to diagnose a cluster that behaves as if ``noout`` were set
              without the flag actually being set, so we report a warning in
              this case.
:Type: Boolean
:Default: True
394
395
396 ``mon cache target full warn ratio``
397
398 :Description: Position between pool's ``cache_target_full`` and
399 ``target_max_object`` where we start warning
400 :Type: Float
401 :Default: ``0.66``
402
403
``mon health data update interval``

:Description: How often (in seconds) a monitor in quorum shares its health
              status with its peers (a negative number disables it).
:Type: Float
:Default: ``60``
410
411
412 ``mon health to clog``
413
414 :Description: Enable sending health summary to cluster log periodically.
415 :Type: Boolean
416 :Default: True
417
418
``mon health to clog tick interval``

:Description: How often (in seconds) the monitor sends the health summary to the
              cluster log (a non-positive number disables it). If the current
              health summary is empty or identical to the previous one, the
              monitor will not send it to the cluster log.
:Type: Integer
:Default: 3600
427
428
``mon health to clog interval``

:Description: How often (in seconds) the monitor sends the health summary to the
              cluster log (a non-positive number disables it). The monitor will
              always send the summary to the cluster log, whether or not the
              summary has changed.
:Type: Integer
:Default: 60
437
438
439
440 .. index:: Ceph Storage Cluster; capacity planning, Ceph Monitor; capacity planning
441
442 Storage Capacity
443 ----------------
444
445 When a Ceph Storage Cluster gets close to its maximum capacity (i.e., ``mon osd
446 full ratio``), Ceph prevents you from writing to or reading from Ceph OSD
447 Daemons as a safety measure to prevent data loss. Therefore, letting a
448 production Ceph Storage Cluster approach its full ratio is not a good practice,
449 because it sacrifices high availability. The default full ratio is ``.95``, or
95% of capacity. This is a very aggressive setting for a test cluster with a
small number of OSDs.
452
.. tip:: When monitoring your cluster, be alert to warnings related to the
   ``nearfull`` ratio. Hitting this ratio means that the failure of one or more
   OSDs could result in a temporary service disruption. Consider adding more
   OSDs to increase storage capacity.
457
458 A common scenario for test clusters involves a system administrator removing a
459 Ceph OSD Daemon from the Ceph Storage Cluster to watch the cluster rebalance;
460 then, removing another Ceph OSD Daemon, and so on until the Ceph Storage Cluster
461 eventually reaches the full ratio and locks up. We recommend a bit of capacity
462 planning even with a test cluster. Planning enables you to gauge how much spare
463 capacity you will need in order to maintain high availability. Ideally, you want
464 to plan for a series of Ceph OSD Daemon failures where the cluster can recover
465 to an ``active + clean`` state without replacing those Ceph OSD Daemons
466 immediately. You can run a cluster in an ``active + degraded`` state, but this
467 is not ideal for normal operating conditions.
468
469 The following diagram depicts a simplistic Ceph Storage Cluster containing 33
470 Ceph Nodes with one Ceph OSD Daemon per host, each Ceph OSD Daemon reading from
and writing to a 3TB drive. This example Ceph Storage Cluster has a maximum
472 actual capacity of 99TB. With a ``mon osd full ratio`` of ``0.95``, if the Ceph
473 Storage Cluster falls to 5TB of remaining capacity, the cluster will not allow
474 Ceph Clients to read and write data. So the Ceph Storage Cluster's operating
475 capacity is 95TB, not 99TB.
476
477 .. ditaa::
478
479 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
480 | Rack 1 | | Rack 2 | | Rack 3 | | Rack 4 | | Rack 5 | | Rack 6 |
481 | cCCC | | cF00 | | cCCC | | cCCC | | cCCC | | cCCC |
482 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
483 | OSD 1 | | OSD 7 | | OSD 13 | | OSD 19 | | OSD 25 | | OSD 31 |
484 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
485 | OSD 2 | | OSD 8 | | OSD 14 | | OSD 20 | | OSD 26 | | OSD 32 |
486 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
487 | OSD 3 | | OSD 9 | | OSD 15 | | OSD 21 | | OSD 27 | | OSD 33 |
488 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
489 | OSD 4 | | OSD 10 | | OSD 16 | | OSD 22 | | OSD 28 | | Spare |
490 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
491 | OSD 5 | | OSD 11 | | OSD 17 | | OSD 23 | | OSD 29 | | Spare |
492 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
493 | OSD 6 | | OSD 12 | | OSD 18 | | OSD 24 | | OSD 30 | | Spare |
494 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
495
496 It is normal in such a cluster for one or two OSDs to fail. A less frequent but
497 reasonable scenario involves a rack's router or power supply failing, which
498 brings down multiple OSDs simultaneously (e.g., OSDs 7-12). In such a scenario,
499 you should still strive for a cluster that can remain operational and achieve an
500 ``active + clean`` state--even if that means adding a few hosts with additional
OSDs in short order. If capacity utilization of the cluster exceeds the full
ratio, you may not lose data, but you could still sacrifice data availability
while resolving an outage within a failure domain. For this reason, we recommend
at least some rough capacity planning.
505
506 Identify two numbers for your cluster:
507
508 #. The number of OSDs.
#. The total capacity of the cluster.
510
If you divide the total capacity of your cluster by the number of OSDs in your
cluster, you will find the mean capacity of an OSD within your cluster. Consider
multiplying that number by the number of OSDs you expect to fail simultaneously
during normal operations (a relatively small number). Finally, multiply the
capacity of the cluster by the full ratio to arrive at a maximum operating
capacity; then, subtract the amount of data on the OSDs you expect to fail to
arrive at a reasonable full ratio. For example, in the 99TB cluster above, the
mean OSD capacity is 3TB; if you expect two OSDs to fail simultaneously,
subtract 6TB from the 94.05TB maximum operating capacity, leaving roughly 88TB
as a practical target. Repeat the foregoing process with a higher number of OSD
failures (e.g., a rack of OSDs) to arrive at a reasonable number for a near full
ratio.
520
521 The following settings only apply on cluster creation and are then stored in
522 the OSDMap.
523
524 .. code-block:: ini
525
526 [global]
527
528 mon osd full ratio = .80
529 mon osd backfillfull ratio = .75
530 mon osd nearfull ratio = .70
531
532
533 ``mon osd full ratio``
534
535 :Description: The percentage of disk space used before an OSD is
536 considered ``full``.
537
538 :Type: Float
539 :Default: ``.95``
540
541
542 ``mon osd backfillfull ratio``
543
544 :Description: The percentage of disk space used before an OSD is
545 considered too ``full`` to backfill.
546
547 :Type: Float
548 :Default: ``.90``
549
550
551 ``mon osd nearfull ratio``
552
553 :Description: The percentage of disk space used before an OSD is
554 considered ``nearfull``.
555
556 :Type: Float
557 :Default: ``.85``
558
559
560 .. tip:: If some OSDs are nearfull, but others have plenty of capacity, you
561 may have a problem with the CRUSH weight for the nearfull OSDs.
562
.. tip:: These settings only apply during cluster creation. Afterwards they
   need to be changed in the OSDMap using ``ceph osd set-nearfull-ratio`` and
   ``ceph osd set-full-ratio``.
566
567 .. index:: heartbeat
568
569 Heartbeat
570 ---------
571
572 Ceph monitors know about the cluster by requiring reports from each OSD, and by
573 receiving reports from OSDs about the status of their neighboring OSDs. Ceph
574 provides reasonable default settings for monitor/OSD interaction; however, you
575 may modify them as needed. See `Monitor/OSD Interaction`_ for details.
576
577
578 .. index:: Ceph Monitor; leader, Ceph Monitor; provider, Ceph Monitor; requester, Ceph Monitor; synchronization
579
580 Monitor Store Synchronization
581 -----------------------------
582
583 When you run a production cluster with multiple monitors (recommended), each
584 monitor checks to see if a neighboring monitor has a more recent version of the
cluster map (e.g., a map in a neighboring monitor with one or more epoch
numbers higher than the most current epoch in the map of the monitor in
question). From time to time, one monitor in the cluster may fall behind the
other monitors to
588 the point where it must leave the quorum, synchronize to retrieve the most
589 current information about the cluster, and then rejoin the quorum. For the
590 purposes of synchronization, monitors may assume one of three roles:
591
592 #. **Leader**: The `Leader` is the first monitor to achieve the most recent
593 Paxos version of the cluster map.
594
595 #. **Provider**: The `Provider` is a monitor that has the most recent version
596 of the cluster map, but wasn't the first to achieve the most recent version.
597
598 #. **Requester:** A `Requester` is a monitor that has fallen behind the leader
599 and must synchronize in order to retrieve the most recent information about
600 the cluster before it can rejoin the quorum.
601
602 These roles enable a leader to delegate synchronization duties to a provider,
603 which prevents synchronization requests from overloading the leader--improving
604 performance. In the following diagram, the requester has learned that it has
605 fallen behind the other monitors. The requester asks the leader to synchronize,
606 and the leader tells the requester to synchronize with a provider.
607
608
609 .. ditaa:: +-----------+ +---------+ +----------+
610 | Requester | | Leader | | Provider |
611 +-----------+ +---------+ +----------+
612 | | |
613 | | |
614 | Ask to Synchronize | |
615 |------------------->| |
616 | | |
617 |<-------------------| |
618 | Tell Requester to | |
619 | Sync with Provider | |
620 | | |
621 | Synchronize |
622 |--------------------+-------------------->|
623 | | |
624 |<-------------------+---------------------|
625 | Send Chunk to Requester |
626 | (repeat as necessary) |
627                  |   Requester Acks Chunk to Provider        |
628 |--------------------+-------------------->|
629 | |
630 | Sync Complete |
631 | Notification |
632 |------------------->|
633 | |
634 |<-------------------|
635 | Ack |
636 | |
637
638
639 Synchronization always occurs when a new monitor joins the cluster. During
640 runtime operations, monitors may receive updates to the cluster map at different
641 times. This means the leader and provider roles may migrate from one monitor to
642 another. If this happens while synchronizing (e.g., a provider falls behind the
643 leader), the provider can terminate synchronization with a requester.
644
645 Once synchronization is complete, Ceph requires trimming across the cluster.
646 Trimming requires that the placement groups are ``active + clean``.
647
648
649 ``mon sync trim timeout``
650
651 :Description:
652 :Type: Double
653 :Default: ``30.0``
654
655
656 ``mon sync heartbeat timeout``
657
658 :Description:
659 :Type: Double
660 :Default: ``30.0``
661
662
663 ``mon sync heartbeat interval``
664
665 :Description:
666 :Type: Double
667 :Default: ``5.0``
668
669
670 ``mon sync backoff timeout``
671
672 :Description:
673 :Type: Double
674 :Default: ``30.0``
675
676
``mon sync timeout``

:Description: Number of seconds the monitor will wait for the next update
              message from its sync provider before it gives up and bootstraps
              again.
:Type: Double
:Default: ``60.0``
684
685
686 ``mon sync max retries``
687
688 :Description:
689 :Type: Integer
690 :Default: ``5``
691
692
693 ``mon sync max payload size``
694
695 :Description: The maximum size for a sync payload (in bytes).
696 :Type: 32-bit Integer
697 :Default: ``1045676``
698
699
``paxos max join drift``

:Description: The maximum Paxos iterations before we must first sync the
              monitor data stores. When a monitor finds that its peer is too
              far ahead of it, it will first sync its data store before moving
              on.
:Type: Integer
:Default: ``10``
708
``paxos stash full interval``

:Description: How often (in commits) to stash a full copy of the PaxosService
              state. Currently this setting only affects the ``mds``, ``mon``,
              ``auth`` and ``mgr`` PaxosServices.
:Type: Integer
:Default: 25
716
717 ``paxos propose interval``
718
719 :Description: Gather updates for this time interval before proposing
720 a map update.
721 :Type: Double
722 :Default: ``1.0``
723
724
725 ``paxos min``
726
727 :Description: The minimum number of paxos states to keep around
728 :Type: Integer
729 :Default: 500
730
731
732 ``paxos min wait``
733
734 :Description: The minimum amount of time to gather updates after a period of
735 inactivity.
736 :Type: Double
737 :Default: ``0.05``
738
739
740 ``paxos trim min``
741
742 :Description: Number of extra proposals tolerated before trimming
743 :Type: Integer
744 :Default: 250
745
746
747 ``paxos trim max``
748
749 :Description: The maximum number of extra proposals to trim at a time
750 :Type: Integer
751 :Default: 500
752
753
754 ``paxos service trim min``
755
756 :Description: The minimum amount of versions to trigger a trim (0 disables it)
757 :Type: Integer
758 :Default: 250
759
760
761 ``paxos service trim max``
762
763 :Description: The maximum amount of versions to trim during a single proposal (0 disables it)
764 :Type: Integer
765 :Default: 500
766
767
768 ``mon max log epochs``
769
770 :Description: The maximum amount of log epochs to trim during a single proposal
771 :Type: Integer
772 :Default: 500
773
774
775 ``mon max pgmap epochs``
776
777 :Description: The maximum amount of pgmap epochs to trim during a single proposal
778 :Type: Integer
779 :Default: 500
780
781
``mon mds force trim to``

:Description: Force the monitor to trim mdsmaps to this point (0 disables it;
              dangerous, use with care).
:Type: Integer
:Default: 0
788
789
``mon osd force trim to``

:Description: Force the monitor to trim osdmaps to this point, even if there
              are PGs that are not clean at the specified epoch (0 disables it;
              dangerous, use with care).
:Type: Integer
:Default: 0
797
``mon osd cache size``

:Description: The size of the osdmap cache, so that the monitor does not rely
              solely on the underlying store's cache.
:Type: Integer
:Default: 10
803
804
``mon election timeout``

:Description: The maximum amount of time (in seconds) that the election
              proposer will wait for all ACKs.
:Type: Float
:Default: ``5``
810
811
812 ``mon lease``
813
814 :Description: The length (in seconds) of the lease on the monitor's versions.
815 :Type: Float
816 :Default: ``5``
817
818
``mon lease renew interval factor``

:Description: ``mon lease`` \* ``mon lease renew interval factor`` will be the
              interval for the Leader to renew the other monitors' leases. The
              factor should be less than ``1.0``.
:Type: Float
:Default: ``0.6``
826
827
828 ``mon lease ack timeout factor``
829
830 :Description: The Leader will wait ``mon lease`` \* ``mon lease ack timeout factor``
831 for the Providers to acknowledge the lease extension.
832 :Type: Float
833 :Default: ``2.0``
834
835
836 ``mon accept timeout factor``
837
838 :Description: The Leader will wait ``mon lease`` \* ``mon accept timeout factor``
839 for the Requester(s) to accept a Paxos update. It is also used
840 during the Paxos recovery phase for similar purposes.
841 :Type: Float
842 :Default: ``2.0``
843
844
845 ``mon min osdmap epochs``
846
847 :Description: Minimum number of OSD map epochs to keep at all times.
848 :Type: 32-bit Integer
849 :Default: ``500``
850
851
852 ``mon max pgmap epochs``
853
854 :Description: Maximum number of PG map epochs the monitor should keep.
855 :Type: 32-bit Integer
856 :Default: ``500``
857
858
859 ``mon max log epochs``
860
861 :Description: Maximum number of Log epochs the monitor should keep.
862 :Type: 32-bit Integer
863 :Default: ``500``
864
865
866
867 .. index:: Ceph Monitor; clock
868
869 Clock
870 -----
871
872 Ceph daemons pass critical messages to each other, which must be processed
873 before daemons reach a timeout threshold. If the clocks in Ceph monitors
874 are not synchronized, it can lead to a number of anomalies. For example:
875
876 - Daemons ignoring received messages (e.g., timestamps outdated)
877 - Timeouts triggered too soon/late when a message wasn't received in time.
878
879 See `Monitor Store Synchronization`_ for details.
880
881
882 .. tip:: You SHOULD install NTP on your Ceph monitor hosts to
883 ensure that the monitor cluster operates with synchronized clocks.
884
Clock drift may still be noticeable with NTP even though the discrepancy is not
yet harmful. Ceph's clock drift / clock skew warnings may get triggered even
though NTP maintains a reasonable level of synchronization. Allowing a greater
clock drift may be tolerable under such circumstances; however, a number of
factors such as workload, network latency, overrides to default timeouts and
the `Monitor Store Synchronization`_ settings may influence the level of
acceptable clock drift without compromising Paxos guarantees.
892
893 Ceph provides the following tunable options to allow you to find
894 acceptable values.
895
896
897 ``clock offset``
898
899 :Description: How much to offset the system clock. See ``Clock.cc`` for details.
900 :Type: Double
901 :Default: ``0``
902
903
904 .. deprecated:: 0.58
905
906 ``mon tick interval``
907
908 :Description: A monitor's tick interval in seconds.
909 :Type: 32-bit Integer
910 :Default: ``5``
911
912
913 ``mon clock drift allowed``
914
915 :Description: The clock drift in seconds allowed between monitors.
916 :Type: Float
917 :Default: ``.050``
918
919
920 ``mon clock drift warn backoff``
921
922 :Description: Exponential backoff for clock drift warnings
923 :Type: Float
924 :Default: ``5``
925
926
927 ``mon timecheck interval``
928
929 :Description: The time check interval (clock drift check) in seconds
930 for the Leader.
931
932 :Type: Float
933 :Default: ``300.0``
934
935
``mon timecheck skew interval``

:Description: The time check interval (clock drift check) in seconds for the
              Leader when a clock skew is present.
:Type: Float
:Default: ``30.0``
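
For example, if NTP keeps your monitors reasonably synchronized but you still
see clock skew warnings, you might relax the thresholds along these lines. The
values are illustrative, not recommendations:

.. code-block:: ini

        [mon]
        mon clock drift allowed = 0.2
        mon clock drift warn backoff = 30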
942
943
944 Client
945 ------
946
947 ``mon client hunt interval``
948
949 :Description: The client will try a new monitor every ``N`` seconds until it
950 establishes a connection.
951
952 :Type: Double
953 :Default: ``3.0``
954
955
956 ``mon client ping interval``
957
958 :Description: The client will ping the monitor every ``N`` seconds.
959 :Type: Double
960 :Default: ``10.0``
961
962
963 ``mon client max log entries per message``
964
965 :Description: The maximum number of log entries a monitor will generate
966 per client message.
967
968 :Type: Integer
969 :Default: ``1000``
970
971
972 ``mon client bytes``
973
974 :Description: The amount of client message data allowed in memory (in bytes).
975 :Type: 64-bit Integer Unsigned
976 :Default: ``100ul << 20``
977
978
979 Pool settings
980 =============
981 Since version v0.94 there is support for pool flags which allow or disallow changes to be made to pools.
982
983 Monitors can also disallow removal of pools if configured that way.
984
``mon allow pool delete``

:Description: Whether the monitors should allow pools to be removed, regardless
              of what the pool flags say.
:Type: Boolean
:Default: ``false``
990
991 ``osd pool default ec fast read``
992
993 :Description: Whether to turn on fast read on the pool or not. It will be used as
994 the default setting of newly created erasure coded pools if ``fast_read``
995 is not specified at create time.
996 :Type: Boolean
997 :Default: ``false``
998
999 ``osd pool default flag hashpspool``
1000
1001 :Description: Set the hashpspool flag on new pools
1002 :Type: Boolean
1003 :Default: ``true``
1004
``osd pool default flag nodelete``

:Description: Set the nodelete flag on new pools, which prevents the pools from
              being removed.
:Type: Boolean
:Default: ``false``
1010
1011 ``osd pool default flag nopgchange``
1012
1013 :Description: Set the nopgchange flag on new pools. Does not allow the number of PGs to be changed for a pool.
1014 :Type: Boolean
1015 :Default: ``false``
1016
``osd pool default flag nosizechange``

:Description: Set the nosizechange flag on new pools. Does not allow the size
              of a pool to be changed.
:Type: Boolean
:Default: ``false``
1022
1023 For more information about the pool flags see `Pool values`_.
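
For example, a conservative sketch that guards against accidental pool removal;
adjust to your own needs:

.. code-block:: ini

        [global]
        mon allow pool delete = false
        osd pool default flag nodelete = true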
1024
1025 Miscellaneous
1026 =============
1027
1028
1029 ``mon max osd``
1030
1031 :Description: The maximum number of OSDs allowed in the cluster.
1032 :Type: 32-bit Integer
1033 :Default: ``10000``
1034
1035 ``mon globalid prealloc``
1036
1037 :Description: The number of global IDs to pre-allocate for clients and daemons in the cluster.
1038 :Type: 32-bit Integer
1039 :Default: ``100``
1040
1041 ``mon subscribe interval``
1042
1043 :Description: The refresh interval (in seconds) for subscriptions. The
1044 subscription mechanism enables obtaining the cluster maps
1045 and log information.
1046
1047 :Type: Double
1048 :Default: ``86400``
1049
1050
1051 ``mon stat smooth intervals``
1052
1053 :Description: Ceph will smooth statistics over the last ``N`` PG maps.
1054 :Type: Integer
1055 :Default: ``2``
1056
1057
1058 ``mon probe timeout``
1059
1060 :Description: Number of seconds the monitor will wait to find peers before bootstrapping.
1061 :Type: Double
1062 :Default: ``2.0``
1063
1064
1065 ``mon daemon bytes``
1066
1067 :Description: The message memory cap for metadata server and OSD messages (in bytes).
1068 :Type: 64-bit Integer Unsigned
1069 :Default: ``400ul << 20``
1070
1071
1072 ``mon max log entries per event``
1073
1074 :Description: The maximum number of log entries per event.
1075 :Type: Integer
1076 :Default: ``4096``
1077
1078
``mon osd prime pg temp``

:Description: Enables or disables priming the PGMap with the previous OSDs when
              an ``out`` OSD comes back into the cluster. With the ``true``
              setting, clients will continue to use the previous OSDs until the
              newly ``in`` OSDs for a PG have peered.
:Type: Boolean
:Default: ``true``
1087
1088
1089 ``mon osd prime pg temp max time``
1090
1091 :Description: How much time in seconds the monitor should spend trying to prime the
1092 PGMap when an out OSD comes back into the cluster.
1093 :Type: Float
1094 :Default: ``0.5``
1095
1096
1097 ``mon osd prime pg temp max time estimate``
1098
1099 :Description: Maximum estimate of time spent on each PG before we prime all PGs
1100 in parallel.
1101 :Type: Float
1102 :Default: ``0.25``
1103
1104
1105 ``mon osd allow primary affinity``
1106
:Description: Allow ``primary_affinity`` to be set in the osdmap.
1108 :Type: Boolean
1109 :Default: False
1110
1111
``mon mds skip sanity``

:Description: Skip safety assertions on FSMap (in case of bugs where we want to
              continue anyway). The monitor terminates if the FSMap sanity check
              fails, but we can disable that behavior by enabling this option.
:Type: Boolean
:Default: False
1119
1120
1121 ``mon max mdsmap epochs``
1122
1123 :Description: The maximum amount of mdsmap epochs to trim during a single proposal.
1124 :Type: Integer
1125 :Default: 500
1126
1127
1128 ``mon config key max entry size``
1129
1130 :Description: The maximum size of config-key entry (in bytes)
1131 :Type: Integer
1132 :Default: 4096
1133
1134
``mon scrub interval``

:Description: How often (in seconds) the monitor scrubs its store by comparing
              the stored checksums with the computed ones for all stored keys.
:Type: Integer
:Default: 3600*24
1142
1143
1144 ``mon scrub max keys``
1145
1146 :Description: The maximum number of keys to scrub each time.
1147 :Type: Integer
1148 :Default: 100
1149
1150
``mon compact on start``

:Description: Compact the database used as Ceph Monitor store on ``ceph-mon``
              start. A manual compaction helps to shrink the monitor database
              and improve its performance if the regular compaction fails to
              work.
:Type: Boolean
:Default: False
1159
1160
``mon compact on bootstrap``

:Description: Compact the database used as Ceph Monitor store on bootstrap.
              Monitors start probing each other to create a quorum after
              bootstrap. If a monitor times out before joining the quorum, it
              will start over and bootstrap itself again.
:Type: Boolean
:Default: False
1169
1170
1171 ``mon compact on trim``
1172
1173 :Description: Compact a certain prefix (including paxos) when we trim its old states.
1174 :Type: Boolean
1175 :Default: True
1176
1177
``mon cpu threads``

:Description: Number of threads for performing CPU intensive work on the
              monitor.
:Type: Integer
:Default: ``4``
1183
1184
1185 ``mon osd mapping pgs per chunk``
1186
1187 :Description: We calculate the mapping from placement group to OSDs in chunks.
1188 This option specifies the number of placement groups per chunk.
1189 :Type: Integer
1190 :Default: 4096
1191
``mon session timeout``

:Description: The monitor will terminate inactive sessions that stay idle over
              this time limit.
:Type: Integer
:Default: 300
1198
1199
1200
1201 .. _Paxos: https://en.wikipedia.org/wiki/Paxos_(computer_science)
1202 .. _Monitor Keyrings: ../../../dev/mon-bootstrap#secret-keys
1203 .. _Ceph configuration file: ../ceph-conf/#monitors
1204 .. _Network Configuration Reference: ../network-config-ref
1205 .. _Monitor lookup through DNS: ../mon-lookup-dns
1206 .. _ACID: https://en.wikipedia.org/wiki/ACID
1207 .. _Adding/Removing a Monitor: ../../operations/add-or-rm-mons
1208 .. _Add/Remove a Monitor (ceph-deploy): ../../deployment/ceph-deploy-mon
1209 .. _Monitoring a Cluster: ../../operations/monitoring
1210 .. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg
1211 .. _Bootstrapping a Monitor: ../../../dev/mon-bootstrap
1212 .. _Changing a Monitor's IP Address: ../../operations/add-or-rm-mons#changing-a-monitor-s-ip-address
1213 .. _Monitor/OSD Interaction: ../mon-osd-interaction
1214 .. _Scalability and High Availability: ../../../architecture#scalability-and-high-availability
1215 .. _Pool values: ../../operations/pools/#set-pool-values