1 ==========================
2 Monitor Config Reference
3 ==========================
4
5 Understanding how to configure a :term:`Ceph Monitor` is an important part of
6 building a reliable :term:`Ceph Storage Cluster`. **All Ceph Storage Clusters
7 have at least one monitor**. A monitor configuration usually remains fairly
8 consistent, but you can add, remove or replace a monitor in a cluster. See
9 `Adding/Removing a Monitor`_ and `Add/Remove a Monitor (ceph-deploy)`_ for
10 details.
11
12
13 .. index:: Ceph Monitor; Paxos
14
15 Background
16 ==========
17
18 Ceph Monitors maintain a "master copy" of the :term:`cluster map`, which means a
19 :term:`Ceph Client` can determine the location of all Ceph Monitors, Ceph OSD
20 Daemons, and Ceph Metadata Servers just by connecting to one Ceph Monitor and
21 retrieving a current cluster map. Before Ceph Clients can read from or write to
22 Ceph OSD Daemons or Ceph Metadata Servers, they must connect to a Ceph Monitor
23 first. With a current copy of the cluster map and the CRUSH algorithm, a Ceph
24 Client can compute the location for any object. The ability to compute object
25 locations allows a Ceph Client to talk directly to Ceph OSD Daemons, which is a
26 very important aspect of Ceph's high scalability and performance. See
27 `Scalability and High Availability`_ for additional details.
28
29 The primary role of the Ceph Monitor is to maintain a master copy of the cluster
30 map. Ceph Monitors also provide authentication and logging services. Ceph
31 Monitors write all changes in the monitor services to a single Paxos instance,
32 and Paxos writes the changes to a key/value store for strong consistency. Ceph
33 Monitors can query the most recent version of the cluster map during sync
34 operations. Ceph Monitors leverage the key/value store's snapshots and iterators
35 (using leveldb) to perform store-wide synchronization.
36
37 .. ditaa::
38
39 /-------------\ /-------------\
40 | Monitor | Write Changes | Paxos |
41 | cCCC +-------------->+ cCCC |
42 | | | |
43 +-------------+ \------+------/
44 | Auth | |
45 +-------------+ | Write Changes
46 | Log | |
47 +-------------+ v
48 | Monitor Map | /------+------\
49 +-------------+ | Key / Value |
50 | OSD Map | | Store |
51 +-------------+ | cCCC |
52 | PG Map | \------+------/
53 +-------------+ ^
54 | MDS Map | | Read Changes
55 +-------------+ |
56 | cCCC |*---------------------+
57 \-------------/
58
59
.. deprecated:: 0.58
61
62 In Ceph versions 0.58 and earlier, Ceph Monitors use a Paxos instance for
63 each service and store the map as a file.
64
65 .. index:: Ceph Monitor; cluster map
66
67 Cluster Maps
68 ------------
69
The cluster map is a composite of maps, including the monitor map, the OSD map,
the placement group map and the metadata server map. The cluster map tracks a
number of important things: which processes are ``in`` the Ceph Storage Cluster;
which of those processes are ``up`` and running or ``down``; whether the
placement groups are ``active`` or ``inactive``, and ``clean`` or in some other
state; and other details that reflect the current state of the cluster, such as
the total amount of storage space and the amount of storage used.
78
79 When there is a significant change in the state of the cluster--e.g., a Ceph OSD
80 Daemon goes down, a placement group falls into a degraded state, etc.--the
81 cluster map gets updated to reflect the current state of the cluster.
82 Additionally, the Ceph Monitor also maintains a history of the prior states of
83 the cluster. The monitor map, OSD map, placement group map and metadata server
84 map each maintain a history of their map versions. We call each version an
85 "epoch."
86
87 When operating your Ceph Storage Cluster, keeping track of these states is an
88 important part of your system administration duties. See `Monitoring a Cluster`_
89 and `Monitoring OSDs and PGs`_ for additional details.
90
91 .. index:: high availability; quorum
92
93 Monitor Quorum
94 --------------
95
Our Configuring Ceph section provides a trivial `Ceph configuration file`_ that
specifies one monitor for the test cluster. A cluster will run fine with a
single monitor; however, **a single monitor is a single point of failure**. To
ensure high availability in a production Ceph Storage Cluster, you should run
Ceph with multiple monitors so that the failure of a single monitor **WILL NOT**
bring down your entire cluster.
102
When a Ceph Storage Cluster runs multiple Ceph Monitors for high availability,
Ceph Monitors use `Paxos`_ to establish consensus about the master cluster map.
Reaching consensus requires a majority of the monitors to be running and able to
form a quorum (e.g., 1 out of 1; 2 out of 3; 3 out of 5; 4 out of 6; etc.).
108
109
110 .. index:: Ceph Monitor; consistency
111
112 Consistency
113 -----------
114
When you add monitor settings to your Ceph configuration file, you need to be
aware of some of the architectural aspects of Ceph Monitors. **Ceph imposes
strict consistency requirements** for a Ceph Monitor when discovering another
Ceph Monitor within the cluster. Whereas Ceph Clients and other Ceph daemons
use the Ceph configuration file to discover monitors, monitors discover each
other using the monitor map (monmap), not the Ceph configuration file.
121
122 A Ceph Monitor always refers to the local copy of the monmap when discovering
123 other Ceph Monitors in the Ceph Storage Cluster. Using the monmap instead of the
124 Ceph configuration file avoids errors that could break the cluster (e.g., typos
125 in ``ceph.conf`` when specifying a monitor address or port). Since monitors use
126 monmaps for discovery and they share monmaps with clients and other Ceph
127 daemons, **the monmap provides monitors with a strict guarantee that their
128 consensus is valid.**
129
130 Strict consistency also applies to updates to the monmap. As with any other
131 updates on the Ceph Monitor, changes to the monmap always run through a
132 distributed consensus algorithm called `Paxos`_. The Ceph Monitors must agree on
133 each update to the monmap, such as adding or removing a Ceph Monitor, to ensure
134 that each monitor in the quorum has the same version of the monmap. Updates to
135 the monmap are incremental so that Ceph Monitors have the latest agreed upon
136 version, and a set of previous versions. Maintaining a history enables a Ceph
137 Monitor that has an older version of the monmap to catch up with the current
138 state of the Ceph Storage Cluster.
139
140 If Ceph Monitors discovered each other through the Ceph configuration file
141 instead of through the monmap, it would introduce additional risks because the
142 Ceph configuration files aren't updated and distributed automatically. Ceph
143 Monitors might inadvertently use an older Ceph configuration file, fail to
144 recognize a Ceph Monitor, fall out of a quorum, or develop a situation where
145 `Paxos`_ isn't able to determine the current state of the system accurately.
146
147
148 .. index:: Ceph Monitor; bootstrapping monitors
149
150 Bootstrapping Monitors
151 ----------------------
152
In most configuration and deployment cases, tools that deploy Ceph (e.g.,
``ceph-deploy``) help bootstrap the Ceph Monitors by generating a monitor map
for you. A Ceph Monitor requires a few explicit settings:
157
158 - **Filesystem ID**: The ``fsid`` is the unique identifier for your
159 object store. Since you can run multiple clusters on the same
160 hardware, you must specify the unique ID of the object store when
161 bootstrapping a monitor. Deployment tools usually do this for you
162 (e.g., ``ceph-deploy`` can call a tool like ``uuidgen``), but you
163 may specify the ``fsid`` manually too.
164
165 - **Monitor ID**: A monitor ID is a unique ID assigned to each monitor within
166 the cluster. It is an alphanumeric value, and by convention the identifier
167 usually follows an alphabetical increment (e.g., ``a``, ``b``, etc.). This
168 can be set in a Ceph configuration file (e.g., ``[mon.a]``, ``[mon.b]``, etc.),
169 by a deployment tool, or using the ``ceph`` commandline.
170
171 - **Keys**: The monitor must have secret keys. A deployment tool such as
172 ``ceph-deploy`` usually does this for you, but you may
173 perform this step manually too. See `Monitor Keyrings`_ for details.
174
175 For additional details on bootstrapping, see `Bootstrapping a Monitor`_.
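
If you bootstrap monitors by hand rather than with a deployment tool, these
settings usually end up in the cluster configuration file (the secret keys live
in a keyring, not in ``ceph.conf``). The following sketch is illustrative only;
the ``fsid``, monitor ID, hostname and address are placeholder values that you
would replace with your own.

.. code-block:: ini

    [global]
    # Placeholder cluster ID; generate your own (e.g., with uuidgen).
    fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
    # Monitors that must be up to form the initial quorum.
    mon initial members = a

    [mon.a]
    host = hostname1
    mon addr = 10.0.0.10:6789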
176
177 .. index:: Ceph Monitor; configuring monitors
178
179 Configuring Monitors
180 ====================
181
182 To apply configuration settings to the entire cluster, enter the configuration
183 settings under ``[global]``. To apply configuration settings to all monitors in
184 your cluster, enter the configuration settings under ``[mon]``. To apply
185 configuration settings to specific monitors, specify the monitor instance
186 (e.g., ``[mon.a]``). By convention, monitor instance names use alpha notation.
187
188 .. code-block:: ini
189
190 [global]
191
192 [mon]
193
194 [mon.a]
195
196 [mon.b]
197
198 [mon.c]
199
200
201 Minimum Configuration
202 ---------------------
203
204 The bare minimum monitor settings for a Ceph monitor via the Ceph configuration
205 file include a hostname and a monitor address for each monitor. You can configure
206 these under ``[mon]`` or under the entry for a specific monitor.
207
208 .. code-block:: ini
209
210 [mon]
211 mon host = hostname1,hostname2,hostname3
212 mon addr = 10.0.0.10:6789,10.0.0.11:6789,10.0.0.12:6789
213
214
215 .. code-block:: ini
216
217 [mon.a]
218 host = hostname1
219 mon addr = 10.0.0.10:6789
220
221 See the `Network Configuration Reference`_ for details.
222
223 .. note:: This minimum configuration for monitors assumes that a deployment
224 tool generates the ``fsid`` and the ``mon.`` key for you.
225
226 Once you deploy a Ceph cluster, you **SHOULD NOT** change the IP address of
227 the monitors. However, if you decide to change the monitor's IP address, you
228 must follow a specific procedure. See `Changing a Monitor's IP Address`_ for
229 details.
230
231 Monitors can also be found by clients using DNS SRV records. See `Monitor lookup through DNS`_ for details.
232
233 Cluster ID
234 ----------
235
236 Each Ceph Storage Cluster has a unique identifier (``fsid``). If specified, it
237 usually appears under the ``[global]`` section of the configuration file.
238 Deployment tools usually generate the ``fsid`` and store it in the monitor map,
239 so the value may not appear in a configuration file. The ``fsid`` makes it
240 possible to run daemons for multiple clusters on the same hardware.
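
If you do specify the ``fsid`` manually (for example, when bootstrapping
without a deployment tool), it is simply a UUID under ``[global]``. The value
below is a placeholder, not a value to copy.

.. code-block:: ini

    [global]
    # Placeholder UUID; substitute the one generated for your own cluster.
    fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993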
241
242 ``fsid``
243
244 :Description: The cluster ID. One per cluster.
245 :Type: UUID
246 :Required: Yes.
247 :Default: N/A. May be generated by a deployment tool if not specified.
248
249 .. note:: Do not set this value if you use a deployment tool that does
250 it for you.
251
252
253 .. index:: Ceph Monitor; initial members
254
255 Initial Members
256 ---------------
257
258 We recommend running a production Ceph Storage Cluster with at least three Ceph
259 Monitors to ensure high availability. When you run multiple monitors, you may
260 specify the initial monitors that must be members of the cluster in order to
261 establish a quorum. This may reduce the time it takes for your cluster to come
262 online.
263
264 .. code-block:: ini
265
266 [mon]
267 mon initial members = a,b,c
268
269
270 ``mon initial members``
271
272 :Description: The IDs of initial monitors in a cluster during startup. If
273 specified, Ceph requires an odd number of monitors to form an
274 initial quorum (e.g., 3).
275
276 :Type: String
277 :Default: None
278
.. note:: A *majority* of the monitors in your cluster must be able to reach
   each other in order to establish a quorum. You can use this setting to
   decrease the number of monitors that must be present initially in order to
   establish a quorum.
282
283 .. index:: Ceph Monitor; data path
284
285 Data
286 ----
287
Ceph provides a default path where Ceph Monitors store data. For optimal
performance in a production Ceph Storage Cluster, we recommend running Ceph
Monitors on separate hosts and drives from Ceph OSD Daemons. Because leveldb
uses ``mmap()`` for writing the data, Ceph Monitors flush their data from memory
to disk very often, which can interfere with Ceph OSD Daemon workloads if the
data store is co-located with the OSD Daemons.
294
295 In Ceph versions 0.58 and earlier, Ceph Monitors store their data in files. This
296 approach allows users to inspect monitor data with common tools like ``ls``
297 and ``cat``. However, it doesn't provide strong consistency.
298
In Ceph versions 0.59 and later, Ceph Monitors store their data as key/value
pairs. Ceph Monitors require `ACID`_ transactions. Using such a data store
prevents recovering Ceph Monitors from running corrupted versions through
Paxos, and it enables multiple modification operations in a single atomic
batch, among other advantages.
304
305 Generally, we do not recommend changing the default data location. If you modify
306 the default location, we recommend that you make it uniform across Ceph Monitors
307 by setting it in the ``[mon]`` section of the configuration file.
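
If you did choose a non-default location, a uniform override might look like
the following sketch; the path shown is purely illustrative.

.. code-block:: ini

    [mon]
    # Illustrative non-default location; keep it identical on every monitor host.
    mon data = /ssd/ceph/mon/$cluster-$id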
308
309
310 ``mon data``
311
312 :Description: The monitor's data location.
313 :Type: String
314 :Default: ``/var/lib/ceph/mon/$cluster-$id``
315
316
317 .. index:: Ceph Storage Cluster; capacity planning, Ceph Monitor; capacity planning
318
319 Storage Capacity
320 ----------------
321
When a Ceph Storage Cluster gets close to its maximum capacity (i.e., ``mon osd
full ratio``), Ceph prevents you from writing to or reading from Ceph OSD
Daemons as a safety measure to prevent data loss. Therefore, letting a
production Ceph Storage Cluster approach its full ratio is not a good practice,
because it sacrifices high availability. The default full ratio is ``.95``, or
95% of capacity. This is a very aggressive setting for a test cluster with a
small number of OSDs.
329
.. tip:: When monitoring your cluster, be alert to warnings related to the
   ``nearfull`` ratio. A ``nearfull`` warning means that the failure of one or
   more OSDs could result in a temporary service disruption. Consider adding
   more OSDs to increase storage capacity.
334
335 A common scenario for test clusters involves a system administrator removing a
336 Ceph OSD Daemon from the Ceph Storage Cluster to watch the cluster rebalance;
337 then, removing another Ceph OSD Daemon, and so on until the Ceph Storage Cluster
338 eventually reaches the full ratio and locks up. We recommend a bit of capacity
339 planning even with a test cluster. Planning enables you to gauge how much spare
340 capacity you will need in order to maintain high availability. Ideally, you want
341 to plan for a series of Ceph OSD Daemon failures where the cluster can recover
342 to an ``active + clean`` state without replacing those Ceph OSD Daemons
343 immediately. You can run a cluster in an ``active + degraded`` state, but this
344 is not ideal for normal operating conditions.
345
The following diagram depicts a simplistic Ceph Storage Cluster containing 33
Ceph Nodes with one Ceph OSD Daemon per host, each Ceph OSD Daemon reading from
and writing to a 3TB drive. This example Ceph Storage Cluster therefore has a
maximum actual capacity of 99TB. With a ``mon osd full ratio`` of ``0.95``, if
the Ceph Storage Cluster falls to 5TB of remaining capacity, the cluster will
not allow Ceph Clients to read or write data. So the Ceph Storage Cluster's
operating capacity is 95TB, not 99TB.
353
354 .. ditaa::
355
356 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
357 | Rack 1 | | Rack 2 | | Rack 3 | | Rack 4 | | Rack 5 | | Rack 6 |
358 | cCCC | | cF00 | | cCCC | | cCCC | | cCCC | | cCCC |
359 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
360 | OSD 1 | | OSD 7 | | OSD 13 | | OSD 19 | | OSD 25 | | OSD 31 |
361 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
362 | OSD 2 | | OSD 8 | | OSD 14 | | OSD 20 | | OSD 26 | | OSD 32 |
363 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
364 | OSD 3 | | OSD 9 | | OSD 15 | | OSD 21 | | OSD 27 | | OSD 33 |
365 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
366 | OSD 4 | | OSD 10 | | OSD 16 | | OSD 22 | | OSD 28 | | Spare |
367 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
368 | OSD 5 | | OSD 11 | | OSD 17 | | OSD 23 | | OSD 29 | | Spare |
369 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
370 | OSD 6 | | OSD 12 | | OSD 18 | | OSD 24 | | OSD 30 | | Spare |
371 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
372
It is normal in such a cluster for one or two OSDs to fail. A less frequent but
reasonable scenario involves a rack's router or power supply failing, which
brings down multiple OSDs simultaneously (e.g., OSDs 7-12). In such a scenario,
you should still strive for a cluster that can remain operational and achieve an
``active + clean`` state--even if that means adding a few hosts with additional
OSDs in short order. If the outage pushes the capacity utilization of the
cluster above the full ratio, you may not lose data, but you could still
sacrifice data availability while resolving the outage within that failure
domain. For this reason, we recommend at least some rough capacity planning.
382
383 Identify two numbers for your cluster:
384
385 #. The number of OSDs.
#. The total capacity of the cluster.
387
If you divide the total capacity of your cluster by the number of OSDs in your
cluster, you will find the mean capacity of an OSD within your cluster.
Consider multiplying that number by the number of OSDs you expect to fail
simultaneously during normal operations (a relatively small number). Finally,
multiply the capacity of the cluster by the full ratio to arrive at a maximum
operating capacity; then, subtract the amount of data on the OSDs you expect to
fail to arrive at a reasonable full ratio. For example, in the 33-OSD cluster
above, the mean OSD capacity is 3TB, so if you expect two OSDs to fail
simultaneously you should plan for roughly 6TB of spare capacity below the
maximum operating capacity. Repeat the foregoing process with a higher number
of OSD failures (e.g., a rack of OSDs) to arrive at a reasonable number for a
near full ratio.
397
398 .. code-block:: ini
399
400 [global]
401
402 mon osd full ratio = .80
403 mon osd backfillfull ratio = .75
404 mon osd nearfull ratio = .70
405
406
407 ``mon osd full ratio``
408
409 :Description: The percentage of disk space used before an OSD is
410 considered ``full``.
411
412 :Type: Float
413 :Default: ``.95``
414
415
416 ``mon osd backfillfull ratio``
417
418 :Description: The percentage of disk space used before an OSD is
419 considered too ``full`` to backfill.
420
421 :Type: Float
422 :Default: ``.90``
423
424
425 ``mon osd nearfull ratio``
426
427 :Description: The percentage of disk space used before an OSD is
428 considered ``nearfull``.
429
430 :Type: Float
431 :Default: ``.85``
432
433
434 .. tip:: If some OSDs are nearfull, but others have plenty of capacity, you
435 may have a problem with the CRUSH weight for the nearfull OSDs.
436
437 .. index:: heartbeat
438
439 Heartbeat
440 ---------
441
442 Ceph monitors know about the cluster by requiring reports from each OSD, and by
443 receiving reports from OSDs about the status of their neighboring OSDs. Ceph
444 provides reasonable default settings for monitor/OSD interaction; however, you
445 may modify them as needed. See `Monitor/OSD Interaction`_ for details.
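
As a sketch of what such tuning can look like, the snippet below overrides two
of the monitor/OSD interaction settings described in that reference. The values
are illustrative only, not recommendations or defaults.

.. code-block:: ini

    [mon]
    # Require reports from at least this many OSDs before marking a peer OSD down.
    mon osd min down reporters = 3
    # Seconds to wait before marking a down OSD out.
    mon osd down out interval = 600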
446
447
448 .. index:: Ceph Monitor; leader, Ceph Monitor; provider, Ceph Monitor; requester, Ceph Monitor; synchronization
449
450 Monitor Store Synchronization
451 -----------------------------
452
453 When you run a production cluster with multiple monitors (recommended), each
454 monitor checks to see if a neighboring monitor has a more recent version of the
455 cluster map (e.g., a map in a neighboring monitor with one or more epoch numbers
higher than the most current epoch in the map of the monitor in question).
457 Periodically, one monitor in the cluster may fall behind the other monitors to
458 the point where it must leave the quorum, synchronize to retrieve the most
459 current information about the cluster, and then rejoin the quorum. For the
460 purposes of synchronization, monitors may assume one of three roles:
461
462 #. **Leader**: The `Leader` is the first monitor to achieve the most recent
463 Paxos version of the cluster map.
464
465 #. **Provider**: The `Provider` is a monitor that has the most recent version
466 of the cluster map, but wasn't the first to achieve the most recent version.
467
#. **Requester**: A `Requester` is a monitor that has fallen behind the leader
469 and must synchronize in order to retrieve the most recent information about
470 the cluster before it can rejoin the quorum.
471
472 These roles enable a leader to delegate synchronization duties to a provider,
473 which prevents synchronization requests from overloading the leader--improving
474 performance. In the following diagram, the requester has learned that it has
475 fallen behind the other monitors. The requester asks the leader to synchronize,
476 and the leader tells the requester to synchronize with a provider.
477
478
479 .. ditaa:: +-----------+ +---------+ +----------+
480 | Requester | | Leader | | Provider |
481 +-----------+ +---------+ +----------+
482 | | |
483 | | |
484 | Ask to Synchronize | |
485 |------------------->| |
486 | | |
487 |<-------------------| |
488 | Tell Requester to | |
489 | Sync with Provider | |
490 | | |
491 | Synchronize |
492 |--------------------+-------------------->|
493 | | |
494 |<-------------------+---------------------|
495 | Send Chunk to Requester |
496 | (repeat as necessary) |
| Requester Acks Chunk to Provider |
498 |--------------------+-------------------->|
499 | |
500 | Sync Complete |
501 | Notification |
502 |------------------->|
503 | |
504 |<-------------------|
505 | Ack |
506 | |
507
508
509 Synchronization always occurs when a new monitor joins the cluster. During
510 runtime operations, monitors may receive updates to the cluster map at different
511 times. This means the leader and provider roles may migrate from one monitor to
512 another. If this happens while synchronizing (e.g., a provider falls behind the
513 leader), the provider can terminate synchronization with a requester.
514
515 Once synchronization is complete, Ceph requires trimming across the cluster.
516 Trimming requires that the placement groups are ``active + clean``.
517
518
519 ``mon sync trim timeout``
520
521 :Description:
522 :Type: Double
523 :Default: ``30.0``
524
525
526 ``mon sync heartbeat timeout``
527
528 :Description:
529 :Type: Double
530 :Default: ``30.0``
531
532
533 ``mon sync heartbeat interval``
534
535 :Description:
536 :Type: Double
537 :Default: ``5.0``
538
539
540 ``mon sync backoff timeout``
541
542 :Description:
543 :Type: Double
544 :Default: ``30.0``
545
546
547 ``mon sync timeout``
548
549 :Description:
550 :Type: Double
551 :Default: ``30.0``
552
553
554 ``mon sync max retries``
555
556 :Description:
557 :Type: Integer
558 :Default: ``5``
559
560
561 ``mon sync max payload size``
562
563 :Description: The maximum size for a sync payload.
564 :Type: 32-bit Integer
565 :Default: ``1045676``
566
567
568 ``mon accept timeout``
569
570 :Description: Number of seconds the Leader will wait for the Requester(s) to
571 accept a Paxos update. It is also used during the Paxos recovery
572 phase for similar purposes.
573
574 :Type: Float
575 :Default: ``10.0``
576
577
578 ``paxos propose interval``
579
580 :Description: Gather updates for this time interval before proposing
581 a map update.
582
583 :Type: Double
584 :Default: ``1.0``
585
586
587 ``paxos min wait``
588
589 :Description: The minimum amount of time to gather updates after a period of
590 inactivity.
591
592 :Type: Double
593 :Default: ``0.05``
594
595
596 ``mon lease``
597
598 :Description: The length (in seconds) of the lease on the monitor's versions.
599 :Type: Float
600 :Default: ``5``
601
602
603 ``mon lease renew interval``
604
:Description: The interval (in seconds) for the Leader to renew the other
              monitors' leases.
607
608 :Type: Float
609 :Default: ``3``
610
611
612 ``mon lease ack timeout``
613
614 :Description: The number of seconds the Leader will wait for the Providers to
615 acknowledge the lease extension.
616
617 :Type: Float
618 :Default: ``10.0``
619
620
621 ``mon min osdmap epochs``
622
623 :Description: Minimum number of OSD map epochs to keep at all times.
624 :Type: 32-bit Integer
625 :Default: ``500``
626
627
628 ``mon max pgmap epochs``
629
630 :Description: Maximum number of PG map epochs the monitor should keep.
631 :Type: 32-bit Integer
632 :Default: ``500``
633
634
635 ``mon max log epochs``
636
637 :Description: Maximum number of Log epochs the monitor should keep.
638 :Type: 32-bit Integer
639 :Default: ``500``
640
641
642
643
644 Slurp
645 -----
646
In Ceph versions 0.58 and earlier, when a Paxos service drifts beyond a given
648 number of versions, Ceph triggers the `slurp` mechanism, which establishes a
649 connection with the quorum Leader and obtains every single version the Leader
650 has for every service that has drifted. In Ceph versions 0.59 and later, slurp
651 will not work, because there is a single Paxos instance for all services.
652
653 .. deprecated:: 0.58
654
655 ``paxos max join drift``
656
657 :Description: The maximum Paxos iterations before we must first sync the
658 monitor data stores.
659 :Type: Integer
660 :Default: ``10``
661
662
663 ``mon slurp timeout``
664
665 :Description: The number of seconds the monitor has to recover using slurp
666 before the process is aborted and the monitor bootstraps.
667
668 :Type: Double
669 :Default: ``10.0``
670
671
672 ``mon slurp bytes``
673
674 :Description: Limits the slurp messages to the specified number of bytes.
675 :Type: 32-bit Integer
676 :Default: ``256 * 1024``
677
678
679 .. index:: Ceph Monitor; clock
680
681 Clock
682 -----
683
684 Ceph daemons pass critical messages to each other, which must be processed
685 before daemons reach a timeout threshold. If the clocks in Ceph monitors
686 are not synchronized, it can lead to a number of anomalies. For example:
687
- Daemons ignoring received messages (e.g., because their timestamps are outdated)
- Timeouts triggered too soon or too late when a message wasn't received in time.
690
691 See `Monitor Store Synchronization`_ and `Slurp`_ for details.
692
693
694 .. tip:: You SHOULD install NTP on your Ceph monitor hosts to
695 ensure that the monitor cluster operates with synchronized clocks.
696
Clock drift may still be noticeable with NTP even though the discrepancy isn't
yet harmful. Ceph's clock drift / clock skew warnings may get triggered even
though NTP maintains a reasonable level of synchronization. Increasing the
allowed clock drift may be tolerable under such circumstances; however, a number
of factors such as workload, network latency, overrides to the default timeouts
and the `Monitor Store Synchronization`_ settings may influence the level of
acceptable clock drift without compromising Paxos guarantees.
704
705 Ceph provides the following tunable options to allow you to find
706 acceptable values.
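
For example, to tolerate a slightly larger skew before warning, you might raise
``mon clock drift allowed`` on all monitors. The values below are illustrative
only; choose your own with the caveats above in mind.

.. code-block:: ini

    [mon]
    # Illustrative override: tolerate up to 100 ms of drift between monitors.
    mon clock drift allowed = .100
    # Back off repeated clock drift warnings more aggressively.
    mon clock drift warn backoff = 10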
707
708
709 ``clock offset``
710
711 :Description: How much to offset the system clock. See ``Clock.cc`` for details.
712 :Type: Double
713 :Default: ``0``
714
715
716 .. deprecated:: 0.58
717
718 ``mon tick interval``
719
720 :Description: A monitor's tick interval in seconds.
721 :Type: 32-bit Integer
722 :Default: ``5``
723
724
725 ``mon clock drift allowed``
726
727 :Description: The clock drift in seconds allowed between monitors.
728 :Type: Float
729 :Default: ``.050``
730
731
732 ``mon clock drift warn backoff``
733
734 :Description: Exponential backoff for clock drift warnings
735 :Type: Float
736 :Default: ``5``
737
738
739 ``mon timecheck interval``
740
741 :Description: The time check interval (clock drift check) in seconds
742 for the leader.
743
744 :Type: Float
745 :Default: ``300.0``
746
747
748
749 Client
750 ------
751
752 ``mon client hunt interval``
753
754 :Description: The client will try a new monitor every ``N`` seconds until it
755 establishes a connection.
756
757 :Type: Double
758 :Default: ``3.0``
759
760
761 ``mon client ping interval``
762
763 :Description: The client will ping the monitor every ``N`` seconds.
764 :Type: Double
765 :Default: ``10.0``
766
767
768 ``mon client max log entries per message``
769
770 :Description: The maximum number of log entries a monitor will generate
771 per client message.
772
773 :Type: Integer
774 :Default: ``1000``
775
776
777 ``mon client bytes``
778
779 :Description: The amount of client message data allowed in memory (in bytes).
780 :Type: 64-bit Integer Unsigned
781 :Default: ``100ul << 20``
782
783
784 Pool settings
785 =============
Since version v0.94, there is support for pool flags, which allow or disallow changes to be made to pools.

Monitors can also disallow removal of pools if they are configured that way.
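
For example, a cluster that should never allow pools to be deleted might
combine the monitor flag with a pool-creation default, as in the following
sketch:

.. code-block:: ini

    [global]
    # Have the monitors refuse pool deletion requests.
    mon allow pool delete = false
    # Create new pools with the nodelete flag already set.
    osd pool default flag nodelete = true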
789
790 ``mon allow pool delete``
791
:Description: If the monitors should allow pools to be removed, regardless of what the pool flags say.
793 :Type: Boolean
794 :Default: ``false``
795
796 ``osd pool default flag hashpspool``
797
798 :Description: Set the hashpspool flag on new pools
799 :Type: Boolean
800 :Default: ``true``
801
802 ``osd pool default flag nodelete``
803
:Description: Set the nodelete flag on new pools, which prevents removal of those pools in any way.
805 :Type: Boolean
806 :Default: ``false``
807
808 ``osd pool default flag nopgchange``
809
810 :Description: Set the nopgchange flag on new pools. Does not allow the number of PGs to be changed for a pool.
811 :Type: Boolean
812 :Default: ``false``
813
814 ``osd pool default flag nosizechange``
815
:Description: Set the nosizechange flag on new pools. Does not allow the size of a pool to be changed.
817 :Type: Boolean
818 :Default: ``false``
819
820 For more information about the pool flags see `Pool values`_.
821
822 Miscellaneous
823 =============
824
825
826 ``mon max osd``
827
828 :Description: The maximum number of OSDs allowed in the cluster.
829 :Type: 32-bit Integer
830 :Default: ``10000``
831
832 ``mon globalid prealloc``
833
834 :Description: The number of global IDs to pre-allocate for clients and daemons in the cluster.
835 :Type: 32-bit Integer
836 :Default: ``100``
837
838 ``mon sync fs threshold``
839
840 :Description: Synchronize with the filesystem when writing the specified number of objects. Set it to ``0`` to disable it.
841 :Type: 32-bit Integer
842 :Default: ``5``
843
844 ``mon subscribe interval``
845
846 :Description: The refresh interval (in seconds) for subscriptions. The
847 subscription mechanism enables obtaining the cluster maps
848 and log information.
849
850 :Type: Double
851 :Default: ``300``
852
853
854 ``mon stat smooth intervals``
855
856 :Description: Ceph will smooth statistics over the last ``N`` PG maps.
857 :Type: Integer
858 :Default: ``2``
859
860
861 ``mon probe timeout``
862
863 :Description: Number of seconds the monitor will wait to find peers before bootstrapping.
864 :Type: Double
865 :Default: ``2.0``
866
867
868 ``mon daemon bytes``
869
870 :Description: The message memory cap for metadata server and OSD messages (in bytes).
871 :Type: 64-bit Integer Unsigned
872 :Default: ``400ul << 20``
873
874
875 ``mon max log entries per event``
876
877 :Description: The maximum number of log entries per event.
878 :Type: Integer
879 :Default: ``4096``
880
881
882 ``mon osd prime pg temp``
883
:Description: Enables or disables priming the PGMap with the previous OSDs when an
              ``out`` OSD comes back into the cluster. With the ``true`` setting,
              clients will continue to use the previous OSDs until the newly ``in``
              OSDs have peered for that PG.
888 :Type: Boolean
889 :Default: ``true``
890
891
892 ``mon osd prime pg temp max time``
893
894 :Description: How much time in seconds the monitor should spend trying to prime the
895 PGMap when an out OSD comes back into the cluster.
896 :Type: Float
897 :Default: ``0.5``
898
899
900
901 .. _Paxos: http://en.wikipedia.org/wiki/Paxos_(computer_science)
902 .. _Monitor Keyrings: ../../../dev/mon-bootstrap#secret-keys
903 .. _Ceph configuration file: ../ceph-conf/#monitors
904 .. _Network Configuration Reference: ../network-config-ref
905 .. _Monitor lookup through DNS: ../mon-lookup-dns
906 .. _ACID: http://en.wikipedia.org/wiki/ACID
907 .. _Adding/Removing a Monitor: ../../operations/add-or-rm-mons
908 .. _Add/Remove a Monitor (ceph-deploy): ../../deployment/ceph-deploy-mon
909 .. _Monitoring a Cluster: ../../operations/monitoring
910 .. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg
911 .. _Bootstrapping a Monitor: ../../../dev/mon-bootstrap
912 .. _Changing a Monitor's IP Address: ../../operations/add-or-rm-mons#changing-a-monitor-s-ip-address
913 .. _Monitor/OSD Interaction: ../mon-osd-interaction
914 .. _Scalability and High Availability: ../../../architecture#scalability-and-high-availability
915 .. _Pool values: ../../operations/pools/#set-pool-values