==========================
 Monitor Config Reference
==========================

Understanding how to configure a :term:`Ceph Monitor` is an important part of
building a reliable :term:`Ceph Storage Cluster`. **All Ceph Storage Clusters
have at least one monitor**. The monitor complement usually remains fairly
consistent, but you can add, remove or replace a monitor in a cluster. See
`Adding/Removing a Monitor`_ for details.
.. index:: Ceph Monitor; Paxos

Background
==========

Ceph Monitors maintain a "master copy" of the :term:`Cluster Map`, which means a
:term:`Ceph Client` can determine the location of all Ceph Monitors, Ceph OSD
Daemons, and Ceph Metadata Servers just by connecting to one Ceph Monitor and
retrieving a current cluster map. Before Ceph Clients can read from or write to
Ceph OSD Daemons or Ceph Metadata Servers, they must first connect to a Ceph
Monitor. With a current copy of the cluster map and the CRUSH algorithm, a Ceph
Client can compute the location of any object. The ability to compute object
locations allows a Ceph Client to talk directly to Ceph OSD Daemons, which is a
very important aspect of Ceph's high scalability and performance. See
`Scalability and High Availability`_ for additional details.
The primary role of the Ceph Monitor is to maintain a master copy of the cluster
map. Ceph Monitors also provide authentication and logging services. Ceph
Monitors write all changes in the monitor services to a single Paxos instance,
and Paxos writes the changes to a key/value store for strong consistency. Ceph
Monitors can query the most recent version of the cluster map during sync
operations. Ceph Monitors leverage the key/value store's snapshots and iterators
(using leveldb) to perform store-wide synchronization.
.. ditaa::

 /-------------\               /-------------\
 |   Monitor   | Write Changes |    Paxos    |
 |    cCCC     +-------------->+    cCCC     |
 |             |               |             |
 +-------------+               \------+------/
 | Monitor Map |                      |
 +-------------+                      | Write Changes
 |   OSD Map   |                      |
 +-------------+                      v
 |   PG Map    |               /------+------\
 +-------------+               |             |
 |   MDS Map   |               | Key / Value |
 +-------------+               |    Store    |
 |    cCCC     |               |    cCCC     |
 \------+------/               \------+------/
        ^                             |
        |       Read Changes          |
        +-----------------------------+
.. deprecated:: 0.58

   In Ceph versions 0.58 and earlier, Ceph Monitors use a Paxos instance for
   each service and store the map as a file.
.. index:: Ceph Monitor; cluster map

Cluster Maps
------------

The cluster map is a composite of maps, including the monitor map, the OSD map,
the placement group map and the metadata server map. The cluster map tracks a
number of important things: which processes are ``in`` the Ceph Storage Cluster;
which processes that are ``in`` the Ceph Storage Cluster are ``up`` and running
or ``down``; whether the placement groups are ``active`` or ``inactive``, and
``clean`` or in some other state; and other details that reflect the current
state of the cluster, such as the total amount of storage space and the amount
of storage used.
When there is a significant change in the state of the cluster--e.g., a Ceph OSD
Daemon goes down, a placement group falls into a degraded state, etc.--the
cluster map gets updated to reflect the current state of the cluster.
Additionally, the Ceph Monitor also maintains a history of the prior states of
the cluster. The monitor map, OSD map, placement group map and metadata server
map each maintain a history of their map versions. We call each version an
*epoch*.

When operating your Ceph Storage Cluster, keeping track of these states is an
important part of your system administration duties. See `Monitoring a Cluster`_
and `Monitoring OSDs and PGs`_ for additional details.
.. index:: high availability; quorum

Monitor Quorum
--------------

Our Configuring Ceph section provides a trivial `Ceph configuration file`_ that
provides for one monitor in the test cluster. A cluster will run fine with a
single monitor; however, **a single monitor is a single point of failure**. To
ensure high availability in a production Ceph Storage Cluster, you should run
Ceph with multiple monitors so that the failure of a single monitor **WILL NOT**
bring down your entire cluster.

When a Ceph Storage Cluster runs multiple Ceph Monitors for high availability,
Ceph Monitors use `Paxos`_ to establish consensus about the master cluster map.
A consensus requires a majority of monitors running to establish a quorum for
consensus about the cluster map (e.g., 1; 2 out of 3; 3 out of 5; 4 out of 6;
etc.).
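In other words, the quorum size is the smallest strict majority of the
configured monitors. A quick sketch of that arithmetic (illustrative only; this
helper is not part of Ceph):

.. code-block:: python

   def quorum_majority(num_monitors: int) -> int:
       """Smallest number of monitors that constitutes a majority."""
       return num_monitors // 2 + 1

   # Matches the examples above: 1 of 1, 2 of 3, 3 of 5, 4 of 6.
   assert [quorum_majority(n) for n in (1, 3, 5, 6)] == [1, 2, 3, 4]

This is also why odd monitor counts are preferred: growing from 3 to 4 monitors
raises the quorum size without increasing the number of failures the cluster
can tolerate.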
``mon_force_quorum_join``

:Description: Force the monitor to join the quorum even if it has previously
              been removed from the monmap.
.. index:: Ceph Monitor; consistency

Consistency
-----------

When you add monitor settings to your Ceph configuration file, you need to be
aware of some of the architectural aspects of Ceph Monitors. **Ceph imposes
strict consistency requirements** for a Ceph monitor when discovering another
Ceph Monitor within the cluster. Whereas Ceph Clients and other Ceph daemons
use the Ceph configuration file to discover monitors, monitors discover each
other using the monitor map (monmap), not the Ceph configuration file.

A Ceph Monitor always refers to the local copy of the monmap when discovering
other Ceph Monitors in the Ceph Storage Cluster. Using the monmap instead of the
Ceph configuration file avoids errors that could break the cluster (e.g., typos
in ``ceph.conf`` when specifying a monitor address or port). Since monitors use
monmaps for discovery and they share monmaps with clients and other Ceph
daemons, **the monmap provides monitors with a strict guarantee that their
consensus is valid.**
Strict consistency also applies to updates to the monmap. As with any other
updates on the Ceph Monitor, changes to the monmap always run through a
distributed consensus algorithm called `Paxos`_. The Ceph Monitors must agree on
each update to the monmap, such as adding or removing a Ceph Monitor, to ensure
that each monitor in the quorum has the same version of the monmap. Updates to
the monmap are incremental so that Ceph Monitors have the latest agreed upon
version, and a set of previous versions. Maintaining a history enables a Ceph
Monitor that has an older version of the monmap to catch up with the current
state of the Ceph Storage Cluster.

If Ceph Monitors were to discover each other through the Ceph configuration file
instead of through the monmap, additional risks would be introduced because
Ceph configuration files are not updated and distributed automatically. Ceph
Monitors might inadvertently use an older Ceph configuration file, fail to
recognize a Ceph Monitor, fall out of a quorum, or develop a situation where
`Paxos`_ is not able to determine the current state of the system accurately.
.. index:: Ceph Monitor; bootstrapping monitors

Bootstrapping Monitors
----------------------

In most configuration and deployment cases, tools that deploy Ceph help
bootstrap the Ceph Monitors by generating a monitor map for you (e.g.,
``cephadm``). A Ceph Monitor requires a few explicit settings:

- **Filesystem ID**: The ``fsid`` is the unique identifier for your
  object store. Since you can run multiple clusters on the same
  hardware, you must specify the unique ID of the object store when
  bootstrapping a monitor. Deployment tools usually do this for you
  (e.g., ``cephadm`` can call a tool like ``uuidgen``), but you
  may specify the ``fsid`` manually too.

- **Monitor ID**: A monitor ID is a unique ID assigned to each monitor within
  the cluster. It is an alphanumeric value, and by convention the identifier
  usually follows an alphabetical increment (e.g., ``a``, ``b``, etc.). This
  can be set in a Ceph configuration file (e.g., ``[mon.a]``, ``[mon.b]``, etc.),
  by a deployment tool, or using the ``ceph`` command line.

- **Keys**: The monitor must have secret keys. A deployment tool such as
  ``cephadm`` usually does this for you, but you may
  perform this step manually too. See `Monitor Keyrings`_ for details.

For additional details on bootstrapping, see `Bootstrapping a Monitor`_.
.. index:: Ceph Monitor; configuring monitors

Configuring Monitors
--------------------

To apply configuration settings to the entire cluster, enter the configuration
settings under ``[global]``. To apply configuration settings to all monitors in
your cluster, enter the configuration settings under ``[mon]``. To apply
configuration settings to specific monitors, specify the monitor instance
(e.g., ``[mon.a]``). By convention, monitor instance names use alpha notation.
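For example, all three scopes can appear in one configuration file. The
addresses and values below are placeholders for illustration, not shipped
defaults:

.. code-block:: ini

   [global]
   # applies to every daemon and client in the cluster
   mon_host = 10.0.0.2,10.0.0.3,10.0.0.4

   [mon]
   # applies to all monitors
   mon_data_avail_warn = 20

   [mon.a]
   # applies only to monitor "a"
   mon_addr = 10.0.0.10:6789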
Minimum Configuration
---------------------

The bare minimum monitor settings for a Ceph monitor via the Ceph configuration
file include a hostname and a network address for each monitor. You can configure
these under ``[mon]`` or under the entry for a specific monitor.

.. code-block:: ini

   [global]
   mon_host = 10.0.0.2,10.0.0.3,10.0.0.4

.. code-block:: ini

   [mon.a]
   host = hostname1
   mon_addr = 10.0.0.10:6789

See the `Network Configuration Reference`_ for details.

.. note:: This minimum configuration for monitors assumes that a deployment
   tool generates the ``fsid`` and the ``mon.`` key for you.

Once you deploy a Ceph cluster, you **SHOULD NOT** change the IP addresses of
monitors. However, if you decide to change the monitor's IP address, you
must follow a specific procedure. See `Changing a Monitor's IP Address`_ for
details.

Monitors can also be found by clients by using DNS SRV records. See `Monitor lookup through DNS`_ for details.
Cluster ID
----------

Each Ceph Storage Cluster has a unique identifier (``fsid``). If specified, it
usually appears under the ``[global]`` section of the configuration file.
Deployment tools usually generate the ``fsid`` and store it in the monitor map,
so the value may not appear in a configuration file. The ``fsid`` makes it
possible to run daemons for multiple clusters on the same hardware.
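The ``fsid`` is an ordinary UUID. A minimal sketch of generating one yourself
(deployment tools do the equivalent for you):

.. code-block:: python

   import uuid

   # A random UUID suitable for use as a cluster fsid; the value is
   # different on every run.
   fsid = str(uuid.uuid4())
   print(fsid)

The resulting value would then appear under ``[global]`` as ``fsid = <uuid>``.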
``fsid``

:Description: The cluster ID. One per cluster.
:Default: N/A. May be generated by a deployment tool if not specified.

.. note:: Do not set this value if you use a deployment tool that does
   it for you.
.. index:: Ceph Monitor; initial members

Initial Members
---------------

We recommend running a production Ceph Storage Cluster with at least three Ceph
Monitors to ensure high availability. When you run multiple monitors, you may
specify the initial monitors that must be members of the cluster in order to
establish a quorum. This may reduce the time it takes for your cluster to come
online.

.. code-block:: ini

   [mon]
   mon_initial_members = a,b,c

``mon_initial_members``

:Description: The IDs of initial monitors in a cluster during startup. If
              specified, Ceph requires an odd number of monitors to form an
              initial quorum (e.g., 3).

.. note:: A *majority* of monitors in your cluster must be able to reach
   each other in order to establish a quorum. You can decrease the initial
   number of monitors to establish a quorum with this setting.
.. index:: Ceph Monitor; data path

Data
----

Ceph provides a default path where Ceph Monitors store data. For optimal
performance in a production Ceph Storage Cluster, we recommend running Ceph
Monitors on separate hosts and drives from Ceph OSD Daemons. As leveldb uses
``mmap()`` for writing the data, Ceph Monitors flush their data from memory to disk
very often, which can interfere with Ceph OSD Daemon workloads if the data
store is co-located with the OSD Daemons.

In Ceph versions 0.58 and earlier, Ceph Monitors store their data in plain files. This
approach allows users to inspect monitor data with common tools like ``ls``
and ``cat``. However, this approach didn't provide strong consistency.

In Ceph versions 0.59 and later, Ceph Monitors store their data as key/value
pairs. Ceph Monitors require `ACID`_ transactions. Using a data store prevents
recovering Ceph Monitors from running corrupted versions through Paxos, and it
enables multiple modification operations in one single atomic batch, among other
advantages.

Generally, we do not recommend changing the default data location. If you modify
the default location, we recommend that you make it uniform across Ceph Monitors
by setting it in the ``[mon]`` section of the configuration file.
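For example, to apply a uniform non-default location (the path below is
hypothetical):

.. code-block:: ini

   [mon]
   # hypothetical non-default path; $cluster and $id are expanded per daemon
   mon_data = /srv/ceph/mon/$cluster-$id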
``mon_data``

:Description: The monitor's data location.
:Default: ``/var/lib/ceph/mon/$cluster-$id``


``mon_data_size_warn``

:Description: Raise ``HEALTH_WARN`` status when a monitor's data
              store grows to be larger than this size, 15GB by default.
:Default: ``15*1024*1024*1024``


``mon_data_avail_warn``

:Description: Raise ``HEALTH_WARN`` status when the filesystem that houses a
              monitor's data store reports that its available capacity is
              less than or equal to this percentage.


``mon_data_avail_crit``

:Description: Raise ``HEALTH_ERR`` status when the filesystem that houses a
              monitor's data store reports that its available capacity is
              less than or equal to this percentage.
``mon_warn_on_cache_pools_without_hit_sets``

:Description: Raise ``HEALTH_WARN`` when a cache pool does not
              have the ``hit_set_type`` value configured.
              See :ref:`hit_set_type <hit_set_type>` for more
              information.


``mon_warn_on_crush_straw_calc_version_zero``

:Description: Raise ``HEALTH_WARN`` when the CRUSH
              ``straw_calc_version`` is zero. See
              :ref:`CRUSH map tunables <crush-map-tunables>` for
              details.


``mon_warn_on_legacy_crush_tunables``

:Description: Raise ``HEALTH_WARN`` when CRUSH tunables are too old
              (older than ``mon_crush_min_required_version``).


``mon_crush_min_required_version``

:Description: The minimum tunable profile required by the cluster. See
              :ref:`CRUSH map tunables <crush-map-tunables>` for
              details.


``mon_warn_on_osd_down_out_interval_zero``

:Description: Raise ``HEALTH_WARN`` when
              ``mon_osd_down_out_interval`` is zero. Having this option set to
              zero on the leader acts much like the ``noout`` flag. Because it
              is hard to diagnose a cluster that behaves as if ``noout`` were
              set when the flag itself is absent, Ceph reports a warning in
              this case.
``mon_warn_on_slow_ping_ratio``

:Description: Raise ``HEALTH_WARN`` when any heartbeat
              between OSDs exceeds ``mon_warn_on_slow_ping_ratio``
              of ``osd_heartbeat_grace``. The default is 5%.


``mon_warn_on_slow_ping_time``

:Description: Override ``mon_warn_on_slow_ping_ratio`` with a specific value.
              Raise ``HEALTH_WARN`` if any heartbeat
              between OSDs exceeds ``mon_warn_on_slow_ping_time``
              milliseconds. The default is 0 (disabled).
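The interaction between these two settings can be sketched as follows. The
``osd_heartbeat_grace`` value of 20 seconds is an assumption for illustration;
check your cluster's actual setting:

.. code-block:: python

   def slow_ping_warn_threshold_ms(grace_s: float = 20.0,
                                   warn_ratio: float = 0.05,
                                   warn_time_ms: float = 0.0) -> float:
       """Heartbeat time (ms) above which a warning would be raised.

       warn_time_ms models mon_warn_on_slow_ping_time, which overrides the
       ratio-based threshold whenever it is non-zero.
       """
       if warn_time_ms > 0:
           return warn_time_ms
       return grace_s * warn_ratio * 1000.0

   assert slow_ping_warn_threshold_ms() == 1000.0              # 5% of 20 s
   assert slow_ping_warn_threshold_ms(warn_time_ms=250.0) == 250.0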
``mon_warn_on_pool_no_redundancy``

:Description: Raise ``HEALTH_WARN`` if any pool is
              configured with no replicas.


``mon_cache_target_full_warn_ratio``

:Description: Position between pool's ``cache_target_full`` and
              ``target_max_object`` where we start warning.


``mon_health_to_clog``

:Description: Enable sending a health summary to the cluster log periodically.


``mon_health_to_clog_tick_interval``

:Description: How often (in seconds) the monitor sends a health summary to the cluster
              log (a non-positive number disables). If the current health summary
              is empty or identical to the previous one, the monitor will not send
              it to the cluster log.


``mon_health_to_clog_interval``

:Description: How often (in seconds) the monitor sends a health summary to the cluster
              log (a non-positive number disables). Monitors will always
              send a summary to the cluster log whether or not it differs from
              the previous summary.
.. index:: Ceph Storage Cluster; capacity planning, Ceph Monitor; capacity planning

.. _storage-capacity:

Storage Capacity
----------------

When a Ceph Storage Cluster gets close to its maximum capacity
(see ``mon_osd_full_ratio``), Ceph prevents you from writing to or reading from OSDs
as a safety measure to prevent data loss. Therefore, letting a
production Ceph Storage Cluster approach its full ratio is not a good practice,
because it sacrifices high availability. The default full ratio is ``.95``, or
95% of capacity. This is a very aggressive setting for a test cluster with a
small number of OSDs.
.. tip:: When monitoring your cluster, be alert to warnings related to the
   ``nearfull`` ratio. A ``nearfull`` warning means that the failure of one
   or more OSDs could result in a temporary service disruption. Consider
   adding more OSDs to increase storage capacity.
A common scenario for test clusters involves a system administrator removing an
OSD from the Ceph Storage Cluster, watching the cluster rebalance, then removing
another OSD, and another, until at least one OSD eventually reaches the full
ratio and the cluster locks up. We recommend a bit of capacity
planning even with a test cluster. Planning enables you to gauge how much spare
capacity you will need in order to maintain high availability. Ideally, you want
to plan for a series of Ceph OSD Daemon failures where the cluster can recover
to an ``active+clean`` state without replacing those OSDs
immediately. Cluster operation continues in the ``active+degraded`` state, but this
is not ideal for normal operation and should be addressed promptly.

The following diagram depicts a simplistic Ceph Storage Cluster containing 33
Ceph Nodes with one OSD per host, each OSD reading from
and writing to a 3TB drive. So this exemplary Ceph Storage Cluster has a maximum
actual capacity of 99TB. With a ``mon_osd_full_ratio`` of ``0.95``, if the Ceph
Storage Cluster falls to 5TB of remaining capacity, the cluster will not allow
Ceph Clients to read and write data. So the Ceph Storage Cluster's operating
capacity is 95TB, not 99TB.
.. ditaa::

 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
 | Rack 1 | | Rack 2 | | Rack 3 | | Rack 4 | | Rack 5 | | Rack 6 |
 | cCCC   | | cF00   | | cCCC   | | cCCC   | | cCCC   | | cCCC   |
 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
 | OSD 1  | | OSD 7  | | OSD 13 | | OSD 19 | | OSD 25 | | OSD 31 |
 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
 | OSD 2  | | OSD 8  | | OSD 14 | | OSD 20 | | OSD 26 | | OSD 32 |
 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
 | OSD 3  | | OSD 9  | | OSD 15 | | OSD 21 | | OSD 27 | | OSD 33 |
 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
 | OSD 4  | | OSD 10 | | OSD 16 | | OSD 22 | | OSD 28 | | Spare  |
 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
 | OSD 5  | | OSD 11 | | OSD 17 | | OSD 23 | | OSD 29 | | Spare  |
 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
 | OSD 6  | | OSD 12 | | OSD 18 | | OSD 24 | | OSD 30 | | Spare  |
 +--------+ +--------+ +--------+ +--------+ +--------+ +--------+
It is normal in such a cluster for one or two OSDs to fail. A less frequent but
reasonable scenario involves a rack's router or power supply failing, which
brings down multiple OSDs simultaneously (e.g., OSDs 7-12). In such a scenario,
you should still strive for a cluster that can remain operational and achieve an
``active + clean`` state--even if that means adding a few hosts with additional
OSDs in short order. If your capacity utilization is too high, you may not lose
data, but you could still sacrifice data availability while resolving an outage
within a failure domain if capacity utilization of the cluster exceeds the full
ratio. For this reason, we recommend at least some rough capacity planning.

Identify two numbers for your cluster:

#. The number of OSDs.
#. The total capacity of the cluster.
If you divide the total capacity of your cluster by the number of OSDs in your
cluster, you will find the mean average capacity of an OSD within your cluster.
Consider multiplying that number by the number of OSDs you expect will fail
simultaneously during normal operations (a relatively small number). Finally
multiply the capacity of the cluster by the full ratio to arrive at a maximum
operating capacity; then, subtract the amount of data on the OSDs you expect
to fail to arrive at a reasonable full ratio. Repeat the foregoing
process with a higher number of OSD failures (e.g., a rack of OSDs) to arrive at
a reasonable number for a near full ratio.
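The procedure above can be sketched numerically. Using the 33-OSD example
cluster (a sketch for illustration; real planning should use your actual OSD
sizes and failure expectations):

.. code-block:: python

   def planning_headroom_tb(total_tb: float, num_osds: int,
                            expected_failures: int, full_ratio: float) -> float:
       """Capacity left after expected failures, following the steps above:
       subtract (mean OSD size * expected failures) from the
       full-ratio-limited operating capacity."""
       mean_osd_tb = total_tb / num_osds            # mean OSD capacity
       operating_tb = total_tb * full_ratio         # capacity cap at full ratio
       return operating_tb - mean_osd_tb * expected_failures

   # 33 OSDs x 3 TB = 99 TB raw; a 0.95 full ratio caps operation near 94 TB.
   # Losing a whole rack (6 OSDs, 18 TB) leaves roughly 76 TB of headroom.
   headroom = planning_headroom_tb(99.0, 33, 6, 0.95)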
The following settings only apply on cluster creation and are then stored in
the OSDMap. To clarify, in normal operation the values that are used by OSDs
are those found in the OSDMap, not those in the configuration file or central
config store.

.. code-block:: ini

   [global]
   mon_osd_full_ratio = .80
   mon_osd_backfillfull_ratio = .75
   mon_osd_nearfull_ratio = .70
``mon_osd_full_ratio``

:Description: The threshold percentage of device space utilized before an OSD is
              considered ``full``.


``mon_osd_backfillfull_ratio``

:Description: The threshold percentage of device space utilized before an OSD is
              considered too ``full`` to backfill.


``mon_osd_nearfull_ratio``

:Description: The threshold percentage of device space used before an OSD is
              considered ``nearfull``.


.. tip:: If some OSDs are nearfull, but others have plenty of capacity, you
   may have an inaccurate CRUSH weight set for the nearfull OSDs.

.. tip:: These settings only apply during cluster creation. Afterwards they need
   to be changed in the OSDMap using ``ceph osd set-nearfull-ratio`` and
   ``ceph osd set-full-ratio``.
Heartbeat
---------

Ceph monitors know about the cluster by requiring reports from each OSD, and by
receiving reports from OSDs about the status of their neighboring OSDs. Ceph
provides reasonable default settings for monitor/OSD interaction; however, you
may modify them as needed. See `Monitor/OSD Interaction`_ for details.
.. index:: Ceph Monitor; leader, Ceph Monitor; provider, Ceph Monitor; requester, Ceph Monitor; synchronization

Monitor Store Synchronization
-----------------------------

When you run a production cluster with multiple monitors (recommended), each
monitor checks to see if a neighboring monitor has a more recent version of the
cluster map (e.g., a map in a neighboring monitor with one or more epoch numbers
higher than the most current epoch in the map of the monitor in question).
Periodically, one monitor in the cluster may fall behind the other monitors to
the point where it must leave the quorum, synchronize to retrieve the most
current information about the cluster, and then rejoin the quorum. For the
purposes of synchronization, monitors may assume one of three roles:

#. **Leader**: The `Leader` is the first monitor to achieve the most recent
   Paxos version of the cluster map.

#. **Provider**: The `Provider` is a monitor that has the most recent version
   of the cluster map, but wasn't the first to achieve the most recent version.

#. **Requester**: A `Requester` is a monitor that has fallen behind the leader
   and must synchronize in order to retrieve the most recent information about
   the cluster before it can rejoin the quorum.

These roles enable a leader to delegate synchronization duties to a provider,
which prevents synchronization requests from overloading the leader--improving
performance. In the following diagram, the requester has learned that it has
fallen behind the other monitors. The requester asks the leader to synchronize,
and the leader tells the requester to synchronize with a provider.
.. ditaa::

 +-----------+         +---------+          +----------+
 | Requester |         | Leader  |          | Provider |
 +-----------+         +---------+          +----------+
       |                    |                    |
       | Ask to Synchronize |                    |
       |------------------->|                    |
       |                    |                    |
       |<-------------------|                    |
       | Tell Requester to  |                    |
       | Sync with Provider |                    |
       |                    |                    |
       |               Synchronize               |
       |--------------------+------------------->|
       |                    |                    |
       |         Send Chunk to Requester         |
       |<-------------------+--------------------|
       |          (repeat as necessary)          |
       |    Requester Acks Chunk to Provider     |
       |--------------------+------------------->|
       |                                         |
       | Sync Complete      |
       | Notify Leader      |
       |------------------->|
       |                    |
       |        OK          |
       |<-------------------|
Synchronization always occurs when a new monitor joins the cluster. During
runtime operations, monitors may receive updates to the cluster map at different
times. This means the leader and provider roles may migrate from one monitor to
another. If this happens while synchronizing (e.g., a provider falls behind the
leader), the provider can terminate synchronization with a requester.

Once synchronization is complete, Ceph performs trimming across the cluster.
Trimming requires that the placement groups are ``active+clean``.


``mon_sync_timeout``

:Description: Number of seconds the monitor will wait for the next update
              message from its sync provider before it gives up and bootstraps
              again.


``mon_sync_max_payload_size``

:Description: The maximum size for a sync payload (in bytes).
:Type: 32-bit Integer
:Default: ``1048576``
``paxos_max_join_drift``

:Description: The maximum Paxos iterations before we must first sync the
              monitor data stores. When a monitor finds that its peer is too
              far ahead of it, it will first sync with the data stores before
              moving on.


``paxos_stash_full_interval``

:Description: How often (in commits) to stash a full copy of the PaxosService state.
              Currently this setting only affects the ``mds``, ``mon``, ``auth``
              and ``mgr`` PaxosServices.


``paxos_propose_interval``

:Description: Gather updates for this time interval before proposing
              a map update.


``paxos_min``

:Description: The minimum number of Paxos states to keep around.


``paxos_min_wait``

:Description: The minimum amount of time to gather updates after a period of
              inactivity.


``paxos_trim_min``

:Description: Number of extra proposals tolerated before trimming.


``paxos_trim_max``

:Description: The maximum number of extra proposals to trim at a time.


``paxos_service_trim_min``

:Description: The minimum amount of versions to trigger a trim (0 disables it).


``paxos_service_trim_max``

:Description: The maximum amount of versions to trim during a single proposal (0 disables it).


``paxos_service_trim_max_multiplier``

:Description: The factor by which ``paxos_service_trim_max`` will be multiplied
              to get a new upper bound when trim sizes are high (0 disables it).


``mon_mds_force_trim_to``

:Description: Force the monitor to trim mdsmaps to this point (0 disables it;
              dangerous, use with care).
``mon_osd_force_trim_to``

:Description: Force the monitor to trim osdmaps to this point, even if there are
              PGs that are not clean at the specified epoch (0 disables it;
              dangerous, use with care).


``mon_osd_cache_size``

:Description: The size of the osdmap cache, so that the monitor does not rely on
              the underlying store's cache.


``mon_election_timeout``

:Description: On election proposer, maximum waiting time for all ACKs in seconds.


``mon_lease``

:Description: The length (in seconds) of the lease on the monitor's versions.


``mon_lease_renew_interval_factor``

:Description: ``mon_lease`` \* ``mon_lease_renew_interval_factor`` will be the
              interval for the Leader to renew the other monitors' leases. The
              factor should be less than ``1.0``.


``mon_lease_ack_timeout_factor``

:Description: The Leader will wait ``mon_lease`` \* ``mon_lease_ack_timeout_factor``
              for the Providers to acknowledge the lease extension.


``mon_accept_timeout_factor``

:Description: The Leader will wait ``mon_lease`` \* ``mon_accept_timeout_factor``
              for the Requester(s) to accept a Paxos update. It is also used
              during the Paxos recovery phase for similar purposes.


``mon_min_osdmap_epochs``

:Description: Minimum number of OSD map epochs to keep at all times.
:Type: 32-bit Integer


``mon_max_log_epochs``

:Description: Maximum number of Log epochs the monitor should keep.
:Type: 32-bit Integer
.. index:: Ceph Monitor; clock

Clock
-----

Ceph daemons pass critical messages to each other, which must be processed
before daemons reach a timeout threshold. If the clocks in Ceph monitors
are not synchronized, it can lead to a number of anomalies. For example:

- Daemons ignoring received messages (e.g., timestamps outdated)
- Timeouts triggered too soon or too late when a message wasn't received in
  time.

See `Monitor Store Synchronization`_ for details.

.. tip:: You must configure NTP or PTP daemons on your Ceph monitor hosts to
   ensure that the monitor cluster operates with synchronized clocks.
   It can be advantageous to have monitor hosts sync with each other
   as well as with multiple quality upstream time sources.
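A minimal sketch of such a time configuration, assuming the ``chrony`` daemon
and hypothetical hostnames:

.. code-block:: ini

   # /etc/chrony.conf on a monitor host (all hostnames are placeholders)
   server ntp1.example.com iburst
   server ntp2.example.com iburst
   # optionally also peer with the other monitor hosts
   peer mon-b.example.com
   peer mon-c.example.com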
Clock drift may still be noticeable with NTP even though the discrepancy is not
yet harmful. Ceph's clock drift / clock skew warnings may get triggered even
though NTP maintains a reasonable level of synchronization. Increasing your
allowed clock drift may be tolerable under such circumstances; however, a number
of factors such as workload, network latency, configuring overrides to default
timeouts and the `Monitor Store Synchronization`_ settings may influence
the level of acceptable clock drift without compromising Paxos guarantees.

Ceph provides the following tunable options to allow you to find
acceptable values:


``mon_tick_interval``

:Description: A monitor's tick interval in seconds.
:Type: 32-bit Integer


``mon_clock_drift_allowed``

:Description: The clock drift in seconds allowed between monitors.


``mon_clock_drift_warn_backoff``

:Description: Exponential backoff for clock drift warnings.


``mon_timecheck_interval``

:Description: The time check interval (clock drift check) in seconds.


``mon_timecheck_skew_interval``

:Description: The time check interval (clock drift check) in seconds, used by
              the Leader when a clock skew is present.
Client
------

``mon_client_hunt_interval``

:Description: The client will try a new monitor every ``N`` seconds until it
              establishes a connection.


``mon_client_ping_interval``

:Description: The client will ping the monitor every ``N`` seconds.


``mon_client_max_log_entries_per_message``

:Description: The maximum number of log entries a monitor will generate
              per client message.


``mon_client_bytes``

:Description: The amount of client message data allowed in memory (in bytes).
:Type: 64-bit Integer Unsigned
:Default: ``100ul << 20``
Pool settings
-------------

Since version v0.94 there has been support for pool flags, which allow or
disallow changes to be made to pools. Monitors can also disallow removal of
pools if appropriately configured. The inconvenience of this guardrail is far
outweighed by the number of accidental pool (and thus data) deletions it
prevents.
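For example, a defensive combination of the settings below (the values shown
are illustrative choices, not necessarily the shipped defaults):

.. code-block:: ini

   [global]
   # monitors refuse pool removal regardless of per-pool flags
   mon_allow_pool_delete = false
   # newly created pools get the nodelete flag set
   osd_pool_default_flag_nodelete = true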
``mon_allow_pool_delete``

:Description: Should monitors allow pools to be removed, regardless of what the
              pool flags say?


``osd_pool_default_ec_fast_read``

:Description: Whether to turn on fast read on the pool or not. It will be used as
              the default setting of newly created erasure coded pools if ``fast_read``
              is not specified at create time.


``osd_pool_default_flag_hashpspool``

:Description: Set the ``hashpspool`` flag on new pools.


``osd_pool_default_flag_nodelete``

:Description: Set the ``nodelete`` flag on new pools, which prevents pool removal.


``osd_pool_default_flag_nopgchange``

:Description: Set the ``nopgchange`` flag on new pools. Does not allow the number of PGs to be changed.


``osd_pool_default_flag_nosizechange``

:Description: Set the ``nosizechange`` flag on new pools. Does not allow the ``size`` to be changed.


For more information about the pool flags see `Pool values`_.
1027 :Description: The maximum number of OSDs allowed in the cluster.
1028 :Type: 32-bit Integer
1032 ``mon_globalid_prealloc``
1034 :Description: The number of global IDs to pre-allocate for clients and daemons in the cluster.
1035 :Type: 32-bit Integer
1039 ``mon_subscribe_interval``
1041 :Description: The refresh interval (in seconds) for subscriptions. The
1042 subscription mechanism enables obtaining cluster maps
1043 and log information.
1046 :Default: ``86400.00``
``mon_stat_smooth_intervals``

:Description: Ceph will smooth statistics over the last ``N`` PG maps.
``mon_probe_timeout``

:Description: Number of seconds the monitor will wait to find peers before
              bootstrapping.
``mon_daemon_bytes``

:Description: The message memory cap for metadata server and OSD messages (in bytes).
:Type: 64-bit Integer Unsigned
:Default: ``400ul << 20``
``mon_max_log_entries_per_event``

:Description: The maximum number of log entries per event.
``mon_osd_prime_pg_temp``

:Description: Enables or disables priming the PGMap with the previous OSDs when
              an ``out`` OSD comes back into the cluster. With the ``true``
              setting, clients will continue to use the previous OSDs until the
              newly ``in`` OSDs for a PG have peered.
``mon_osd_prime_pg_temp_max_time``

:Description: How much time in seconds the monitor should spend trying to prime
              the PGMap when an ``out`` OSD comes back into the cluster.
``mon_osd_prime_pg_temp_max_time_estimate``

:Description: Maximum estimate of time spent on each PG before we prime all PGs
              in parallel.
``mon_mds_skip_sanity``

:Description: Skip safety assertions on the FSMap (in case of bugs where we
              want to continue anyway). The monitor terminates if the FSMap
              sanity check fails, but the check can be disabled by enabling
              this option.
``mon_max_mdsmap_epochs``

:Description: The maximum number of mdsmap epochs to trim during a single proposal.
``mon_config_key_max_entry_size``

:Description: The maximum size of a config-key entry (in bytes).
``mon_scrub_interval``

:Description: How often the monitor scrubs its store by comparing the stored
              checksums with the computed ones for all stored keys. (``0``
              disables it. Dangerous; use with care.)
``mon_scrub_max_keys``

:Description: The maximum number of keys to scrub each time.
``mon_compact_on_start``

:Description: Compact the database used as the Ceph Monitor store on
              ``ceph-mon`` start. A manual compaction helps to shrink the
              monitor database and improve its performance if the regular
              compaction fails to work.
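For example, to request a compaction on the next daemon start (an illustrative
``ceph.conf`` fragment; the option can be removed again once the store has
shrunk):

.. code-block:: ini

    # Illustrative: compact the monitor store each time ceph-mon starts.
    [mon]
    mon_compact_on_start = true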
``mon_compact_on_bootstrap``

:Description: Compact the database used as the Ceph Monitor store on
              bootstrap. Monitors probe each other to establish a quorum after
              bootstrap. If a monitor times out before joining the quorum, it
              will start over and bootstrap again.
``mon_compact_on_trim``

:Description: Compact a certain prefix (including paxos) when we trim its old states.
``mon_cpu_threads``

:Description: The number of threads used for performing CPU-intensive work on
              the monitor.
``mon_osd_mapping_pgs_per_chunk``

:Description: We calculate the mapping from placement groups to OSDs in chunks.
              This option specifies the number of placement groups per chunk.
``mon_session_timeout``

:Description: The monitor will terminate inactive sessions that stay idle for
              longer than this time limit (in seconds).
``mon_osd_cache_size_min``

:Description: The minimum amount of bytes to be kept mapped in memory for OSD
              monitor caches.
:Type: 64-bit Integer
:Default: ``134217728``
``mon_memory_target``

:Description: The number of bytes pertaining to OSD monitor caches and KV cache
              to be kept mapped in memory with cache auto-tuning enabled.
:Type: 64-bit Integer
:Default: ``2147483648``
``mon_memory_autotune``

:Description: Autotune the cache memory used for OSD monitors and the KV
              database.
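A sketch of how the two memory options above interact (the values are
illustrative, not recommendations):

.. code-block:: ini

    # Illustrative: let the monitor auto-tune its caches toward a
    # 4 GiB overall target instead of the 2 GiB default.
    [mon]
    mon_memory_autotune = true
    mon_memory_target = 4294967296   # 4 GiB, in bytes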
.. _Paxos: https://en.wikipedia.org/wiki/Paxos_(computer_science)
.. _Monitor Keyrings: ../../../dev/mon-bootstrap#secret-keys
.. _Ceph configuration file: ../ceph-conf/#monitors
.. _Network Configuration Reference: ../network-config-ref
.. _Monitor lookup through DNS: ../mon-lookup-dns
.. _ACID: https://en.wikipedia.org/wiki/ACID
.. _Adding/Removing a Monitor: ../../operations/add-or-rm-mons
.. _Monitoring a Cluster: ../../operations/monitoring
.. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg
.. _Bootstrapping a Monitor: ../../../dev/mon-bootstrap
.. _Changing a Monitor's IP Address: ../../operations/add-or-rm-mons#changing-a-monitor-s-ip-address
.. _Monitor/OSD Interaction: ../mon-osd-interaction
.. _Scalability and High Availability: ../../../architecture#scalability-and-high-availability
.. _Pool values: ../../operations/pools/#set-pool-values