When you start the Ceph service, the initialization process activates a series
of daemons that run in the background. A :term:`Ceph Storage Cluster` runs
two types of daemons:

- :term:`Ceph Monitor` (``ceph-mon``)
- :term:`Ceph OSD Daemon` (``ceph-osd``)

Ceph Storage Clusters that support the :term:`Ceph Filesystem` run at least one
:term:`Ceph Metadata Server` (``ceph-mds``). Clusters that support :term:`Ceph
Object Storage` run Ceph Gateway daemons (``radosgw``). For your convenience,
each daemon has a series of default values (*i.e.*, many are set by
``ceph/src/common/config_opts.h``). You may override these settings with a Ceph
configuration file.
The Configuration File
======================
When you start a Ceph Storage Cluster, each daemon looks for a Ceph
configuration file (i.e., ``ceph.conf`` by default) that provides the cluster's
configuration settings. For manual deployments, you need to create a Ceph
configuration file. For tools that create configuration files for you (*e.g.*,
``ceph-deploy``, Chef, etc.), you may use the information contained herein as a
reference. The Ceph configuration file defines:

- Authentication settings
- Other runtime options
The default Ceph configuration file locations, in sequential order, include:

#. ``$CEPH_CONF`` (*i.e.,* the path following the ``$CEPH_CONF``
   environment variable)
#. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
#. ``/etc/ceph/ceph.conf``
#. ``~/.ceph/config``
#. ``./ceph.conf`` (*i.e.,* in the current working directory)
The Ceph configuration file uses an *ini* style syntax. You can add comments
by preceding them with a pound sign (#) or a semi-colon (;). For example::

    # <--A number (#) sign precedes a comment.
    ; A comment may be anything.
    # Comments always follow a semi-colon (;) or a pound (#) on each line.
    # The end of the line terminates a comment.
    # We recommend that you provide comments in your configuration file(s).
.. _ceph-conf-settings:

Configuration Sections
======================
The configuration file can configure all Ceph daemons in a Ceph Storage Cluster,
or all Ceph daemons of a particular type. To configure a series of daemons, the
settings must be included under the processes that will receive the
configuration as follows:
``[global]``

:Description: Settings under ``[global]`` affect all daemons in a Ceph Storage
              Cluster.

:Example: ``auth supported = cephx``

``[osd]``

:Description: Settings under ``[osd]`` affect all ``ceph-osd`` daemons in
              the Ceph Storage Cluster, and override the same setting in
              ``[global]``.

:Example: ``osd journal size = 1000``

``[mon]``

:Description: Settings under ``[mon]`` affect all ``ceph-mon`` daemons in
              the Ceph Storage Cluster, and override the same setting in
              ``[global]``.

:Example: ``mon addr = 10.0.0.101:6789``

``[mds]``

:Description: Settings under ``[mds]`` affect all ``ceph-mds`` daemons in
              the Ceph Storage Cluster, and override the same setting in
              ``[global]``.

:Example: ``host = myserver01``

``[client]``

:Description: Settings under ``[client]`` affect all Ceph Clients
              (e.g., mounted Ceph Filesystems, mounted Ceph Block Devices,
              etc.).

:Example: ``log file = /var/log/ceph/radosgw.log``
Global settings affect all instances of all daemons in the Ceph Storage Cluster.
Use the ``[global]`` setting for values that are common for all daemons in the
Ceph Storage Cluster. You can override each ``[global]`` setting by:

#. Changing the setting in a particular process type
   (*e.g.,* ``[osd]``, ``[mon]``, ``[mds]``).

#. Changing the setting in a particular process (*e.g.,* ``[osd.1]``).

Overriding a global setting affects all child processes, except those that
you specifically override in a particular daemon.
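The override order described above can be sketched in ``ceph.conf`` terms; the
option name and values below are illustrative only:

```ini
[global]
# Applies to every daemon unless overridden below.
debug ms = 0

[osd]
# Overrides [global] for every ceph-osd daemon.
debug ms = 1

[osd.1]
# Overrides [osd] for osd.1 only; other OSDs still use the [osd] value.
debug ms = 5
```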
A typical global setting involves activating authentication. For example::

    [global]
    #Enable authentication between hosts within the cluster.
    #v 0.54 and earlier
    auth supported = cephx

    #v 0.55 and after
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx
You can specify settings that apply to a particular type of daemon. When you
specify settings under ``[osd]``, ``[mon]`` or ``[mds]`` without specifying a
particular instance, the setting will apply to all OSDs, monitors or metadata
daemons respectively.
A typical daemon-wide setting involves setting journal sizes, filestore
settings, etc. For example::

    [osd]
    osd journal size = 1000
You may specify settings for particular instances of a daemon. You may specify
an instance by entering its type, delimited by a period (.) and by the instance
ID. The instance ID for a Ceph OSD Daemon is always numeric, but it may be
alphanumeric for Ceph Monitors and Ceph Metadata Servers. ::

    [osd.1]
    # settings affect osd.1 only.

    [mon.a]
    # settings affect mon.a only.

    [mds.b]
    # settings affect mds.b only.
If the daemon you specify is a Ceph Gateway client, specify the daemon and the
instance, delimited by a period (.). For example::

    [client.radosgw.instance-name]
    # settings affect client.radosgw.instance-name only.
.. _ceph-metavariables:

Metavariables
=============
Metavariables simplify Ceph Storage Cluster configuration dramatically. When a
metavariable is set in a configuration value, Ceph expands the metavariable into
a concrete value. Metavariables are very powerful when used within the
``[global]``, ``[osd]``, ``[mon]``, ``[mds]`` or ``[client]`` sections of your
configuration file. Ceph metavariables are similar to Bash shell expansion.

Ceph supports the following metavariables:
``$cluster``

:Description: Expands to the Ceph Storage Cluster name. Useful when running
              multiple Ceph Storage Clusters on the same hardware.

:Example: ``/etc/ceph/$cluster.keyring``

``$type``

:Description: Expands to one of ``mds``, ``osd``, or ``mon``, depending on the
              type of the instant daemon.

:Example: ``/var/lib/ceph/$type``

``$id``

:Description: Expands to the daemon identifier. For ``osd.0``, this would be
              ``0``; for ``mds.a``, it would be ``a``.

:Example: ``/var/lib/ceph/$type/$cluster-$id``

``$host``

:Description: Expands to the host name of the instant daemon.

``$name``

:Description: Expands to ``$type.$id``.
:Example: ``/var/run/ceph/$cluster-$name.asok``

``$pid``

:Description: Expands to daemon pid.
:Example: ``/var/run/ceph/$cluster-$name-$pid.asok``
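A minimal sketch of how this expansion behaves. The helper below is purely
illustrative, not Ceph's actual implementation:

```python
# Hypothetical illustration of Ceph-style metavariable expansion;
# not Ceph's actual code.
def expand_metavariables(value, cluster, daemon_type, daemon_id):
    """Expand $cluster, $type, $id, and $name in a config value."""
    substitutions = {
        "$name": f"{daemon_type}.{daemon_id}",  # replace $name before shorter variables
        "$cluster": cluster,
        "$type": daemon_type,
        "$id": str(daemon_id),
    }
    for var, concrete in substitutions.items():
        value = value.replace(var, concrete)
    return value

# For osd.0 in a cluster named "ceph":
print(expand_metavariables("/var/lib/ceph/$type/$cluster-$id", "ceph", "osd", 0))
# /var/lib/ceph/osd/ceph-0
```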
.. _ceph-conf-common-settings:

Common Settings
===============
The `Hardware Recommendations`_ section provides some hardware guidelines for
configuring a Ceph Storage Cluster. It is possible for a single :term:`Ceph
Node` to run multiple daemons. For example, a single node with multiple drives
may run one ``ceph-osd`` for each drive. Ideally, you will have a node for a
particular type of process. For example, some nodes may run ``ceph-osd``
daemons, other nodes may run ``ceph-mds`` daemons, and still other nodes may
run ``ceph-mon`` daemons.
Each node has a name identified by the ``host`` setting. Monitors also specify
a network address and port (i.e., domain name or IP address) identified by the
``addr`` setting. A basic configuration file will typically specify only
minimal settings for each instance of monitor daemons. For example::

    [global]
    mon_initial_members = ceph1
    mon_host = 10.0.0.1
.. important:: The ``host`` setting is the short name of the node (i.e., not
   an FQDN). It is **NOT** an IP address either. Enter ``hostname -s`` on
   the command line to retrieve the name of the node. Do not use ``host``
   settings for anything other than initial monitors unless you are deploying
   Ceph manually. You **MUST NOT** specify ``host`` under individual daemons
   when using deployment tools like ``chef`` or ``ceph-deploy``, as those tools
   will enter the appropriate values for you in the cluster map.
.. _ceph-network-config:

Networks
========

See the `Network Configuration Reference`_ for a detailed discussion about
configuring a network for use with Ceph.
Monitors
========

Ceph production clusters typically deploy with a minimum of three :term:`Ceph
Monitor` daemons to ensure high availability should a monitor instance crash.
At least three (3) monitors ensure that the Paxos algorithm can determine which
version of the :term:`Ceph Cluster Map` is the most recent from a majority of
Ceph Monitors in the quorum.

.. note:: You may deploy Ceph with a single monitor, but if the instance fails,
   the lack of other monitors may interrupt data service availability.

Ceph Monitors typically listen on port ``6789``. For example::

    [mon.a]
    host = hostName
    mon addr = 150.140.130.120:6789
By default, Ceph expects that you will store a monitor's data under the
following path::

    /var/lib/ceph/mon/$cluster-$id

You or a deployment tool (e.g., ``ceph-deploy``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", the
foregoing directory would evaluate to::

    /var/lib/ceph/mon/ceph-a

For additional details, see the `Monitor Config Reference`_.

.. _Monitor Config Reference: ../mon-config-ref
Authentication
==============

.. versionadded:: Bobtail 0.56

For Bobtail (v 0.56) and beyond, you should expressly enable or disable
authentication in the ``[global]`` section of your Ceph configuration file. ::

    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx

Additionally, you should enable message signing. See `Cephx Config Reference`_
for details.

.. important:: When upgrading, we recommend expressly disabling authentication
   first, then performing the upgrade. Once the upgrade is complete, re-enable
   authentication.

.. _Cephx Config Reference: ../auth-config-ref
.. _ceph-monitor-config:

OSDs
====
Ceph production clusters typically deploy :term:`Ceph OSD Daemons` where one node
has one OSD daemon running a filestore on one storage drive. A typical
deployment specifies a journal size. For example::

    [osd]
    osd journal size = 10000

    [osd.0]
    host = {hostname} #manual deployments only.
By default, Ceph expects that you will store a Ceph OSD Daemon's data under the
following path::

    /var/lib/ceph/osd/$cluster-$id

You or a deployment tool (e.g., ``ceph-deploy``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", the
foregoing directory would evaluate to::

    /var/lib/ceph/osd/ceph-0

You may override this path using the ``osd data`` setting. We don't recommend
changing the default location. Create the default directory on your OSD host. ::

    ssh {osd-host}
    sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}
The ``osd data`` path ideally leads to a mount point with a hard disk that is
separate from the hard disk storing and running the operating system and
daemons. If the OSD is for a disk other than the OS disk, prepare it for
use with Ceph, and mount it to the directory you just created::

    ssh {new-osd-host}
    sudo mkfs -t {fstype} /dev/{disk}
    sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number}

We recommend using the ``xfs`` file system when running
:command:`mkfs`. (``btrfs`` and ``ext4`` are not recommended and no
longer supported.)

See the `OSD Config Reference`_ for additional configuration details.
Heartbeats
==========

During runtime operations, Ceph OSD Daemons check up on other Ceph OSD Daemons
and report their findings to the Ceph Monitor. You do not have to provide any
settings. However, if you have network latency issues, you may wish to modify
the default settings.

See `Configuring Monitor/OSD Interaction`_ for additional details.
.. _ceph-logging-and-debugging:

Logs / Debugging
================

Sometimes you may encounter issues with Ceph that require you to modify
logging output and use Ceph's debugging features. See `Debugging and
Logging`_ for details on log rotation.

.. _Debugging and Logging: ../../troubleshooting/log-and-debug
Example ceph.conf
=================

.. literalinclude:: demo-ceph.conf
   :language: ini
.. _ceph-runtime-config:

Runtime Changes
===============

Ceph allows you to make changes to the configuration of a ``ceph-osd``,
``ceph-mon``, or ``ceph-mds`` daemon at runtime. This capability is quite
useful for increasing/decreasing logging output, enabling/disabling debug
settings, and even for runtime optimization. The following reflects runtime
configuration usage::
    ceph tell {daemon-type}.{id or *} injectargs --{name} {value} [--{name} {value}]

Replace ``{daemon-type}`` with one of ``osd``, ``mon`` or ``mds``. You may apply
the runtime setting to all daemons of a particular type with ``*``, or specify
a specific daemon's ID (i.e., its number or letter). For example, to increase
debug logging for a ``ceph-osd`` daemon named ``osd.0``, execute the following::

    ceph tell osd.0 injectargs --debug-osd 20 --debug-ms 1

In your ``ceph.conf`` file, you may use spaces when specifying a
setting name. When specifying a setting name on the command line,
ensure that you use an underscore or hyphen (``_`` or ``-``) between
terms (e.g., ``debug osd`` becomes ``--debug-osd``).
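The name normalization described above can be sketched with a small helper;
this function is illustrative and not part of Ceph:

```python
# Illustrative helper (not part of Ceph): turn a ceph.conf-style
# setting name into its command-line flag form.
def to_cli_flag(setting_name):
    """'debug osd' -> '--debug-osd'; spaces and underscores become hyphens."""
    return "--" + setting_name.strip().replace(" ", "-").replace("_", "-")

print(to_cli_flag("debug osd"))         # --debug-osd
print(to_cli_flag("osd_journal_size"))  # --osd-journal-size
```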
Viewing a Configuration at Runtime
==================================

If your Ceph Storage Cluster is running, and you would like to see the
configuration settings from a running daemon, execute the following::

    ceph daemon {daemon-type}.{id} config show | less

If you are on a machine where ``osd.0`` is running, the command would be::

    ceph daemon osd.0 config show | less
Reading Configuration Metadata at Runtime
=========================================

Information about the available configuration options is available via
the ``config help`` command::

    ceph daemon {daemon-type}.{id} config help | less

This metadata is primarily intended to be used when integrating other
software with Ceph, such as graphical user interfaces. The output is
a list of JSON objects, for example::
    {
        "name": "mon_host",
        "type": "std::string",
        "level": "basic",
        "desc": "list of hosts or addresses to search for a monitor",
        "long_desc": "This is a comma, whitespace, or semicolon separated list of IP addresses or hostnames. Hostnames are resolved via DNS and all A or AAAA records are included in the search list.",
        "default": "",
        "daemon_default": "",
        "tags": [],
        "services": [
            "common"
        ],
        "see_also": [],
        "enum_values": [],
        "min": "",
        "max": ""
    }
``type``
    The type of the setting, given as a C++ type name.

``level``
    One of `basic`, `advanced`, `dev`. The `dev` options are not intended
    for use outside of development and testing.

``desc``
    A short description -- this is a sentence fragment suitable for display
    in small spaces like a single line in a list.

``long_desc``
    A full description of what the setting does; this may be as long as needed.

``default``
    The default value, if any.

``daemon_default``
    An alternative default used for daemons (services) as opposed to clients.

``tags``
    A list of strings indicating topics to which this setting relates. Examples
    of tags are `performance` and `networking`.

``services``
    A list of strings indicating which Ceph services the setting relates to, such
    as `osd`, `mds`, `mon`. For settings that are relevant to any Ceph client
    or server, `common` is used.

``see_also``
    A list of strings indicating other configuration options that may also
    be of interest to a user setting this option.

``enum_values``
    Optional: a list of strings indicating the valid settings.

``min``, ``max``
    Optional: upper and lower (inclusive) bounds on valid settings.
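A hedged sketch of consuming this metadata from another program. The JSON shape
assumed here follows the example output above (a list of option objects); the
filtering helper is hypothetical, not a Ceph API:

```python
import json

# A record shaped like one entry of `config help` output (assumed shape,
# following the example above; real output is a list of such objects).
sample_output = json.loads("""
[{"name": "mon_host",
  "type": "std::string",
  "level": "basic",
  "desc": "list of hosts or addresses to search for a monitor",
  "services": ["common"],
  "tags": []}]
""")

def options_at_level(options, level):
    """Return the names of config options whose 'level' field matches."""
    return [opt["name"] for opt in options if opt.get("level") == level]

print(options_at_level(sample_output, "basic"))  # ['mon_host']
print(options_at_level(sample_output, "dev"))    # []
```

A GUI might use this, for instance, to hide `dev`-level options from end users.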
Running Multiple Clusters
=========================

With Ceph, you can run multiple Ceph Storage Clusters on the same hardware.
Running multiple clusters provides a higher level of isolation compared to
using different pools on the same cluster with different CRUSH rulesets. A
separate cluster will have separate monitor, OSD and metadata server processes.
When running Ceph with default settings, the default cluster name is ``ceph``,
which means you would save your Ceph configuration file with the file name
``ceph.conf`` in the ``/etc/ceph`` default directory.
See `ceph-deploy new`_ for details.
When you run multiple clusters, you must name your cluster and save the Ceph
configuration file with the name of the cluster. For example, a cluster named
``openstack`` will have a Ceph configuration file with the file name
``openstack.conf`` in the ``/etc/ceph`` default directory.

.. important:: Cluster names must consist of letters a-z and digits 0-9 only.
Separate clusters imply separate data disks and journals, which are not shared
between clusters. Referring to `Metavariables`_, the ``$cluster`` metavariable
evaluates to the cluster name (i.e., ``openstack`` in the foregoing example).
Various settings use the ``$cluster`` metavariable, including:

- ``mon cluster log file``

See `General Settings`_, `OSD Settings`_, `Monitor Settings`_, `MDS Settings`_,
`RGW Settings`_ and `Log Settings`_ for relevant path defaults that use the
``$cluster`` metavariable.
.. _General Settings: ../general-config-ref
.. _OSD Settings: ../osd-config-ref
.. _Monitor Settings: ../mon-config-ref
.. _MDS Settings: ../../../cephfs/mds-config-ref
.. _RGW Settings: ../../../radosgw/config-ref/
.. _Log Settings: ../../troubleshooting/log-and-debug
When creating default directories or files, you should use the cluster
name at the appropriate places in the path. For example::

    sudo mkdir /var/lib/ceph/osd/openstack-0
    sudo mkdir /var/lib/ceph/mon/openstack-a
.. important:: When running monitors on the same host, you should use
   different ports. By default, monitors use port 6789. If you already
   have monitors using port 6789, use a different port for your other
   cluster(s).
To invoke a cluster other than the default ``ceph`` cluster, use the
``-c {cluster-name}.conf`` option with the ``ceph`` command. For example::

    ceph -c {cluster-name}.conf health
    ceph -c openstack.conf health
.. _Hardware Recommendations: ../../../start/hardware-recommendations
.. _Network Configuration Reference: ../network-config-ref
.. _OSD Config Reference: ../osd-config-ref
.. _Configuring Monitor/OSD Interaction: ../mon-osd-interaction
.. _ceph-deploy new: ../../deployment/ceph-deploy-new#naming-a-cluster