==================
 Configuring Ceph
==================

When you start the Ceph service, the initialization process activates a series
of daemons that run in the background. A :term:`Ceph Storage Cluster` runs
two types of daemons:

- :term:`Ceph Monitor` (``ceph-mon``)
- :term:`Ceph OSD Daemon` (``ceph-osd``)

Ceph Storage Clusters that support the :term:`Ceph Filesystem` run at least one
:term:`Ceph Metadata Server` (``ceph-mds``). Clusters that support :term:`Ceph
Object Storage` run Ceph Gateway daemons (``radosgw``). Each daemon has a
series of default values (many are set by ``ceph/src/common/config_opts.h``).
You may override these settings with a Ceph configuration file.


.. _ceph-conf-file:

The Configuration File
======================

When you start a Ceph Storage Cluster, each daemon looks for a Ceph
configuration file (``ceph.conf`` by default) that provides the cluster's
configuration settings. For manual deployments, you need to create the Ceph
configuration file yourself. For tools that create the configuration file for
you (e.g., ``ceph-deploy``, Chef), you may use the information contained
herein as a reference. The Ceph configuration file defines:

- Cluster identity
- Authentication settings
- Cluster membership
- Host names
- Host addresses
- Paths to keyrings
- Paths to journals
- Paths to data
- Other runtime options

Ceph looks for a configuration file in the following locations, in sequential
order:

#. ``$CEPH_CONF`` (i.e., the path following the ``$CEPH_CONF``
   environment variable)
#. ``-c path/path`` (i.e., the path following the ``-c`` command line argument)
#. ``/etc/ceph/ceph.conf``
#. ``~/.ceph/config``
#. ``./ceph.conf`` (i.e., in the current working directory)


The Ceph configuration file uses an *ini* style syntax. You can add comments
by preceding them with a pound sign (#) or a semi-colon (;). For example:

.. code-block:: ini

    # <--A pound (#) sign precedes a comment.
    ; A comment may be anything.
    # Comments always follow a semi-colon (;) or a pound (#) on each line.
    # The end of the line terminates a comment.
    # We recommend that you provide comments in your configuration file(s).


.. _ceph-conf-settings:

Config Sections
===============

The configuration file can configure all Ceph daemons in a Ceph Storage
Cluster, or all Ceph daemons of a particular type. To configure a series of
daemons, the settings must be included under the sections that apply to the
processes that will receive the configuration, as follows:

``[global]``

:Description: Settings under ``[global]`` affect all daemons in a Ceph Storage
              Cluster.

:Example: ``auth supported = cephx``

``[osd]``

:Description: Settings under ``[osd]`` affect all ``ceph-osd`` daemons in
              the Ceph Storage Cluster, and override the same setting in
              ``[global]``.

:Example: ``osd journal size = 1000``

``[mon]``

:Description: Settings under ``[mon]`` affect all ``ceph-mon`` daemons in
              the Ceph Storage Cluster, and override the same setting in
              ``[global]``.

:Example: ``mon addr = 10.0.0.101:6789``


``[mds]``

:Description: Settings under ``[mds]`` affect all ``ceph-mds`` daemons in
              the Ceph Storage Cluster, and override the same setting in
              ``[global]``.

:Example: ``host = myserver01``

``[client]``

:Description: Settings under ``[client]`` affect all Ceph Clients
              (e.g., mounted Ceph Filesystems, mounted Ceph Block Devices,
              etc.).

:Example: ``log file = /var/log/ceph/radosgw.log``


Global settings affect every instance of every daemon in the Ceph Storage
Cluster. Use the ``[global]`` section for values that are common to all
daemons in the Ceph Storage Cluster. You can override each ``[global]``
setting by:

#. Changing the setting for a particular process type
   (e.g., ``[osd]``, ``[mon]``, ``[mds]``).

#. Changing the setting for a particular process (e.g., ``[osd.1]``).

Overriding a global setting affects all child processes, except those that
you specifically override in a particular daemon.
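
A minimal sketch of this precedence, using illustrative values for the
``debug osd`` logging level:

.. code-block:: ini

    [global]
    debug osd = 0

    [osd]
    # All OSDs log at level 5, overriding [global].
    debug osd = 5

    [osd.1]
    # osd.1 alone logs at level 20, overriding [osd].
    debug osd = 20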

A typical global setting involves activating authentication. For example:

.. code-block:: ini

    [global]
    # Enable authentication between hosts within the cluster.
    # v 0.54 and earlier
    auth supported = cephx

    # v 0.55 and after
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx


You can specify settings that apply to a particular type of daemon. When you
specify settings under ``[osd]``, ``[mon]`` or ``[mds]`` without specifying a
particular instance, the setting will apply to all OSDs, monitors or metadata
servers respectively.

A typical daemon-wide setting involves setting journal sizes, filestore
settings, etc. For example:

.. code-block:: ini

    [osd]
    osd journal size = 1000


You may specify settings for particular instances of a daemon. You may specify
an instance by entering its type and instance ID, delimited by a period (.).
The instance ID for a Ceph OSD Daemon is always numeric, but it may be
alphanumeric for Ceph Monitors and Ceph Metadata Servers.

.. code-block:: ini

    [osd.1]
    # settings affect osd.1 only.

    [mon.a]
    # settings affect mon.a only.

    [mds.b]
    # settings affect mds.b only.


If the daemon you specify is a Ceph Gateway client, specify the daemon and the
instance, delimited by a period (.). For example::

    [client.radosgw.instance-name]
    # settings affect client.radosgw.instance-name only.


.. _ceph-metavariables:

Metavariables
=============

Metavariables simplify Ceph Storage Cluster configuration dramatically. When a
metavariable is set in a configuration value, Ceph expands the metavariable
into a concrete value. Metavariables are very powerful when used within the
``[global]``, ``[osd]``, ``[mon]``, ``[mds]`` or ``[client]`` sections of your
configuration file. Ceph metavariables are similar to Bash shell expansion.

Ceph supports the following metavariables:


``$cluster``

:Description: Expands to the Ceph Storage Cluster name. Useful when running
              multiple Ceph Storage Clusters on the same hardware.

:Example: ``/etc/ceph/$cluster.keyring``
:Default: ``ceph``


``$type``

:Description: Expands to one of ``mds``, ``osd``, or ``mon``, depending on the
              type of the daemon at hand.

:Example: ``/var/lib/ceph/$type``


``$id``

:Description: Expands to the daemon identifier. For ``osd.0``, this would be
              ``0``; for ``mds.a``, it would be ``a``.

:Example: ``/var/lib/ceph/$type/$cluster-$id``


``$host``

:Description: Expands to the host name of the daemon at hand.


``$name``

:Description: Expands to ``$type.$id``.
:Example: ``/var/run/ceph/$cluster-$name.asok``

``$pid``

:Description: Expands to the daemon's PID.
:Example: ``/var/run/ceph/$cluster-$name-$pid.asok``

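As a sketch of how expansion works, for a daemon ``osd.0`` in the default
``ceph`` cluster, a setting such as:

.. code-block:: ini

    [osd]
    admin socket = /var/run/ceph/$cluster-$name.asok

would expand to ``/var/run/ceph/ceph-osd.0.asok`` for that daemon.
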
.. _ceph-conf-common-settings:

Common Settings
===============

The `Hardware Recommendations`_ section provides some hardware guidelines for
configuring a Ceph Storage Cluster. It is possible for a single :term:`Ceph
Node` to run multiple daemons. For example, a single node with multiple drives
may run one ``ceph-osd`` for each drive. Ideally, you will dedicate a node to
a particular type of process. For example, some nodes may run ``ceph-osd``
daemons, other nodes may run ``ceph-mds`` daemons, and still other nodes may
run ``ceph-mon`` daemons.

Each node has a name identified by the ``host`` setting. Monitors also specify
a network address and port (i.e., a domain name or IP address) identified by
the ``addr`` setting. A basic configuration file will typically specify only
minimal settings for each instance of monitor daemons. For example:

.. code-block:: ini

    [global]
    mon_initial_members = ceph1
    mon_host = 10.0.0.1


.. important:: The ``host`` setting is the short name of the node (i.e., not
   an FQDN). It is **NOT** an IP address either. Enter ``hostname -s`` on
   the command line to retrieve the name of the node. Do not use ``host``
   settings for anything other than initial monitors unless you are deploying
   Ceph manually. You **MUST NOT** specify ``host`` under individual daemons
   when using deployment tools like ``chef`` or ``ceph-deploy``, as those tools
   will enter the appropriate values for you in the cluster map.


.. _ceph-network-config:

Networks
========

See the `Network Configuration Reference`_ for a detailed discussion about
configuring a network for use with Ceph.


Monitors
========

Ceph production clusters typically deploy with a minimum of three :term:`Ceph
Monitor` daemons to ensure high availability should a monitor instance crash.
At least three monitors ensure that the Paxos algorithm can determine which
version of the :term:`Ceph Cluster Map` is the most recent, based on a
majority of the Ceph Monitors in the quorum.

.. note:: You may deploy Ceph with a single monitor, but if the instance fails,
   the lack of other monitors may interrupt data service availability.

Ceph Monitors typically listen on port ``6789``. For example:

.. code-block:: ini

    [mon.a]
    host = hostName
    mon addr = 150.140.130.120:6789

By default, Ceph expects that you will store a monitor's data under the
following path::

    /var/lib/ceph/mon/$cluster-$id

You or a deployment tool (e.g., ``ceph-deploy``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", the
foregoing directory would evaluate to::

    /var/lib/ceph/mon/ceph-a

For additional details, see the `Monitor Config Reference`_.

.. _Monitor Config Reference: ../mon-config-ref


.. _ceph-osd-config:


Authentication
==============

.. versionadded:: Bobtail 0.56

For Bobtail (v 0.56) and beyond, you should expressly enable or disable
authentication in the ``[global]`` section of your Ceph configuration file. ::

    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx

Additionally, you should enable message signing. See the `Cephx Config
Reference`_ for details.

.. important:: When upgrading, we recommend expressly disabling authentication
   first, then performing the upgrade. Once the upgrade is complete, re-enable
   authentication.

.. _Cephx Config Reference: ../auth-config-ref


.. _ceph-monitor-config:


OSDs
====

Ceph production clusters typically deploy :term:`Ceph OSD Daemons` where one
node has one OSD daemon running a filestore on one storage drive. A typical
deployment specifies a journal size. For example:

.. code-block:: ini

    [osd]
    osd journal size = 10000

    [osd.0]
    host = {hostname} # manual deployments only.

By default, Ceph expects that you will store a Ceph OSD Daemon's data under
the following path::

    /var/lib/ceph/osd/$cluster-$id

You or a deployment tool (e.g., ``ceph-deploy``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", the
foregoing directory would evaluate to::

    /var/lib/ceph/osd/ceph-0

You may override this path using the ``osd data`` setting. We don't recommend
changing the default location. Create the default directory on your OSD host.

::

    ssh {osd-host}
    sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}

The ``osd data`` path ideally leads to a mount point with a hard disk that is
separate from the hard disk that stores and runs the operating system and
daemons. If the OSD is for a disk other than the OS disk, prepare it for
use with Ceph, and mount it to the directory you just created::

    ssh {new-osd-host}
    sudo mkfs -t {fstype} /dev/{disk}
    sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number}

We recommend using the ``xfs`` file system when running
:command:`mkfs`. (``btrfs`` and ``ext4`` are not recommended and no
longer tested.)

See the `OSD Config Reference`_ for additional configuration details.


Heartbeats
==========

During runtime operations, Ceph OSD Daemons check up on other Ceph OSD Daemons
and report their findings to the Ceph Monitor. You do not have to provide any
settings. However, if you have network latency issues, you may wish to modify
the settings.

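If you do tune them, the heartbeat settings live under ``[osd]``. A minimal
sketch (the values shown match the documented defaults and are illustrative
only):

.. code-block:: ini

    [osd]
    # Seconds between heartbeat pings to peer OSDs.
    osd heartbeat interval = 6
    # Seconds without a reply before a peer is reported down.
    osd heartbeat grace = 20
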
See `Configuring Monitor/OSD Interaction`_ for additional details.


.. _ceph-logging-and-debugging:

Logs / Debugging
================

Sometimes you may encounter issues with Ceph that require you to modify
logging output and use Ceph's debugging facilities. See `Debugging and
Logging`_ for details on log rotation.

.. _Debugging and Logging: ../../troubleshooting/log-and-debug


Example ceph.conf
=================

.. literalinclude:: demo-ceph.conf
   :language: ini

.. _ceph-runtime-config:

Runtime Changes
===============

Ceph allows you to make changes to the configuration of a ``ceph-osd``,
``ceph-mon``, or ``ceph-mds`` daemon at runtime. This capability is quite
useful for increasing/decreasing logging output, enabling/disabling debug
settings, and even for runtime optimization. The following reflects runtime
configuration usage::

    ceph tell {daemon-type}.{id or *} injectargs --{name} {value} [--{name} {value}]

Replace ``{daemon-type}`` with one of ``osd``, ``mon`` or ``mds``. You may
apply the runtime setting to all daemons of a particular type with ``*``, or
specify a specific daemon's ID (i.e., its number or letter). For example, to
increase debug logging for a ``ceph-osd`` daemon named ``osd.0``, execute the
following::

    ceph tell osd.0 injectargs --debug-osd 20 --debug-ms 1

In your ``ceph.conf`` file, you may use spaces when specifying a
setting name. When specifying a setting name on the command line,
ensure that you use an underscore or hyphen (``_`` or ``-``) between
terms (e.g., ``debug osd`` becomes ``--debug-osd``).


Viewing a Configuration at Runtime
==================================

If your Ceph Storage Cluster is running, and you would like to see the
configuration settings of a running daemon, execute the following::

    ceph daemon {daemon-type}.{id} config show | less

If you are on a machine where ``osd.0`` is running, the command would be::

    ceph daemon osd.0 config show | less

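To inspect a single value rather than the full dump, the admin socket also
supports ``config get`` (use the underscore form of the setting name)::

    ceph daemon osd.0 config get debug_osd
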
Running Multiple Clusters
=========================

With Ceph, you can run multiple Ceph Storage Clusters on the same hardware.
Running multiple clusters provides a higher level of isolation compared to
using different pools on the same cluster with different CRUSH rulesets. A
separate cluster will have separate monitor, OSD and metadata server
processes. When running Ceph with default settings, the default cluster name
is ``ceph``, which means you would save your Ceph configuration file with the
file name ``ceph.conf`` in the ``/etc/ceph`` default directory.

See `ceph-deploy new`_ for details.

When you run multiple clusters, you must name your cluster and save the Ceph
configuration file with the name of the cluster. For example, a cluster named
``openstack`` will have a Ceph configuration file with the file name
``openstack.conf`` in the ``/etc/ceph`` default directory.

.. important:: Cluster names must consist of letters a-z and digits 0-9 only.

Separate clusters imply separate data disks and journals, which are not shared
between clusters. Referring to `Metavariables`_, the ``$cluster`` metavariable
evaluates to the cluster name (i.e., ``openstack`` in the foregoing example).
Various settings use the ``$cluster`` metavariable, including:

- ``keyring``
- ``admin socket``
- ``log file``
- ``pid file``
- ``mon data``
- ``mon cluster log file``
- ``osd data``
- ``osd journal``
- ``mds data``
- ``rgw data``

See `General Settings`_, `OSD Settings`_, `Monitor Settings`_, `MDS Settings`_,
`RGW Settings`_ and `Log Settings`_ for relevant path defaults that use the
``$cluster`` metavariable.

.. _General Settings: ../general-config-ref
.. _OSD Settings: ../osd-config-ref
.. _Monitor Settings: ../mon-config-ref
.. _MDS Settings: ../../../cephfs/mds-config-ref
.. _RGW Settings: ../../../radosgw/config-ref/
.. _Log Settings: ../../troubleshooting/log-and-debug

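As a sketch, with a cluster named ``openstack``, a hypothetical keyring
setting such as:

.. code-block:: ini

    [global]
    keyring = /etc/ceph/$cluster.$name.keyring

would evaluate to ``/etc/ceph/openstack.client.admin.keyring`` for the
``client.admin`` user.
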
When creating default directories or files, you should use the cluster
name at the appropriate places in the path. For example::

    sudo mkdir /var/lib/ceph/osd/openstack-0
    sudo mkdir /var/lib/ceph/mon/openstack-a

.. important:: When running monitors on the same host, you should use
   different ports. By default, monitors use port 6789. If you already
   have monitors using port 6789, use a different port for your other
   cluster(s).

To invoke a cluster other than the default ``ceph`` cluster, use the
``-c {cluster-name}.conf`` option with the ``ceph`` command. For example::

    ceph -c {cluster-name}.conf health
    ceph -c openstack.conf health

.. _Hardware Recommendations: ../../../start/hardware-recommendations
.. _Network Configuration Reference: ../network-config-ref
.. _OSD Config Reference: ../osd-config-ref
.. _Configuring Monitor/OSD Interaction: ../mon-osd-interaction
.. _ceph-deploy new: ../../deployment/ceph-deploy-new#naming-a-cluster