==================
 Configuring Ceph
==================

When you start the Ceph service, the initialization process activates a series
of daemons that run in the background. A :term:`Ceph Storage Cluster` runs
two types of daemons:

- :term:`Ceph Monitor` (``ceph-mon``)
- :term:`Ceph OSD Daemon` (``ceph-osd``)

Ceph Storage Clusters that support the :term:`Ceph Filesystem` run at least one
:term:`Ceph Metadata Server` (``ceph-mds``). Clusters that support :term:`Ceph
Object Storage` run Ceph Gateway daemons (``radosgw``). For your convenience,
each daemon has a series of default values (*i.e.*, many are set by
``ceph/src/common/config_opts.h``). You may override these settings with a Ceph
configuration file.


.. _ceph-conf-file:

The Configuration File
======================

When you start a Ceph Storage Cluster, each daemon looks for a Ceph
configuration file (i.e., ``ceph.conf`` by default) that provides the cluster's
configuration settings. For manual deployments, you need to create a Ceph
configuration file. For tools that create configuration files for you (*e.g.*,
``ceph-deploy``, Chef, etc.), you may use the information contained herein as a
reference. The Ceph configuration file defines:

- Cluster Identity
- Authentication settings
- Cluster membership
- Host names
- Host addresses
- Paths to keyrings
- Paths to journals
- Paths to data
- Other runtime options

The default Ceph configuration file locations in sequential order include:

#. ``$CEPH_CONF`` (*i.e.,* the path following the ``$CEPH_CONF``
   environment variable)
#. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
#. ``/etc/ceph/ceph.conf``
#. ``~/.ceph/config``
#. ``./ceph.conf`` (*i.e.,* in the current working directory)

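For example, the first two mechanisms can be exercised as follows (the path
``/etc/ceph/mycluster.conf`` is a hypothetical stand-in for your own
configuration file)::

        # Checked first: the CEPH_CONF environment variable.
        export CEPH_CONF=/etc/ceph/mycluster.conf
        ceph health

        # Checked next: the -c command line argument.
        ceph -c /etc/ceph/mycluster.conf health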

The Ceph configuration file uses an *ini* style syntax. You can add comments
by preceding them with a pound sign (#) or a semi-colon (;). For example:

.. code-block:: ini

        # <--A number (#) sign precedes a comment.
        ; A comment may be anything.
        # Comments always follow a semi-colon (;) or a pound (#) on each line.
        # The end of the line terminates a comment.
        # We recommend that you provide comments in your configuration file(s).


.. _ceph-conf-settings:

Config Sections
===============

The configuration file can configure all Ceph daemons in a Ceph Storage
Cluster, or all Ceph daemons of a particular type. To configure a series of
daemons, the settings must be included under the sections for the processes
that will receive the configuration, as follows:

``[global]``

:Description: Settings under ``[global]`` affect all daemons in a Ceph Storage
              Cluster.

:Example: ``auth supported = cephx``

``[osd]``

:Description: Settings under ``[osd]`` affect all ``ceph-osd`` daemons in
              the Ceph Storage Cluster, and override the same setting in
              ``[global]``.

:Example: ``osd journal size = 1000``

``[mon]``

:Description: Settings under ``[mon]`` affect all ``ceph-mon`` daemons in
              the Ceph Storage Cluster, and override the same setting in
              ``[global]``.

:Example: ``mon addr = 10.0.0.101:6789``


``[mds]``

:Description: Settings under ``[mds]`` affect all ``ceph-mds`` daemons in
              the Ceph Storage Cluster, and override the same setting in
              ``[global]``.

:Example: ``host = myserver01``

``[client]``

:Description: Settings under ``[client]`` affect all Ceph Clients
              (e.g., mounted Ceph Filesystems, mounted Ceph Block Devices,
              etc.).

:Example: ``log file = /var/log/ceph/radosgw.log``


Global settings affect all instances of all daemons in the Ceph Storage
Cluster. Use the ``[global]`` setting for values that are common for all
daemons in the Ceph Storage Cluster. You can override each ``[global]``
setting by:

#. Changing the setting in a particular process type
   (*e.g.,* ``[osd]``, ``[mon]``, ``[mds]`` ).

#. Changing the setting in a particular process (*e.g.,* ``[osd.1]`` ).

Overriding a global setting affects all child processes, except those that
you specifically override in a particular daemon.

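As a minimal sketch of this precedence (using the ``debug ms`` setting, which
also appears in the `Runtime Changes`_ section below; the values are
illustrative only):

.. code-block:: ini

        [global]
        debug ms = 0          # applies to every daemon in the cluster

        [osd]
        debug ms = 1          # overrides [global] for all ceph-osd daemons

        [osd.1]
        debug ms = 5          # overrides [osd] for osd.1 only
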
A typical global setting involves activating authentication. For example:

.. code-block:: ini

        [global]
        # Enable authentication between hosts within the cluster.
        # v 0.54 and earlier
        auth supported = cephx

        # v 0.55 and after
        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx


You can specify settings that apply to a particular type of daemon. When you
specify settings under ``[osd]``, ``[mon]`` or ``[mds]`` without specifying a
particular instance, the setting will apply to all OSDs, monitors or metadata
daemons respectively.

A typical daemon-wide setting involves setting journal sizes, filestore
settings, etc. For example:

.. code-block:: ini

        [osd]
        osd journal size = 1000


You may specify settings for particular instances of a daemon. You may specify
an instance by entering its type, followed by a period (.) and the instance
ID. The instance ID for a Ceph OSD Daemon is always numeric, but it may be
alphanumeric for Ceph Monitors and Ceph Metadata Servers.

.. code-block:: ini

        [osd.1]
        # settings affect osd.1 only.

        [mon.a]
        # settings affect mon.a only.

        [mds.b]
        # settings affect mds.b only.


If the daemon you specify is a Ceph Gateway client, specify the daemon and the
instance, delimited by a period (.). For example::

        [client.radosgw.instance-name]
        # settings affect client.radosgw.instance-name only.



.. _ceph-metavariables:

Metavariables
=============

Metavariables simplify Ceph Storage Cluster configuration dramatically. When a
metavariable is set in a configuration value, Ceph expands the metavariable
into a concrete value. Metavariables are very powerful when used within the
``[global]``, ``[osd]``, ``[mon]``, ``[mds]`` or ``[client]`` sections of your
configuration file. Ceph metavariables are similar to Bash shell expansion.

Ceph supports the following metavariables:


``$cluster``

:Description: Expands to the Ceph Storage Cluster name. Useful when running
              multiple Ceph Storage Clusters on the same hardware.

:Example: ``/etc/ceph/$cluster.keyring``
:Default: ``ceph``


``$type``

:Description: Expands to one of ``mds``, ``osd``, or ``mon``, depending on the
              type of the daemon in question.

:Example: ``/var/lib/ceph/$type``


``$id``

:Description: Expands to the daemon identifier. For ``osd.0``, this would be
              ``0``; for ``mds.a``, it would be ``a``.

:Example: ``/var/lib/ceph/$type/$cluster-$id``


``$host``

:Description: Expands to the host name of the daemon in question.


``$name``

:Description: Expands to ``$type.$id``.
:Example: ``/var/run/ceph/$cluster-$name.asok``

``$pid``

:Description: Expands to the daemon's PID.
:Example: ``/var/run/ceph/$cluster-$name-$pid.asok``

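For example, a configuration that keeps per-daemon paths uniform across the
whole cluster might use metavariables like this (a sketch combining the
examples above):

.. code-block:: ini

        [global]
        # Expands to /etc/ceph/ceph.keyring in a cluster named "ceph".
        keyring = /etc/ceph/$cluster.keyring
        # Expands to e.g. /var/run/ceph/ceph-osd.0.asok for osd.0.
        admin socket = /var/run/ceph/$cluster-$name.asok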

.. _ceph-conf-common-settings:

Common Settings
===============

The `Hardware Recommendations`_ section provides some hardware guidelines for
configuring a Ceph Storage Cluster. It is possible for a single :term:`Ceph
Node` to run multiple daemons. For example, a single node with multiple drives
may run one ``ceph-osd`` for each drive. Ideally, you will have a node for a
particular type of process. For example, some nodes may run ``ceph-osd``
daemons, other nodes may run ``ceph-mds`` daemons, and still other nodes may
run ``ceph-mon`` daemons.

Each node has a name identified by the ``host`` setting. Monitors also specify
a network address and port (i.e., a domain name or IP address) identified by
the ``addr`` setting. A basic configuration file will typically specify only
minimal settings for each instance of monitor daemons. For example:

.. code-block:: ini

        [global]
        mon_initial_members = ceph1
        mon_host = 10.0.0.1


.. important:: The ``host`` setting is the short name of the node (i.e., not
   an FQDN). It is **NOT** an IP address either. Enter ``hostname -s`` on
   the command line to retrieve the name of the node. Do not use ``host``
   settings for anything other than initial monitors unless you are deploying
   Ceph manually. You **MUST NOT** specify ``host`` under individual daemons
   when using deployment tools like ``chef`` or ``ceph-deploy``, as those
   tools will enter the appropriate values for you in the cluster map.


.. _ceph-network-config:

Networks
========

See the `Network Configuration Reference`_ for a detailed discussion about
configuring a network for use with Ceph.


.. _ceph-monitor-config:

Monitors
========

Ceph production clusters typically deploy with a minimum of three :term:`Ceph
Monitor` daemons to ensure high availability should a monitor instance crash.
At least three monitors ensure that the Paxos algorithm can determine which
version of the :term:`Ceph Cluster Map` is the most recent, based on a
majority of the Ceph Monitors in the quorum.

.. note:: You may deploy Ceph with a single monitor, but if the instance
   fails, the lack of other monitors may interrupt data service availability.

Ceph Monitors typically listen on port ``6789``. For example:

.. code-block:: ini

        [mon.a]
        host = hostName
        mon addr = 150.140.130.120:6789

By default, Ceph expects that you will store a monitor's data under the
following path::

        /var/lib/ceph/mon/$cluster-$id

You or a deployment tool (e.g., ``ceph-deploy``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", the
foregoing directory would evaluate to::

        /var/lib/ceph/mon/ceph-a

For additional details, see the `Monitor Config Reference`_.

.. _Monitor Config Reference: ../mon-config-ref


Authentication
==============

.. versionadded:: Bobtail 0.56

For Bobtail (v 0.56) and beyond, you should expressly enable or disable
authentication in the ``[global]`` section of your Ceph configuration file. ::

        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx

Additionally, you should enable message signing. See `Cephx Config Reference`_
for details.

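As a minimal sketch (``cephx sign messages`` is described in the `Cephx Config
Reference`_; the value shown assumes you want signing on):

.. code-block:: ini

        [global]
        # Require signatures on messages between cluster entities.
        cephx sign messages = true
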
.. important:: When upgrading, we recommend expressly disabling authentication
   first, then performing the upgrade. Once the upgrade is complete, re-enable
   authentication.

.. _Cephx Config Reference: ../auth-config-ref

.. _ceph-osd-config:


OSDs
====

Ceph production clusters typically deploy :term:`Ceph OSD Daemons` where one
node has one OSD daemon running a filestore on one storage drive. A typical
deployment specifies a journal size. For example:

.. code-block:: ini

        [osd]
        osd journal size = 10000

        [osd.0]
        host = {hostname} # manual deployments only.


By default, Ceph expects that you will store a Ceph OSD Daemon's data under
the following path::

        /var/lib/ceph/osd/$cluster-$id

You or a deployment tool (e.g., ``ceph-deploy``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", the
foregoing directory would evaluate to::

        /var/lib/ceph/osd/ceph-0

You may override this path using the ``osd data`` setting. We don't recommend
changing the default location. Create the default directory on your OSD host.

::

        ssh {osd-host}
        sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}

The ``osd data`` path ideally leads to a mount point with a hard disk that is
separate from the hard disk storing and running the operating system and
daemons. If the OSD is for a disk other than the OS disk, prepare it for
use with Ceph, and mount it to the directory you just created::

        ssh {new-osd-host}
        sudo mkfs -t {fstype} /dev/{disk}
        sudo mount -o user_xattr /dev/{disk} /var/lib/ceph/osd/ceph-{osd-number}

We recommend using the ``xfs`` file system or the ``btrfs`` file system when
running :command:`mkfs`.

See the `OSD Config Reference`_ for additional configuration details.


Heartbeats
==========

During runtime operations, Ceph OSD Daemons check up on other Ceph OSD Daemons
and report their findings to the Ceph Monitor. You do not have to provide any
settings. However, if you have network latency issues, you may wish to modify
the settings.

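For example, a sketch of loosening the heartbeat timing on a high-latency
network (``osd heartbeat interval`` and ``osd heartbeat grace`` are covered in
`Configuring Monitor/OSD Interaction`_; the values here are illustrative
only):

.. code-block:: ini

        [osd]
        # How often an OSD pings its peers, in seconds.
        osd heartbeat interval = 12
        # How long to wait without a reply before reporting an OSD down.
        osd heartbeat grace = 40
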
See `Configuring Monitor/OSD Interaction`_ for additional details.


.. _ceph-logging-and-debugging:

Logs / Debugging
================

Sometimes you may encounter issues with Ceph that require you to modify
logging output and use Ceph's debugging features. See `Debugging and
Logging`_ for details on log rotation.

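For example, a sketch of raising debug verbosity in ``ceph.conf`` (the same
``debug osd`` and ``debug ms`` settings used in the `Runtime Changes`_ section
below; the values are illustrative):

.. code-block:: ini

        [global]
        # Log messenger traffic at low verbosity.
        debug ms = 1

        [osd]
        # Log OSD internals at high verbosity.
        debug osd = 20
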
.. _Debugging and Logging: ../../troubleshooting/log-and-debug


Example ceph.conf
=================

.. literalinclude:: demo-ceph.conf
   :language: ini

.. _ceph-runtime-config:

Runtime Changes
===============

Ceph allows you to make changes to the configuration of a ``ceph-osd``,
``ceph-mon``, or ``ceph-mds`` daemon at runtime. This capability is quite
useful for increasing/decreasing logging output, enabling/disabling debug
settings, and even for runtime optimization. The following reflects runtime
configuration usage::

        ceph tell {daemon-type}.{id or *} injectargs --{name} {value} [--{name} {value}]

Replace ``{daemon-type}`` with one of ``osd``, ``mon`` or ``mds``. You may
apply the runtime setting to all daemons of a particular type with ``*``, or
to a specific daemon by specifying its ID (i.e., its number or letter). For
example, to increase debug logging for a ``ceph-osd`` daemon named ``osd.0``,
execute the following::

        ceph tell osd.0 injectargs --debug-osd 20 --debug-ms 1

In your ``ceph.conf`` file, you may use spaces when specifying a
setting name. When specifying a setting name on the command line,
ensure that you use an underscore or hyphen (``_`` or ``-``) between
terms (e.g., ``debug osd`` becomes ``--debug-osd``).


Viewing a Configuration at Runtime
==================================

If your Ceph Storage Cluster is running, and you would like to see the
configuration settings from a running daemon, execute the following::

        ceph daemon {daemon-type}.{id} config show | less

If you are on a machine where ``osd.0`` is running, the command would be::

        ceph daemon osd.0 config show | less

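To check a single value instead of paging through the full dump, the admin
socket also supports ``config get`` (a sketch; ``debug_osd`` stands in for
whichever setting you want to inspect)::

        ceph daemon osd.0 config get debug_osd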

Running Multiple Clusters
=========================

With Ceph, you can run multiple Ceph Storage Clusters on the same hardware.
Running multiple clusters provides a higher level of isolation compared to
using different pools on the same cluster with different CRUSH rulesets. A
separate cluster will have separate monitor, OSD and metadata server
processes. When running Ceph with default settings, the default cluster name
is ``ceph``, which means you would save your Ceph configuration file with the
file name ``ceph.conf`` in the ``/etc/ceph`` default directory.

See `ceph-deploy new`_ for details.

When you run multiple clusters, you must name your cluster and save the Ceph
configuration file with the name of the cluster. For example, a cluster named
``openstack`` will have a Ceph configuration file with the file name
``openstack.conf`` in the ``/etc/ceph`` default directory.

.. important:: Cluster names must consist of letters a-z and digits 0-9 only.

Separate clusters imply separate data disks and journals, which are not shared
between clusters. Referring to `Metavariables`_, the ``$cluster`` metavariable
evaluates to the cluster name (i.e., ``openstack`` in the foregoing example).
Various settings use the ``$cluster`` metavariable, including:

- ``keyring``
- ``admin socket``
- ``log file``
- ``pid file``
- ``mon data``
- ``mon cluster log file``
- ``osd data``
- ``osd journal``
- ``mds data``
- ``rgw data``

See `General Settings`_, `OSD Settings`_, `Monitor Settings`_, `MDS Settings`_,
`RGW Settings`_ and `Log Settings`_ for relevant path defaults that use the
``$cluster`` metavariable.

.. _General Settings: ../general-config-ref
.. _OSD Settings: ../osd-config-ref
.. _Monitor Settings: ../mon-config-ref
.. _MDS Settings: ../../../cephfs/mds-config-ref
.. _RGW Settings: ../../../radosgw/config-ref/
.. _Log Settings: ../../troubleshooting/log-and-debug


When creating default directories or files, you should use the cluster
name at the appropriate places in the path. For example::

        sudo mkdir /var/lib/ceph/osd/openstack-0
        sudo mkdir /var/lib/ceph/mon/openstack-a

.. important:: When running monitors on the same host, you should use
   different ports. By default, monitors use port 6789. If you already
   have monitors using port 6789, use a different port for your other
   cluster(s).

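For example, a sketch of a monitor section for a second cluster sharing the
same host (the host name, address, and port here are hypothetical; any free
port other than 6789 will do):

.. code-block:: ini

        # In /etc/ceph/openstack.conf
        [mon.a]
        host = mon-host01
        mon addr = 10.0.0.101:6790
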
To invoke a cluster other than the default ``ceph`` cluster, use the
``-c {filename}.conf`` option with the ``ceph`` command. For example::

        ceph -c {cluster-name}.conf health
        ceph -c openstack.conf health


.. _Hardware Recommendations: ../../../start/hardware-recommendations
.. _Network Configuration Reference: ../network-config-ref
.. _OSD Config Reference: ../osd-config-ref
.. _Configuring Monitor/OSD Interaction: ../mon-osd-interaction
.. _ceph-deploy new: ../../deployment/ceph-deploy-new#naming-a-cluster