.. _ceph-conf-common-settings:

Common Settings
===============

The `Hardware Recommendations`_ section provides some hardware guidelines for
configuring a Ceph Storage Cluster. It is possible for a single :term:`Ceph
Node` to run multiple daemons. For example, a single node with multiple drives
may run one ``ceph-osd`` for each drive. Ideally, each node will be dedicated
to a particular type of process. For example, some nodes may run ``ceph-osd``
daemons, other nodes may run ``ceph-mds`` daemons, and still other nodes may
run ``ceph-mon`` daemons.

Each node has a name identified by the ``host`` setting. Monitors also specify
a network address and port (i.e., domain name or IP address) identified by the
``addr`` setting. A basic configuration file will typically specify only
minimal settings for each instance of monitor daemons. For example:

.. code-block:: ini

   [global]
   mon_initial_members = ceph1
   mon_host = 10.0.0.1


.. important:: The ``host`` setting is the short name of the node (i.e., not
   an FQDN). It is **NOT** an IP address either. Enter ``hostname -s`` on
   the command line to retrieve the name of the node. Do not use ``host``
   settings for anything other than initial monitors unless you are deploying
   Ceph manually. You **MUST NOT** specify ``host`` under individual daemons
   when using deployment tools like ``chef`` or ``cephadm``, as those tools
   will enter the appropriate values for you in the cluster map.

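If you are deploying Ceph manually (that is, without a tool such as
``cephadm``), per-daemon ``host`` entries might look like the following
sketch; the daemon IDs and the hostname ``ceph1`` are illustrative:

.. code-block:: ini

   [mon.a]
   host = ceph1    # short hostname, as reported by hostname -s

   [osd.0]
   host = ceph1    # manual deployments only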

.. _ceph-network-config:

Networks
========

See the `Network Configuration Reference`_ for a detailed discussion about
configuring a network for use with Ceph.


Monitors
========

Production Ceph clusters typically provision a minimum of three :term:`Ceph
Monitor` daemons to ensure availability should a monitor instance crash. A
minimum of three ensures that the Paxos algorithm can determine which version
of the :term:`Ceph Cluster Map` is the most recent from a majority of Ceph
Monitors in the quorum.

.. note:: You may deploy Ceph with a single monitor, but if the instance fails,
   the lack of other monitors may interrupt data service availability.

Ceph Monitors normally listen on port ``3300`` for the new v2 protocol, and
``6789`` for the old v1 protocol.

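Both protocols can be listed explicitly in ``mon_host`` using the bracketed
v2/v1 address form. A minimal sketch, with an illustrative address:

.. code-block:: ini

   [global]
   mon_host = [v2:10.0.0.1:3300,v1:10.0.0.1:6789]
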
By default, Ceph expects to store monitor data under the
following path::

   /var/lib/ceph/mon/$cluster-$id

You or a deployment tool (e.g., ``cephadm``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", the
foregoing directory would evaluate to::

   /var/lib/ceph/mon/ceph-a

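For a manual deployment, creating this directory by hand might look like the
following sketch (the monitor ID ``a`` is illustrative):

.. prompt:: bash $

   sudo mkdir /var/lib/ceph/mon/ceph-a
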
For additional details, see the `Monitor Config Reference`_.

.. _Monitor Config Reference: ../mon-config-ref


.. _ceph-osd-config:


Authentication
==============

.. versionadded:: Bobtail 0.56

For Bobtail (v 0.56) and beyond, you should expressly enable or disable
authentication in the ``[global]`` section of your Ceph configuration file.

.. code-block:: ini

   auth_cluster_required = cephx
   auth_service_required = cephx
   auth_client_required = cephx

Additionally, you should enable message signing. See `Cephx Config Reference`_
for details.
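
As a sketch, message signing can be required with settings like the following;
see the `Cephx Config Reference`_ for the full list of signature options and
their defaults:

.. code-block:: ini

   [global]
   cephx_require_signatures = true
   cephx_sign_messages = true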

.. _Cephx Config Reference: ../auth-config-ref


.. _ceph-monitor-config:


OSDs
====

Production Ceph clusters typically deploy :term:`Ceph OSD Daemons` with one
OSD daemon per storage device on a node. BlueStore is now the default back
end, but when using the legacy Filestore back end you must also specify a
journal size. For example:

.. code-block:: ini

   [osd]
   osd_journal_size = 10000

   [osd.0]
   host = {hostname}    # manual deployments only


By default, Ceph expects to store a Ceph OSD Daemon's data at the
following path::

   /var/lib/ceph/osd/$cluster-$id

You or a deployment tool (e.g., ``cephadm``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", this
example would evaluate to::

   /var/lib/ceph/osd/ceph-0

You may override this path using the ``osd_data`` setting. We recommend not
changing the default location. Create the default directory on your OSD host:

.. prompt:: bash $

   ssh {osd-host}
   sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}

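If you do override the location with ``osd_data``, the setting is applied per
daemon; a minimal sketch, with an illustrative path:

.. code-block:: ini

   [osd.0]
   osd_data = /srv/ceph/osd0
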
The ``osd_data`` path ideally leads to a mount point with a device that is
separate from the device that contains the operating system and
daemons. If an OSD is to use a device other than the OS device, prepare it for
use with Ceph, and mount it to the directory you just created:

.. prompt:: bash $

   ssh {new-osd-host}
   sudo mkfs -t {fstype} /dev/{disk}
   sudo mount -o user_xattr /dev/{disk} /var/lib/ceph/osd/ceph-{osd-number}

We recommend using the ``xfs`` file system when running
:command:`mkfs`. (``btrfs`` and ``ext4`` are not recommended and are no
longer tested.)

See the `OSD Config Reference`_ for additional configuration details.


Heartbeats
==========

During runtime operations, Ceph OSD Daemons check up on other Ceph OSD Daemons
and report their findings to the Ceph Monitor. You do not have to provide any
settings. However, if you have network latency issues, you may wish to modify
the default settings.

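As a sketch, the heartbeat interval and grace period can be tuned in the
``[osd]`` section; the values below are illustrative only, not
recommendations:

.. code-block:: ini

   [osd]
   osd_heartbeat_interval = 6    # seconds between heartbeats to peer OSDs
   osd_heartbeat_grace = 20      # seconds without a heartbeat before a peer is reported down
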
See `Configuring Monitor/OSD Interaction`_ for additional details.


.. _ceph-logging-and-debugging:

Logs / Debugging
================

Sometimes you may encounter issues with Ceph that require you to modify
logging output and use Ceph's debugging facilities. See `Debugging and
Logging`_ for details on log rotation.
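
As a sketch, debug levels for individual subsystems can be raised in the
configuration file; the subsystems and levels below are illustrative:

.. code-block:: ini

   [global]
   debug_ms = 1      # messenger subsystem
   debug_osd = 20    # OSD subsystem, high verbosity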

.. _Debugging and Logging: ../../troubleshooting/log-and-debug


Example ceph.conf
=================

.. literalinclude:: demo-ceph.conf
   :language: ini

.. _ceph-runtime-config:


Running Multiple Clusters (DEPRECATED)
======================================

Each Ceph cluster has an internal name that is used as part of configuration
and log file names as well as directory and mountpoint names. This name
defaults to "ceph". Previous releases of Ceph allowed one to specify a custom
name instead, for example "ceph2". This was intended to facilitate running
multiple logical clusters on the same physical hardware, but in practice this
was rarely exploited and should no longer be attempted. Prior documentation
could also be misinterpreted as requiring unique cluster names in order to
use ``rbd-mirror``.

Custom cluster names are now considered deprecated and the ability to deploy
them has already been removed from some tools, though existing custom name
deployments continue to operate. The ability to run and manage clusters with
custom names may be progressively removed by future Ceph releases, so it is
strongly recommended to deploy all new clusters with the default name "ceph".

Some Ceph CLI commands accept an optional ``--cluster`` (cluster name) option.
This option is present purely for backward compatibility and need not be
accommodated by new tools and deployments.

If you do need to allow multiple clusters to exist on the same host, please use
:ref:`cephadm`, which uses containers to fully isolate each cluster.


.. _Hardware Recommendations: ../../../start/hardware-recommendations
.. _Network Configuration Reference: ../network-config-ref
.. _OSD Config Reference: ../osd-config-ref
.. _Configuring Monitor/OSD Interaction: ../mon-osd-interaction