.. _ceph-conf-common-settings:

Common Settings
===============
The `Hardware Recommendations`_ section provides some hardware guidelines for
configuring a Ceph Storage Cluster. It is possible for a single :term:`Ceph
Node` to run multiple daemons. For example, a single node with multiple drives
may run one ``ceph-osd`` for each drive. Ideally, you will have a node for a
particular type of process. For example, some nodes may run ``ceph-osd``
daemons, other nodes may run ``ceph-mds`` daemons, and still other nodes may
run ``ceph-mon`` daemons.
Each node has a name identified by the ``host`` setting. Monitors also specify
a network address (i.e., a domain name or IP address) and port, identified by
the ``addr`` setting. A basic configuration file will typically specify only
minimal settings for each instance of monitor daemons. For example:
.. code-block:: ini

    [global]
    mon_initial_members = ceph1
    mon_host = 10.0.0.1
.. important:: The ``host`` setting is the short name of the node (i.e., not
   an FQDN). It is **NOT** an IP address either. Enter ``hostname -s`` on
   the command line to retrieve the name of the node. Do not use ``host``
   settings for anything other than initial monitors unless you are deploying
   Ceph manually. You **MUST NOT** specify ``host`` under individual daemons
   when using deployment tools like ``chef`` or ``cephadm``, as those tools
   will enter the appropriate values for you in the cluster map.
.. _ceph-network-config:

Networks
========
See the `Network Configuration Reference`_ for a detailed discussion about
configuring a network for use with Ceph.
Monitors
========

Production Ceph clusters typically provision a minimum of three :term:`Ceph
Monitor` daemons to ensure availability should a monitor instance crash. A
minimum of three ensures that the Paxos algorithm can determine which version
of the :term:`Ceph Cluster Map` is the most recent from a majority of Ceph
Monitors in the quorum.
.. note:: You may deploy Ceph with a single monitor, but if the instance
   fails, the lack of other monitors may interrupt data service availability.
Ceph Monitors normally listen on port ``3300`` for the new v2 protocol and on
port ``6789`` for the old v1 protocol.
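When both protocols are in use, a monitor's address can list each port
explicitly in the ``mon_host`` setting. A minimal sketch (the ``10.0.0.1``
address is a placeholder, not a recommendation):

.. code-block:: ini

    [global]
    # v2 (msgr2) address first, then the legacy v1 address
    mon_host = [v2:10.0.0.1:3300,v1:10.0.0.1:6789]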
By default, Ceph expects to store monitor data under the following path::

    /var/lib/ceph/mon/$cluster-$id
You or a deployment tool (e.g., ``cephadm``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", the
foregoing directory would evaluate to::

    /var/lib/ceph/mon/ceph-a
For additional details, see the `Monitor Config Reference`_.

.. _Monitor Config Reference: ../mon-config-ref
Authentication
==============

.. versionadded:: Bobtail 0.56

For Bobtail (v 0.56) and beyond, you should expressly enable or disable
authentication in the ``[global]`` section of your Ceph configuration file:
.. code-block:: ini

    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
Additionally, you should enable message signing. See the `Cephx Config
Reference`_ for details.

.. _Cephx Config Reference: ../auth-config-ref
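Message signing is controlled by the ``cephx`` signature settings. As a
minimal sketch, enabling it in ``[global]`` might look like the following
(consult the Cephx Config Reference for the authoritative list of options):

.. code-block:: ini

    [global]
    # require signatures on cluster traffic and sign outgoing messages
    cephx_require_signatures = true
    cephx_sign_messages = true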
.. _ceph-monitor-config:

OSDs
====
Production Ceph clusters typically deploy :term:`Ceph OSD Daemons` where one
node has one OSD daemon running a Filestore on one storage device. The
BlueStore back end is now the default, but when using Filestore you must
specify a journal size. For example:
.. code-block:: ini

    [global]
    osd_journal_size = 10000

    [osd.0]
    host = {hostname}  # manual deployments only
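As a rough illustration of how a Filestore journal size is often reasoned
about (the 100 MB/s throughput and 5 s sync interval here are hypothetical
values, not recommendations): the journal should be at least twice the
product of the expected drive throughput and
``filestore_max_sync_interval``, so 2 × 100 MB/s × 5 s suggests roughly
1000 MB:

.. code-block:: ini

    [osd]
    # 2 * (100 MB/s expected throughput) * (5 s filestore_max_sync_interval)
    osd_journal_size = 1000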
By default, Ceph expects to store a Ceph OSD Daemon's data at the following
path::

    /var/lib/ceph/osd/$cluster-$id
You or a deployment tool (e.g., ``cephadm``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", this
example would evaluate to::

    /var/lib/ceph/osd/ceph-0
You may override this path using the ``osd_data`` setting. We recommend not
changing the default location. Create the default directory on your OSD
host::

    sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}
The ``osd_data`` path ideally leads to a mount point with a device that is
separate from the device that contains the operating system and daemons. If
an OSD is to use a device other than the OS device, prepare it for use with
Ceph, and mount it to the directory you just created::

    sudo mkfs -t {fstype} /dev/{disk}
    sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number}
We recommend using the ``xfs`` file system when running
:command:`mkfs`. (``btrfs`` and ``ext4`` are not recommended and are no
longer supported.)
See the `OSD Config Reference`_ for additional configuration details.
Heartbeats
==========

During runtime operations, Ceph OSD Daemons check up on other Ceph OSD Daemons
and report their findings to the Ceph Monitor. You do not have to provide any
settings. However, if you have network latency issues, you may wish to modify
the default settings.
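Heartbeat timing is controlled by a handful of OSD options. A minimal sketch
of loosening them for a high-latency network (the values below are
illustrative placeholders, not recommendations):

.. code-block:: ini

    [osd]
    # seconds between heartbeat pings to peer OSDs
    osd_heartbeat_interval = 12
    # seconds without a reply before a peer is reported down
    osd_heartbeat_grace = 40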
See `Configuring Monitor/OSD Interaction`_ for additional details.
.. _ceph-logging-and-debugging:

Logs / Debugging
================

Sometimes you may encounter issues with Ceph that require modifying logging
output and using Ceph's debugging tools. See `Debugging and Logging`_ for
details on log rotation.

.. _Debugging and Logging: ../../troubleshooting/log-and-debug
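Debug verbosity is set per subsystem. A minimal sketch of raising the
messenger and OSD debug levels (the levels shown are examples only; higher
levels slow the cluster and grow logs quickly):

.. code-block:: ini

    [global]
    # messenger (network) subsystem
    debug_ms = 1

    [osd]
    # OSD subsystem
    debug_osd = 5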
Example ceph.conf
=================

.. literalinclude:: demo-ceph.conf
   :language: ini
.. _ceph-runtime-config:
Running Multiple Clusters (DEPRECATED)
======================================
Each Ceph cluster has an internal name that is used as part of configuration
and log file names as well as directory and mountpoint names. This name
defaults to "ceph". Previous releases of Ceph allowed one to specify a custom
name instead, for example "ceph2". This was intended to facilitate running
multiple logical clusters on the same physical hardware, but in practice this
was rarely exploited and should no longer be attempted. Prior documentation
could also be misinterpreted as requiring unique cluster names in order to
use ``rbd-mirror``.
Custom cluster names are now considered deprecated and the ability to deploy
them has already been removed from some tools, though existing custom name
deployments continue to operate. The ability to run and manage clusters with
custom names may be progressively removed by future Ceph releases, so it is
strongly recommended to deploy all new clusters with the default name "ceph".
Some Ceph CLI commands accept an optional ``--cluster`` (cluster name) option.
This option is present purely for backward compatibility and need not be
accommodated by new tools and deployments.
If you do need to allow multiple clusters to exist on the same host, please use
:ref:`cephadm`, which uses containers to fully isolate each cluster.
.. _Hardware Recommendations: ../../../start/hardware-recommendations
.. _Network Configuration Reference: ../network-config-ref
.. _OSD Config Reference: ../osd-config-ref
.. _Configuring Monitor/OSD Interaction: ../mon-osd-interaction