=====================
 Preflight Checklist
=====================

Thank you for trying Ceph! We recommend setting up a ``ceph-deploy`` admin
:term:`node` and a 3-node :term:`Ceph Storage Cluster` to explore the basics of
Ceph. This **Preflight Checklist** will help you prepare a ``ceph-deploy``
admin node and three Ceph Nodes (or virtual machines) that will host your Ceph
Storage Cluster. Before proceeding any further, see `OS Recommendations`_ to
verify that you have a supported distribution and version of Linux. Using a
single Linux distribution and version across the cluster will make it easier
to troubleshoot issues that arise in production.

In the descriptions below, :term:`Node` refers to a single machine.

.. include:: quick-common.rst
Ceph-deploy Setup
=================

Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
``ceph-deploy``.

Debian/Ubuntu
-------------

For Debian and Ubuntu distributions, perform the following steps:

#. Add the release key::

    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#. Add the Ceph packages to your repository. Replace ``{ceph-stable-release}``
   with a stable Ceph release (e.g., ``hammer``, ``jewel``, etc.).
   For example::

    echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

#. Update your repository and install ``ceph-deploy``::

    sudo apt-get update && sudo apt-get install ceph-deploy

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages. Simply replace ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/``.
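For example, assuming the EU mirror serves the same directory layout as
download.ceph.com (worth verifying before relying on it), the Debian
repository line above would become::

    echo deb http://eu.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list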
RHEL/CentOS
-----------

For CentOS 7, perform the following steps:

#. On Red Hat Enterprise Linux 7, register the target machine with
   ``subscription-manager``, verify your subscriptions, and enable the
   "Extras" repository for package dependencies. For example::

    sudo subscription-manager repos --enable=rhel-7-server-extras-rpms

#. Install and enable the Extra Packages for Enterprise Linux (EPEL)
   repository::

    sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

   Please see the `EPEL wiki`_ page for more information.

#. Add the Ceph packages to your repository. Open a text editor and create a
   Yellowdog Updater, Modified (YUM) entry. Use the file path
   ``/etc/yum.repos.d/ceph.repo``. For example::

    sudo vim /etc/yum.repos.d/ceph.repo

   Paste the following example code. Replace ``{ceph-release}`` with
   the recent major release of Ceph (e.g., ``jewel``). Replace ``{distro}``
   with your Linux distribution (e.g., ``el7`` for CentOS 7). Finally, save the
   contents to the ``/etc/yum.repos.d/ceph.repo`` file. ::

    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc

#. Update your repository and install ``ceph-deploy``::

    sudo yum update && sudo yum install ceph-deploy

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages. Simply replace ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/``.
openSUSE
--------

The Ceph project does not currently publish release RPMs for openSUSE, but
a stable version of Ceph is included in the default update repository, so
installing it is just a matter of::

    sudo zypper install ceph
    sudo zypper install ceph-deploy

If the distro version is out-of-date, open a bug at
https://bugzilla.opensuse.org/index.cgi and possibly try your luck with one of
the following repositories:

* Hammer:
  https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ahammer&package=ceph

* Jewel:
  https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ajewel&package=ceph
Ceph Node Setup
===============

The admin node must have password-less SSH access to Ceph nodes.
When ``ceph-deploy`` logs in to a Ceph node as a user, that particular
user must have passwordless ``sudo`` privileges.

Install NTP
-----------

We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to
prevent issues arising from clock drift. See `Clock`_ for details.

On CentOS / RHEL, execute::

    sudo yum install ntp ntpdate ntp-doc

On Debian / Ubuntu, execute::

    sudo apt-get install ntp

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
same NTP time server. See `NTP`_ for details.
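For example, on systemd-based distributions you can enable and start the
service at boot; the unit is typically named ``ntpd`` on CentOS/RHEL and
``ntp`` on Debian/Ubuntu (verify the name on your system)::

    sudo systemctl enable ntpd
    sudo systemctl start ntpd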
Install SSH Server
------------------

For **ALL** Ceph Nodes perform the following steps:

#. Install an SSH server (if necessary) on each Ceph Node::

    sudo apt-get install openssh-server

   or::

    sudo yum install openssh-server

#. Ensure the SSH server is running on **ALL** Ceph Nodes.
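For example, on systemd-based distributions you can confirm the daemon is
active; the unit is commonly named ``sshd`` on CentOS/RHEL and ``ssh`` on
Debian/Ubuntu::

    sudo systemctl status sshd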
Create a Ceph Deploy User
-------------------------

The ``ceph-deploy`` utility must log in to a Ceph node as a user
that has passwordless ``sudo`` privileges, because it needs to install
software and configuration files without prompting for passwords.

Recent versions of ``ceph-deploy`` support a ``--username`` option so you can
specify any user that has password-less ``sudo`` (including ``root``, although
this is **NOT** recommended). To use ``ceph-deploy --username {username}``, the
user you specify must have password-less SSH access to the Ceph node, as
``ceph-deploy`` will not prompt you for a password.

We recommend creating a specific user for ``ceph-deploy`` on **ALL** Ceph nodes
in the cluster. Please do **NOT** use "ceph" as the user name. A uniform user
name across the cluster may improve ease of use (not required), but you should
avoid obvious user names, because attackers typically target them with brute
force attempts (e.g., ``root``, ``admin``, ``{productname}``). The following
procedure, substituting ``{username}`` for the user name you define, describes
how to create a user with passwordless ``sudo``.

.. note:: Starting with the `Infernalis release`_, the "ceph" user name is
   reserved for the Ceph daemons. If the "ceph" user already exists on the
   Ceph nodes, removing the user must be done before attempting an upgrade.

#. Create a new user on each Ceph Node. ::

    sudo useradd -d /home/{username} -m {username}
    sudo passwd {username}

#. For the new user you added to each Ceph node, ensure that the user has
   ``sudo`` privileges. ::

    echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
    sudo chmod 0440 /etc/sudoers.d/{username}
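As a quick sanity check, confirm from the admin node that the new user can run
``sudo`` without a password (until keys are distributed in the next section,
you will be prompted for the user's SSH password)::

    ssh {username}@node1 sudo whoami

This should print ``root`` without a ``sudo`` password prompt. If it fails
with a tty-related error, see the TTY section below.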
Enable Password-less SSH
------------------------

Since ``ceph-deploy`` will not prompt for a password, you must generate
SSH keys on the admin node and distribute the public key to each Ceph
node. ``ceph-deploy`` will attempt to generate the SSH keys for initial
monitors.

#. Generate the SSH keys, but do not use ``sudo`` or the
   ``root`` user. Leave the passphrase empty::

    ssh-keygen

    Generating public/private key pair.
    Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /ceph-admin/.ssh/id_rsa.
    Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

#. Copy the key to each Ceph Node, replacing ``{username}`` with the user name
   you created with `Create a Ceph Deploy User`_. ::

    ssh-copy-id {username}@node1
    ssh-copy-id {username}@node2
    ssh-copy-id {username}@node3
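At this point, logging in from the admin node should no longer prompt for a
password::

    ssh {username}@node1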
#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
   admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
   created without requiring you to specify ``--username {username}`` each
   time you execute ``ceph-deploy``. This has the added benefit of streamlining
   ``ssh`` and ``scp`` usage. Replace ``{username}`` with the user name you
   created::

    Host node1
       Hostname node1
       User {username}
    Host node2
       Hostname node2
       User {username}
    Host node3
       Hostname node3
       User {username}
Enable Networking On Bootup
---------------------------

Ceph OSDs peer with each other and report to Ceph Monitors over the network.
If networking is ``off`` by default, the Ceph cluster cannot come online
during bootup until you enable networking.

The default configuration on some distributions (e.g., CentOS) has the
networking interface(s) off by default. Ensure that, during boot up, your
network interface(s) turn(s) on so that your Ceph daemons can communicate over
the network. For example, on Red Hat and CentOS, navigate to
``/etc/sysconfig/network-scripts`` and ensure that the ``ifcfg-{iface}`` file
has ``ONBOOT`` set to ``yes``.
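A quick way to check, assuming a hypothetical interface named ``eth0``
(substitute your actual interface name)::

    grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-eth0
    # expected output: ONBOOT=yes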
Ensure Connectivity
-------------------

Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
Address hostname resolution issues as necessary.

.. note:: Hostnames should resolve to a network IP address, not to the
   loopback IP address (e.g., hostnames should resolve to an IP address other
   than ``127.0.0.1``). If you use your admin node as a Ceph node, you
   should also ensure that it resolves to its hostname and IP address
   (i.e., not its loopback IP address).
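For example, from the admin node (``node1`` here stands in for each of your
Ceph Node short hostnames; ``getent`` shows the address the name resolves
to)::

    ping -c 3 node1
    getent hosts node1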
Open Required Ports
-------------------

Ceph Monitors communicate using port ``6789`` by default. Ceph OSDs communicate
in a port range of ``6800:7300`` by default. See the `Network Configuration
Reference`_ for details. Ceph OSDs can use multiple network connections to
communicate with clients, monitors, other OSDs for replication, and other OSDs
for heartbeating.

On some distributions (e.g., RHEL), the default firewall configuration is
fairly strict. You may need to adjust your firewall settings to allow inbound
requests so that clients in your network can communicate with daemons on your
Ceph nodes.

For ``firewalld`` on RHEL 7, add the ``ceph-mon`` service for Ceph Monitor
nodes and the ``ceph`` service for Ceph OSDs and MDSs to the public zone and
ensure that you make the settings permanent so that they are enabled on reboot.

For example, on monitors::

    sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent

and on OSDs and MDSs::

    sudo firewall-cmd --zone=public --add-service=ceph --permanent

Once you have finished configuring firewalld with the ``--permanent`` flag,
you can make the changes live immediately without rebooting::

    sudo firewall-cmd --reload
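You can then confirm that the services appear in the public zone (a quick
check; the output should include ``ceph`` or ``ceph-mon``, depending on the
node's role)::

    sudo firewall-cmd --zone=public --list-services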
For ``iptables``, add port ``6789`` for Ceph Monitors and ports ``6800:7300``
for Ceph OSDs. For example::

    sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT

Once you have finished configuring ``iptables``, ensure that you make the
changes persistent on each node so that they will be in effect when your nodes
reboot. For example::

    /sbin/service iptables save
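On CentOS/RHEL 7, the ``save`` command above is typically provided by the
``iptables-services`` package (an assumption to verify on your system). If
the command is unavailable, install that package first::

    sudo yum install iptables-services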
TTY
---

On CentOS and RHEL, you may receive an error while trying to execute
``ceph-deploy`` commands. If ``requiretty`` is set by default on your Ceph
nodes, disable it by executing ``sudo visudo`` and locate the ``Defaults
requiretty`` setting. Change it to ``Defaults:{username} !requiretty``
(substituting the user name you defined) or comment it out to ensure that
``ceph-deploy`` can connect using the user you created with
`Create a Ceph Deploy User`_.

.. note:: If editing ``/etc/sudoers``, ensure that you use
   ``sudo visudo`` rather than a text editor.
SELinux
-------

On CentOS and RHEL, SELinux is set to ``Enforcing`` by default. To streamline
your installation, we recommend setting SELinux to ``Permissive`` or disabling
it entirely and ensuring that your installation and cluster are working
properly before hardening your configuration. To set SELinux to
``Permissive``, execute the following::

    sudo setenforce 0

To configure SELinux persistently (recommended if SELinux is an issue), modify
the configuration file at ``/etc/selinux/config``.
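For example, to keep SELinux in permissive mode across reboots, set the
following line in ``/etc/selinux/config``::

    SELINUX=permissive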
Priorities/Preferences
----------------------

Ensure that your package manager has priority/preferences packages installed
and enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to
enable optional repositories. ::

    sudo yum install yum-plugin-priorities

For example, on RHEL 7 server, execute the following to install
``yum-plugin-priorities`` and enable the ``rhel-7-server-optional-rpms``
repository::

    sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms
Summary
=======

This completes the Quick Start Preflight. Proceed to the `Storage Cluster
Quick Start`_.

.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _OS Recommendations: ../os-recommendations
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Clock: ../../rados/configuration/mon-config-ref#clock
.. _NTP: http://www.ntp.org/
.. _Infernalis release: ../../release-notes/#v9-1-0-infernalis-release-candidate
.. _EPEL wiki: https://fedoraproject.org/wiki/EPEL