The ``ceph-deploy`` tool operates out of a directory on an admin
:term:`node`. Any host with network connectivity, a modern Python
environment, and SSH (such as Linux) should work.

In the descriptions below, :term:`Node` refers to a single machine.
.. include:: quick-common.rst
Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
``ceph-deploy``.

For Debian and Ubuntu distributions, perform the following steps:

#. Add the release key::

      wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#. Add the Ceph packages to your repository. Use the command below and
   replace ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
   ``luminous``). For example::

      echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

#. Update your repository and install ``ceph-deploy``::

      sudo apt update
      sudo apt install ceph-deploy

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages by replacing ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/``.
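
If the installation succeeded, the tool should now be on your ``PATH``; a
quick sanity check::

   ceph-deploy --version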
For CentOS 7, perform the following steps:

#. On Red Hat Enterprise Linux 7, register the target machine with
   ``subscription-manager``, verify your subscriptions, and enable the
   "Extras" repository for package dependencies. For example::

      sudo subscription-manager repos --enable=rhel-7-server-extras-rpms

#. Install and enable the Extra Packages for Enterprise Linux (EPEL)
   repository::

      sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

   Please see the `EPEL wiki`_ page for more information.

#. Add the Ceph repository to your yum configuration file at
   ``/etc/yum.repos.d/ceph.repo`` with the following command. Replace
   ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
   ``luminous``). For example::

      cat << EOM > /etc/yum.repos.d/ceph.repo
      [ceph-noarch]
      name=Ceph noarch packages
      baseurl=https://download.ceph.com/rpm-{ceph-stable-release}/el7/noarch
      enabled=1
      gpgcheck=1
      type=rpm-md
      gpgkey=https://download.ceph.com/keys/release.asc
      EOM

#. You may need to install ``python-setuptools``, which is required by
   ``ceph-deploy``::

      sudo yum install python-setuptools

#. Update your repository and install ``ceph-deploy``::

      sudo yum update
      sudo yum install ceph-deploy

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages by replacing ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/``.
The Ceph project does not currently publish release RPMs for openSUSE, but
a stable version of Ceph is included in the default update repository, so
installing it is just a matter of::

   sudo zypper install ceph
   sudo zypper install ceph-deploy

If the distro version is out-of-date, open a bug at
https://bugzilla.opensuse.org/index.cgi and possibly try your luck with one of
the following repositories:

https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ahammer&package=ceph

https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ajewel&package=ceph
The admin node must have password-less SSH access to Ceph nodes.
When ``ceph-deploy`` logs in to a Ceph node as a user, that particular
user must have passwordless ``sudo`` privileges.
We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to
prevent issues arising from clock drift. See `Clock`_ for details.

On CentOS / RHEL, execute::

   sudo yum install ntp ntpdate ntp-doc

On Debian / Ubuntu, execute::

   sudo apt install ntpsec

or::

   sudo apt install chrony

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
same NTP time server. See `NTP`_ for details.
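
How you enable the time service depends on which daemon you installed. On a
systemd-based distribution, the commands below are a typical sketch (the unit
name ``ntpd`` is an assumption for CentOS/RHEL; on Debian/Ubuntu it may be
``ntpsec`` or ``chronyd``)::

   # Enable the time daemon so it starts on boot, then start it now
   sudo systemctl enable ntpd
   sudo systemctl start ntpd

   # Confirm that the daemon is active
   sudo systemctl status ntpd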
For **ALL** Ceph Nodes perform the following steps:

#. Install an SSH server (if necessary) on each Ceph Node::

      sudo apt install openssh-server

   or::

      sudo yum install openssh-server

#. Ensure the SSH server is running on **ALL** Ceph Nodes.
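
   One way to verify, assuming a systemd-based distribution (the unit name is
   ``sshd`` on CentOS/RHEL and ``ssh`` on Debian/Ubuntu)::

      sudo systemctl status sshd   # CentOS / RHEL
      sudo systemctl status ssh    # Debian / Ubuntu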
Create a Ceph Deploy User
-------------------------
The ``ceph-deploy`` utility must log in to a Ceph node as a user
that has passwordless ``sudo`` privileges, because it needs to install
software and configuration files without prompting for passwords.

Recent versions of ``ceph-deploy`` support a ``--username`` option so you can
specify any user that has password-less ``sudo`` (including ``root``, although
this is **NOT** recommended). To use ``ceph-deploy --username {username}``, the
user you specify must have password-less SSH access to the Ceph node, as
``ceph-deploy`` will not prompt you for a password.

We recommend creating a specific user for ``ceph-deploy`` on **ALL** Ceph nodes
in the cluster. Please do **NOT** use "ceph" as the user name. A uniform user
name across the cluster may improve ease of use (not required), but you should
avoid obvious user names, because attackers typically target them with
brute-force attacks (e.g., ``root``, ``admin``, ``{productname}``). The
following procedure, substituting ``{username}`` for the user name you define,
describes how to create a user with passwordless ``sudo``.
.. note:: Starting with the :ref:`Infernalis release <infernalis-release-notes>`,
   the "ceph" user name is reserved for the Ceph daemons. If the "ceph" user
   already exists on the Ceph nodes, remove it before attempting an upgrade.
#. Create a new user on each Ceph Node. ::

      sudo useradd -d /home/{username} -m {username}
      sudo passwd {username}

#. For the new user you added to each Ceph node, ensure that the user has
   ``sudo`` privileges. ::

      echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
      sudo chmod 0440 /etc/sudoers.d/{username}
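
   To double-check that the drop-in took effect, you can list the new user's
   ``sudo`` rules; the ``NOPASSWD:ALL`` rule should appear in the output::

      sudo -l -U {username}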
Enable Password-less SSH
------------------------
Since ``ceph-deploy`` will not prompt for a password, you must generate
SSH keys on the admin node and distribute the public key to each Ceph
node. ``ceph-deploy`` will attempt to generate the SSH keys for initial
monitors.

#. Generate the SSH keys, but do not use ``sudo`` or the
   ``root`` user. Leave the passphrase empty::

      ssh-keygen

      Generating public/private key pair.
      Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      Your identification has been saved in /ceph-admin/.ssh/id_rsa.
      Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
#. Copy the key to each Ceph Node, replacing ``{username}`` with the user name
   you created with `Create a Ceph Deploy User`_. ::

      ssh-copy-id {username}@node1
      ssh-copy-id {username}@node2
      ssh-copy-id {username}@node3
#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
   admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
   created without requiring you to specify ``--username {username}`` each
   time you execute ``ceph-deploy``. This has the added benefit of streamlining
   ``ssh`` and ``scp`` usage. Replace ``{username}`` with the user name you
   created.
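
   A sketch of such a ``~/.ssh/config``, assuming the ``node1`` through
   ``node3`` hostnames used above::

      Host node1
         Hostname node1
         User {username}
      Host node2
         Hostname node2
         User {username}
      Host node3
         Hostname node3
         User {username}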
Enable Networking On Bootup
---------------------------
Ceph OSDs peer with each other and report to Ceph Monitors over the network.
If networking is ``off`` by default, the Ceph cluster cannot come online
during bootup until you enable networking.

The default configuration on some distributions (e.g., CentOS) has the
networking interface(s) off by default. Ensure that your network interfaces
come up during boot so that your Ceph daemons can communicate over the
network. For example, on Red Hat and CentOS, navigate to
``/etc/sysconfig/network-scripts`` and ensure that the ``ifcfg-{iface}`` file
has ``ONBOOT`` set to ``yes``.
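
For example, the relevant line in a minimal ``ifcfg`` sketch (the interface
name ``eth0`` is an example; yours will differ)::

   # /etc/sysconfig/network-scripts/ifcfg-eth0
   ONBOOT=yes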
Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
Address hostname resolution issues as necessary.

.. note:: Hostnames should resolve to a network IP address, not to the
   loopback IP address (e.g., hostnames should resolve to an IP address other
   than ``127.0.0.1``). If you use your admin node as a Ceph node, you
   should also ensure that it resolves to its hostname and IP address
   (i.e., not its loopback IP address).
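
One way to check resolution from the admin node (``node1`` as an example
hostname)::

   # Should show a network IP address, not 127.0.0.1
   getent hosts node1

   # Confirm reachability by short hostname
   ping -c 1 node1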
Ceph Monitors communicate using port ``6789`` by default. Ceph OSDs communicate
in a port range of ``6800:7300`` by default. See the `Network Configuration
Reference`_ for details. Ceph OSDs can use multiple network connections to
communicate with clients, monitors, other OSDs for replication, and other OSDs
for heartbeat traffic.
On some distributions (e.g., RHEL), the default firewall configuration is fairly
strict. You may need to adjust your firewall settings to allow inbound requests
so that clients in your network can communicate with daemons on your Ceph nodes.
For ``firewalld`` on RHEL 7, add the ``ceph-mon`` service for Ceph Monitor
nodes and the ``ceph`` service for Ceph OSDs and MDSs to the public zone and
ensure that you make the settings permanent so that they are enabled on reboot.

For example, on monitors::

   sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent

and on OSDs and MDSs::

   sudo firewall-cmd --zone=public --add-service=ceph --permanent

Once you have finished configuring firewalld with the ``--permanent`` flag, you
can make the changes live immediately without rebooting::

   sudo firewall-cmd --reload
For ``iptables``, add port ``6789`` for Ceph Monitors and ports ``6800:7300``
for Ceph OSDs. For example::

   sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT

Once you have finished configuring ``iptables``, ensure that you make the
changes persistent on each node so that they will be in effect when your nodes
reboot. For example::

   /sbin/service iptables save
On CentOS and RHEL, you may receive an error while trying to execute
``ceph-deploy`` commands. If ``requiretty`` is set by default on your Ceph
nodes, disable it by executing ``sudo visudo`` and locate the ``Defaults
requiretty`` setting. Change it to ``Defaults:ceph !requiretty`` or comment it
out to ensure that ``ceph-deploy`` can connect using the user you created with
`Create a Ceph Deploy User`_.

.. note:: If editing ``/etc/sudoers``, ensure that you use
   ``sudo visudo`` rather than a text editor.
On CentOS and RHEL, SELinux is set to ``Enforcing`` by default. To streamline
your installation, we recommend setting SELinux to ``Permissive`` or disabling
it entirely and ensuring that your installation and cluster are working properly
before hardening your configuration. To set SELinux to ``Permissive``, execute
``sudo setenforce 0``.

To configure SELinux persistently (recommended if SELinux is an issue), modify
the configuration file at ``/etc/selinux/config``.
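
For example, the relevant line in ``/etc/selinux/config`` would read::

   SELINUX=permissive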
Priorities/Preferences
----------------------

Ensure that your package manager has priority/preferences packages installed and
enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to
enable optional repositories. ::

   sudo yum install yum-plugin-priorities

For example, on RHEL 7 server, execute the following to install
``yum-plugin-priorities`` and enable the ``rhel-7-server-optional-rpms``
repository::

   sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms
This completes the Quick Start Preflight. Proceed to the `Storage Cluster
Quick Start`_.
.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _OS Recommendations: ../os-recommendations
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Clock: ../../rados/configuration/mon-config-ref#clock
.. _NTP: http://www.ntp.org/
.. _Infernalis release: ../../release-notes/#v9-1-0-infernalis-release-candidate
.. _EPEL wiki: https://fedoraproject.org/wiki/EPEL