The ``ceph-deploy`` tool operates out of a directory on an admin
:term:`node`. Any host with network connectivity, a modern Python
environment, and SSH (such as a Linux host) should work.

In the descriptions below, :term:`Node` refers to a single machine.
.. include:: quick-common.rst
Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
``ceph-deploy``.
For Debian and Ubuntu distributions, perform the following steps:

#. Add the release key::

      wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#. Add the Ceph packages to your repository. Use the command below and
   replace ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
   ``luminous``). For example::

      echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

#. Update your repository and install ``ceph-deploy``::

      sudo apt update
      sudo apt install ceph-deploy

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages, by replacing ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/``.
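Once a release is chosen, the ``deb`` line above expands to a concrete
entry. A minimal sketch with hypothetical values (``luminous`` release,
``xenial`` codename; on a real host, take the codename from
``lsb_release -sc``):

```shell
# Build the apt source line for a chosen Ceph release.
# "luminous" and "xenial" are illustrative values, not a recommendation.
release=luminous
codename=xenial          # on a real host: codename=$(lsb_release -sc)
repo_line="deb https://download.ceph.com/debian-${release}/ ${codename} main"
echo "$repo_line"
# On a real admin node you would then write it out with:
#   echo "$repo_line" | sudo tee /etc/apt/sources.list.d/ceph.list
```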
For CentOS 7, perform the following steps:

#. On Red Hat Enterprise Linux 7, register the target machine with
   ``subscription-manager``, verify your subscriptions, and enable the
   "Extras" repository for package dependencies. For example::

      sudo subscription-manager repos --enable=rhel-7-server-extras-rpms

#. Install and enable the Extra Packages for Enterprise Linux (EPEL)
   repository::

      sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

   Please see the `EPEL wiki`_ page for more information.

#. Add the Ceph repository to your yum configuration file at
   ``/etc/yum.repos.d/ceph.repo`` with the following command. Replace
   ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
   ``luminous``). For example::

      cat << EOM > /etc/yum.repos.d/ceph.repo
      [ceph-noarch]
      name=Ceph noarch packages
      baseurl=https://download.ceph.com/rpm-{ceph-stable-release}/el7/noarch
      enabled=1
      gpgcheck=1
      type=rpm-md
      gpgkey=https://download.ceph.com/keys/release.asc
      EOM

#. Update your repository and install ``ceph-deploy``::

      sudo yum update
      sudo yum install ceph-deploy

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages, by replacing ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/``.
The Ceph project does not currently publish release RPMs for openSUSE, but
a stable version of Ceph is included in the default update repository, so
installing it is just a matter of::

   sudo zypper install ceph
   sudo zypper install ceph-deploy

If the distro version is out-of-date, open a bug at
https://bugzilla.opensuse.org/index.cgi and possibly try your luck with one of
the following repositories:

* Hammer: https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ahammer&package=ceph
* Jewel: https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ajewel&package=ceph
The admin node must have password-less SSH access to Ceph nodes.
When ``ceph-deploy`` logs in to a Ceph node as a particular user, that
user must have passwordless ``sudo`` privileges.
We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to
prevent issues arising from clock drift. See `Clock`_ for details.

On CentOS / RHEL, execute::

   sudo yum install ntp ntpdate ntp-doc

On Debian / Ubuntu, execute::

   sudo apt install ntp

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
same NTP time server. See `NTP`_ for details.
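To verify that each node points at the same time server, you can compare the
``server`` entries of the NTP configuration across nodes. A sketch that
extracts them, using a temporary sample file in place of a real
``/etc/ntp.conf``:

```shell
# Extract the configured NTP servers from an ntp.conf-style file so the
# lists can be compared across nodes. A temporary sample file stands in
# for /etc/ntp.conf here.
conf=$(mktemp)
printf 'driftfile /var/lib/ntp/drift\nserver 0.pool.ntp.org iburst\nserver 1.pool.ntp.org iburst\n' > "$conf"
servers=$(awk '/^server/ {print $2}' "$conf")
echo "$servers"
```

Running the same extraction on every node and diffing the results quickly
reveals a node configured against a different time source.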
For **ALL** Ceph Nodes perform the following steps:

#. Install an SSH server (if necessary) on each Ceph Node::

      sudo apt install openssh-server

   or::

      sudo yum install openssh-server

#. Ensure the SSH server is running on **ALL** Ceph Nodes.
Create a Ceph Deploy User
-------------------------
The ``ceph-deploy`` utility must log in to a Ceph node as a user
that has passwordless ``sudo`` privileges, because it needs to install
software and configuration files without prompting for passwords.
Recent versions of ``ceph-deploy`` support a ``--username`` option so you can
specify any user that has password-less ``sudo`` (including ``root``, although
this is **NOT** recommended). To use ``ceph-deploy --username {username}``, the
user you specify must have password-less SSH access to the Ceph node, as
``ceph-deploy`` will not prompt you for a password.
We recommend creating a specific user for ``ceph-deploy`` on **ALL** Ceph nodes
in the cluster. Please do **NOT** use "ceph" as the user name. A uniform user
name across the cluster may improve ease of use (not required), but you should
avoid obvious user names, because they are common targets of brute-force
attacks (e.g., ``root``, ``admin``, ``{productname}``). The following procedure,
substituting ``{username}`` for the user name you define, describes how to
create a user with passwordless ``sudo``.
.. note:: Starting with the :ref:`Infernalis release <infernalis-release-notes>`,
   the "ceph" user name is reserved for the Ceph daemons. If the "ceph" user
   already exists on the Ceph nodes, removing the user must be done before
   attempting an upgrade.
#. Create a new user on each Ceph Node::

      sudo useradd -d /home/{username} -m {username}
      sudo passwd {username}
#. For the new user you added to each Ceph node, ensure that the user has
   ``sudo`` privileges::

      echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
      sudo chmod 0440 /etc/sudoers.d/{username}
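When provisioning several nodes, the sudoers entry above can be generated by a
small helper. A sketch assuming a hypothetical ``cephdeploy`` user name; the
privileged ``tee`` and ``chmod`` steps are shown only as comments because they
require root:

```shell
# Build the sudoers entry for a given user, in the same format as the
# tee command above. "cephdeploy" is a hypothetical example user.
make_sudoers_entry() {
    printf '%s ALL = (root) NOPASSWD:ALL\n' "$1"
}

entry=$(make_sudoers_entry cephdeploy)
echo "$entry"
# On a real node you would then run:
#   make_sudoers_entry cephdeploy | sudo tee /etc/sudoers.d/cephdeploy
#   sudo chmod 0440 /etc/sudoers.d/cephdeploy
```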
Enable Password-less SSH
------------------------
Since ``ceph-deploy`` will not prompt for a password, you must generate
SSH keys on the admin node and distribute the public key to each Ceph
node. ``ceph-deploy`` will attempt to generate the SSH keys for initial
monitors.
#. Generate the SSH keys, but do not use ``sudo`` or the
   ``root`` user. Leave the passphrase empty::

      ssh-keygen

      Generating public/private key pair.
      Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      Your identification has been saved in /ceph-admin/.ssh/id_rsa.
      Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
#. Copy the key to each Ceph Node, replacing ``{username}`` with the user name
   you created with `Create a Ceph Deploy User`_. ::

      ssh-copy-id {username}@node1
      ssh-copy-id {username}@node2
      ssh-copy-id {username}@node3
#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
   admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
   created without requiring you to specify ``--username {username}`` each
   time you execute ``ceph-deploy``. This has the added benefit of streamlining
   ``ssh`` and ``scp`` usage. Replace ``{username}`` with the user name you
   created::

      Host node1
         Hostname node1
         User {username}
      Host node2
         Hostname node2
         User {username}
      Host node3
         Hostname node3
         User {username}
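With more than a few nodes, the key distribution above lends itself to a loop.
This sketch only builds and prints the commands, keeping the ``{username}``
placeholder; substitute the real user and drop the accumulation to call
``ssh-copy-id`` directly in the loop body:

```shell
# Print (rather than run) one ssh-copy-id command per node.
# The node names mirror the example above; {username} is the placeholder
# used throughout this document.
cmds=""
for node in node1 node2 node3; do
    cmds="${cmds}ssh-copy-id {username}@${node}
"
done
printf '%s' "$cmds"
```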
Enable Networking On Bootup
---------------------------
Ceph OSDs peer with each other and report to Ceph Monitors over the network.
If networking is ``off`` by default, the Ceph cluster cannot come online
during bootup until you enable networking.
The default configuration on some distributions (e.g., CentOS) has the
networking interface(s) off by default. Ensure that your network interface(s)
come up during boot so that your Ceph daemons can communicate over the
network. For example, on Red Hat and CentOS, navigate to
``/etc/sysconfig/network-scripts`` and ensure that the ``ifcfg-{iface}`` file
has ``ONBOOT`` set to ``yes``.
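To spot interfaces that will not come up at boot, you can search the ``ifcfg``
files for entries lacking ``ONBOOT=yes``. A sketch using a temporary sample
directory in place of the real ``/etc/sysconfig/network-scripts``:

```shell
# List RH-style ifcfg files that do NOT contain ONBOOT=yes, i.e. the
# interfaces that need attention. A temp directory with sample files
# stands in for /etc/sysconfig/network-scripts.
dir=$(mktemp -d)
printf 'DEVICE=eth0\nONBOOT=no\n'  > "$dir/ifcfg-eth0"
printf 'DEVICE=eth1\nONBOOT=yes\n' > "$dir/ifcfg-eth1"
needs_fix=$(grep -L 'ONBOOT=yes' "$dir"/ifcfg-*)   # -L: files WITHOUT a match
echo "$needs_fix"
```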
Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
Address hostname resolution issues as necessary.
.. note:: Hostnames should resolve to a network IP address, not to the
   loopback IP address (e.g., hostnames should resolve to an IP address other
   than ``127.0.0.1``). If you use your admin node as a Ceph node, you
   should also ensure that it resolves to its hostname and IP address
   (i.e., not its loopback IP address).
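The loopback caveat above can be checked mechanically. This sketch classifies
a resolved address, using a hypothetical value where a real node would obtain
one from ``getent hosts $(hostname -s)``:

```shell
# Flag a resolved address that falls in the 127.0.0.0/8 loopback range.
# "127.0.1.1" is a hypothetical value for illustration; on a real node:
#   resolved=$(getent hosts "$(hostname -s)" | awk '{print $1}')
resolved="127.0.1.1"
case "$resolved" in
    127.*) verdict="loopback - fix /etc/hosts or DNS" ;;
    *)     verdict="ok" ;;
esac
echo "$verdict"
```

The ``127.0.1.1`` pattern is common on Debian-derived systems, where the
installer maps the hostname to that address in ``/etc/hosts``.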
Ceph Monitors communicate using port ``6789`` by default. Ceph OSDs communicate
in a port range of ``6800:7300`` by default. See the `Network Configuration
Reference`_ for details. Ceph OSDs can use multiple network connections to
communicate with clients, monitors, other OSDs for replication, and other OSDs
for heartbeats.
On some distributions (e.g., RHEL), the default firewall configuration is fairly
strict. You may need to adjust your firewall settings to allow inbound requests
so that clients in your network can communicate with daemons on your Ceph nodes.
For ``firewalld`` on RHEL 7, add the ``ceph-mon`` service for Ceph Monitor
nodes and the ``ceph`` service for Ceph OSDs and MDSs to the public zone and
ensure that you make the settings permanent so that they are enabled on reboot.

For example, on monitors::

   sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent

and on OSDs and MDSs::

   sudo firewall-cmd --zone=public --add-service=ceph --permanent

Once you have finished configuring firewalld with the ``--permanent`` flag,
you can make the changes live immediately without rebooting::

   sudo firewall-cmd --reload
For ``iptables``, add port ``6789`` for Ceph Monitors and ports ``6800:7300``
for Ceph OSDs. For example::

   sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT

Once you have finished configuring ``iptables``, ensure that you make the
changes persistent on each node so that they will be in effect when your nodes
reboot. For example::

   /sbin/service iptables save
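A matching rule is needed for the OSD port range as well. This sketch emits
both ``iptables`` rules, keeping the ``{iface}`` and ``{ip-address}/{netmask}``
placeholders from the example above, so the output can be reviewed before
running it with ``sudo`` on each node:

```shell
# Emit one ACCEPT rule for the monitor port and one for the OSD port
# range. The rules are printed for review, not executed.
rules=""
for ports in 6789 6800:7300; do
    rules="${rules}iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport ${ports} -j ACCEPT
"
done
printf '%s' "$rules"
```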
On CentOS and RHEL, you may receive an error while trying to execute
``ceph-deploy`` commands. If ``requiretty`` is set by default on your Ceph
nodes, disable it by executing ``sudo visudo`` and locating the ``Defaults
requiretty`` setting. Change it to ``Defaults:{username} !requiretty`` or
comment it out to ensure that ``ceph-deploy`` can connect using the user you
created with `Create a Ceph Deploy User`_.
.. note:: If editing ``/etc/sudoers``, ensure that you use
   ``sudo visudo`` rather than a text editor.
On CentOS and RHEL, SELinux is set to ``Enforcing`` by default. To streamline
your installation, we recommend setting SELinux to ``Permissive`` or disabling
it entirely and ensuring that your installation and cluster are working
properly before hardening your configuration. To set SELinux to ``Permissive``,
execute the following::

   sudo setenforce 0

To configure SELinux persistently (recommended if SELinux is an issue), modify
the configuration file at ``/etc/selinux/config``.
Priorities/Preferences
----------------------
Ensure that your package manager has priority/preferences packages installed
and enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to
enable optional repositories. ::

   sudo yum install yum-plugin-priorities

For example, on RHEL 7 server, execute the following to install
``yum-plugin-priorities`` and enable the ``rhel-7-server-optional-rpms``
repository::

   sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms
This completes the Quick Start Preflight. Proceed to the `Storage Cluster
Quick Start`_.
.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _OS Recommendations: ../os-recommendations
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Clock: ../../rados/configuration/mon-config-ref#clock
.. _NTP: http://www.ntp.org/
.. _Infernalis release: ../../release-notes/#v9-1-0-infernalis-release-candidate
.. _EPEL wiki: https://fedoraproject.org/wiki/EPEL