=====================
 Preflight Checklist
=====================

The ``ceph-deploy`` tool operates out of a directory on an admin
:term:`node`. Any host with network connectivity, a modern Python
environment, and SSH (such as a Linux host) should work.

In the descriptions below, :term:`Node` refers to a single machine.

.. include:: quick-common.rst


Ceph-deploy Setup
=================

Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
``ceph-deploy``.

Debian/Ubuntu
-------------

For Debian and Ubuntu distributions, perform the following steps:

#. Add the release key::

      wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#. Add the Ceph packages to your repository::

      echo deb https://download.ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

   The above URL contains the latest stable release of Ceph. If you
   would like to select a specific release, use the command below and
   replace ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
   ``luminous``). For example::

      echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

#. Update your repository and install ``ceph-deploy``::

      sudo apt update
      sudo apt install ceph-deploy

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages by replacing ``https://download.ceph.com/`` with ``http://eu.ceph.com/``.
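
For example, assuming the mirror keeps the same ``/debian/`` path layout, the
repository entry above would become::

    echo deb http://eu.ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list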
7c673cae
FG
46
47
48RHEL/CentOS
49-----------
50
51For CentOS 7, perform the following steps:
52
224ce89b
WB
53#. On Red Hat Enterprise Linux 7, register the target machine with
54 ``subscription-manager``, verify your subscriptions, and enable the
55 "Extras" repository for package dependencies. For example::
7c673cae
FG
56
57 sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
58
59#. Install and enable the Extra Packages for Enterprise Linux (EPEL)
60 repository::
61
62 sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
63
64 Please see the `EPEL wiki`_ page for more information.
65
224ce89b 66#. Add the Ceph repository to your yum configuration file at ``/etc/yum.repos.d/ceph.repo`` with the following command::
7c673cae 67
224ce89b
WB
68 cat >/etc/yum.repos.d/ceph.repro
69 [ceph-noarch]
70 name=Ceph noarch packages
71 baseurl=https://download.ceph.com/rpm/el7/noarch
72 enabled=1
73 gpgcheck=1
74 type=rpm-md
75 gpgkey=https://download.ceph.com/keys/release.asc
7c673cae 76
224ce89b 77 and then this *Control-D*. This will use the latest stable Ceph release. If you would like to install a different release, replace ``https://download.ceph.com/rpm/el7/noarch`` with ``https://download.ceph.com/rpm-{ceph-release}/el7/noarch`` where ``{ceph-release}`` is a release name like ``luminous``.

#. Update your repository and install ``ceph-deploy``::

      sudo yum update
      sudo yum install ceph-deploy
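
Instead of typing the repository file at the interactive ``cat`` prompt shown in
step 3, you could also write it non-interactively with ``sudo tee`` and a
here-document, for example (a sketch; adjust the ``baseurl`` as described above)::

    cat <<EOF | sudo tee /etc/yum.repos.d/ceph.repo
    [ceph-noarch]
    name=Ceph noarch packages
    baseurl=https://download.ceph.com/rpm/el7/noarch
    enabled=1
    gpgcheck=1
    type=rpm-md
    gpgkey=https://download.ceph.com/keys/release.asc
    EOF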

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages by replacing ``https://download.ceph.com/`` with ``http://eu.ceph.com/``.
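
For the yum repository above, that means pointing the ``baseurl`` (and, if you
wish, the ``gpgkey``) at the mirror, for example (assuming the mirror keeps the
same ``/rpm/`` path layout)::

    baseurl=http://eu.ceph.com/rpm/el7/noarch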


openSUSE
--------

The Ceph project does not currently publish release RPMs for openSUSE, but
a stable version of Ceph is included in the default update repository, so
installing it is just a matter of::

    sudo zypper install ceph
    sudo zypper install ceph-deploy

If the distro version is out-of-date, open a bug at
https://bugzilla.opensuse.org/index.cgi and possibly try your luck with one of
the following repositories:

#. Hammer::

      https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ahammer&package=ceph

#. Jewel::

      https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ajewel&package=ceph


Ceph Node Setup
===============

The admin node must have password-less SSH access to Ceph nodes.
When ``ceph-deploy`` logs in to a Ceph node as a user, that particular
user must have passwordless ``sudo`` privileges.


Install NTP
-----------

We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to
prevent issues arising from clock drift. See `Clock`_ for details.

On CentOS / RHEL, execute::

    sudo yum install ntp ntpdate ntp-doc

On Debian / Ubuntu, execute::

    sudo apt install ntp

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
same NTP time server. See `NTP`_ for details.
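
For example, on distributions that use ``systemd`` you can typically enable and
start the service with commands like the following (the service is named
``ntpd`` on CentOS/RHEL and ``ntp`` on Debian/Ubuntu)::

    sudo systemctl enable ntpd
    sudo systemctl start ntpd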


Install SSH Server
------------------

For **ALL** Ceph Nodes, perform the following steps:

#. Install an SSH server (if necessary) on each Ceph Node::

      sudo apt install openssh-server

   or::

      sudo yum install openssh-server


#. Ensure the SSH server is running on **ALL** Ceph Nodes.
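
For example, on nodes that use ``systemd`` you can typically check this with a
command like the following (the service is usually named ``sshd`` on
CentOS/RHEL and ``ssh`` on Debian/Ubuntu)::

    sudo systemctl status sshd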


Create a Ceph Deploy User
-------------------------

The ``ceph-deploy`` utility must log in to a Ceph node as a user
that has passwordless ``sudo`` privileges, because it needs to install
software and configuration files without prompting for passwords.

Recent versions of ``ceph-deploy`` support a ``--username`` option so you can
specify any user that has password-less ``sudo`` (including ``root``, although
this is **NOT** recommended). To use ``ceph-deploy --username {username}``, the
user you specify must have password-less SSH access to the Ceph node, as
``ceph-deploy`` will not prompt you for a password.

We recommend creating a specific user for ``ceph-deploy`` on **ALL** Ceph nodes
in the cluster. Please do **NOT** use "ceph" as the user name. A uniform user
name across the cluster may improve ease of use (not required), but you should
avoid obvious user names, because attackers typically try them in brute-force
attacks (e.g., ``root``, ``admin``, ``{productname}``). The following procedure,
substituting ``{username}`` for the user name you define, describes how to
create a user with passwordless ``sudo``.

.. note:: Starting with the `Infernalis release`_, the "ceph" user name is reserved
   for the Ceph daemons. If the "ceph" user already exists on the Ceph nodes,
   removing the user must be done before attempting an upgrade.
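
As a sketch, if a leftover ``ceph`` login user exists on a node and is no longer
needed, it can usually be removed together with its home directory like so::

    sudo userdel -r ceph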

#. Create a new user on each Ceph Node. ::

      ssh user@ceph-server
      sudo useradd -d /home/{username} -m {username}
      sudo passwd {username}

#. For the new user you added to each Ceph node, ensure that the user has
   ``sudo`` privileges. ::

      echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
      sudo chmod 0440 /etc/sudoers.d/{username}


Enable Password-less SSH
------------------------

Since ``ceph-deploy`` will not prompt for a password, you must generate
SSH keys on the admin node and distribute the public key to each Ceph
node. ``ceph-deploy`` will attempt to generate the SSH keys for initial
monitors.

#. Generate the SSH keys, but do not use ``sudo`` or the
   ``root`` user. Leave the passphrase empty::

      ssh-keygen

      Generating public/private key pair.
      Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      Your identification has been saved in /ceph-admin/.ssh/id_rsa.
      Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

#. Copy the key to each Ceph Node, replacing ``{username}`` with the user name
   you created with `Create a Ceph Deploy User`_. ::

      ssh-copy-id {username}@node1
      ssh-copy-id {username}@node2
      ssh-copy-id {username}@node3

#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
   admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
   created without requiring you to specify ``--username {username}`` each
   time you execute ``ceph-deploy``. This has the added benefit of streamlining
   ``ssh`` and ``scp`` usage. Replace ``{username}`` with the user name you
   created::

      Host node1
         Hostname node1
         User {username}
      Host node2
         Hostname node2
         User {username}
      Host node3
         Hostname node3
         User {username}


Enable Networking On Bootup
---------------------------

Ceph OSDs peer with each other and report to Ceph Monitors over the network.
If networking is ``off`` by default, the Ceph cluster cannot come online
during bootup until you enable networking.

On some distributions (e.g., CentOS), the default configuration leaves the
networking interface(s) off. Ensure that, during boot up, your network
interface(s) turn(s) on so that your Ceph daemons can communicate over
the network. For example, on Red Hat and CentOS, navigate to
``/etc/sysconfig/network-scripts`` and ensure that the ``ifcfg-{iface}`` file
has ``ONBOOT`` set to ``yes``.
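
For example, a quick way to check the current setting for a hypothetical
interface named ``eth0`` would be::

    grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-eth0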


Ensure Connectivity
-------------------

Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
Address hostname resolution issues as necessary.
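
For example, from the admin node you might check each node like this, where
``node1``, ``node2`` and ``node3`` are the example hostnames used in this
guide::

    ping -c 3 node1
    ping -c 3 node2
    ping -c 3 node3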

.. note:: Hostnames should resolve to a network IP address, not to the
   loopback IP address (e.g., hostnames should resolve to an IP address other
   than ``127.0.0.1``). If you use your admin node as a Ceph node, you
   should also ensure that it resolves to its hostname and IP address
   (i.e., not its loopback IP address).
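
One way to verify this on a node is to look up its own short hostname and
confirm that the answer is not ``127.0.0.1`` (a sketch; ``getent`` consults the
same resolver configuration that most applications use)::

    getent hosts $(hostname -s)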


Open Required Ports
-------------------

Ceph Monitors communicate using port ``6789`` by default. Ceph OSDs communicate
in a port range of ``6800:7300`` by default. See the `Network Configuration
Reference`_ for details. Ceph OSDs can use multiple network connections to
communicate with clients, monitors, other OSDs for replication, and other OSDs
for heartbeats.

On some distributions (e.g., RHEL), the default firewall configuration is fairly
strict. You may need to adjust your firewall settings to allow inbound requests so
that clients in your network can communicate with daemons on your Ceph nodes.

For ``firewalld`` on RHEL 7, add the ``ceph-mon`` service for Ceph Monitor
nodes and the ``ceph`` service for Ceph OSDs and MDSs to the public zone and
ensure that you make the settings permanent so that they are enabled on reboot.

For example, on monitors::

    sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent

and on OSDs and MDSs::

    sudo firewall-cmd --zone=public --add-service=ceph --permanent

Once you have finished configuring ``firewalld`` with the ``--permanent`` flag,
you can make the changes live immediately without rebooting::

    sudo firewall-cmd --reload

For ``iptables``, add port ``6789`` for Ceph Monitors and ports ``6800:7300``
for Ceph OSDs. For example::

    sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
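
A corresponding rule for the OSD/MDS port range might look like the following
(again just a sketch; substitute your interface and source network as above)::

    sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6800:7300 -j ACCEPT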

Once you have finished configuring ``iptables``, ensure that you make the
changes persistent on each node so that they will be in effect when your nodes
reboot. For example::

    /sbin/service iptables save

TTY
---

On CentOS and RHEL, you may receive an error while trying to execute
``ceph-deploy`` commands. If ``requiretty`` is set by default on your Ceph
nodes, disable it by executing ``sudo visudo`` and locating the ``Defaults
requiretty`` setting. Change it to ``Defaults:ceph !requiretty`` or comment it
out to ensure that ``ceph-deploy`` can connect using the user you created with
`Create a Ceph Deploy User`_.

.. note:: If editing ``/etc/sudoers``, ensure that you use
   ``sudo visudo`` rather than a text editor.


SELinux
-------

On CentOS and RHEL, SELinux is set to ``Enforcing`` by default. To streamline your
installation, we recommend setting SELinux to ``Permissive`` or disabling it
entirely and ensuring that your installation and cluster are working properly
before hardening your configuration. To set SELinux to ``Permissive``, execute the
following::

    sudo setenforce 0

To configure SELinux persistently (recommended if SELinux is an issue), modify
the configuration file at ``/etc/selinux/config``.
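
For example, one way to make the permissive mode persistent is to adjust the
``SELINUX=`` line in that file, e.g. with ``sed`` (a sketch; review the file
afterwards)::

    sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config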


Priorities/Preferences
----------------------

Ensure that your package manager has priority/preferences packages installed and
enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to
enable optional repositories. ::

    sudo yum install yum-plugin-priorities

For example, on RHEL 7 server, execute the following to install
``yum-plugin-priorities`` and enable the ``rhel-7-server-optional-rpms``
repository::

    sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms


Summary
=======

This completes the Quick Start Preflight. Proceed to the `Storage Cluster
Quick Start`_.

.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _OS Recommendations: ../os-recommendations
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Clock: ../../rados/configuration/mon-config-ref#clock
.. _NTP: http://www.ntp.org/
.. _Infernalis release: ../../release-notes/#v9-1-0-infernalis-release-candidate
.. _EPEL wiki: https://fedoraproject.org/wiki/EPEL