=====================
 Preflight Checklist
=====================

The ``ceph-deploy`` tool operates out of a directory on an admin
:term:`node`. Any host with network connectivity, a modern Python
environment, and SSH (such as a Linux host) should work.

In the descriptions below, :term:`Node` refers to a single machine.

.. include:: quick-common.rst


Ceph-deploy Setup
=================

Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
``ceph-deploy``.

Debian/Ubuntu
-------------

For Debian and Ubuntu distributions, perform the following steps:

#. Add the release key::

      wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#. Add the Ceph packages to your repository. Use the command below and
   replace ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
   ``luminous``). For example::

      echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

#. Update your repository and install ``ceph-deploy``::

      sudo apt update
      sudo apt install ceph-deploy
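
As a concrete illustration (assuming the ``luminous`` release on Ubuntu
``bionic``), the second step would leave the following line in
``/etc/apt/sources.list.d/ceph.list``::

   deb https://download.ceph.com/debian-luminous/ bionic main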

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages by replacing ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/``.


RHEL/CentOS
-----------

For RHEL/CentOS 7, perform the following steps:

#. On Red Hat Enterprise Linux 7, register the target machine with
   ``subscription-manager``, verify your subscriptions, and enable the
   "Extras" repository for package dependencies. For example::

      sudo subscription-manager repos --enable=rhel-7-server-extras-rpms

#. Install and enable the Extra Packages for Enterprise Linux (EPEL)
   repository::

      sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

   Please see the `EPEL wiki`_ page for more information.

#. Add the Ceph repository to your yum configuration file at
   ``/etc/yum.repos.d/ceph.repo`` with the following command. Replace
   ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
   ``luminous``). For example::

      cat << EOM | sudo tee /etc/yum.repos.d/ceph.repo
      [ceph-noarch]
      name=Ceph noarch packages
      baseurl=https://download.ceph.com/rpm-{ceph-stable-release}/el7/noarch
      enabled=1
      gpgcheck=1
      type=rpm-md
      gpgkey=https://download.ceph.com/keys/release.asc
      EOM

#. You may need to install the Python ``setuptools`` package, which
   ``ceph-deploy`` requires::

      sudo yum install python-setuptools

#. Update your repository and install ``ceph-deploy``::

      sudo yum update
      sudo yum install ceph-deploy
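
To confirm that the tool installed correctly and is on your ``PATH``, you
can print its version (this works on any of the distributions above)::

   ceph-deploy --version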

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages by replacing ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/``.


openSUSE
--------

The Ceph project does not currently publish release RPMs for openSUSE, but
a stable version of Ceph is included in the default update repository, so
installing it is just a matter of::

   sudo zypper install ceph
   sudo zypper install ceph-deploy

If the distro version is out of date, open a bug at
https://bugzilla.opensuse.org/index.cgi and possibly try your luck with one
of the following repositories:

#. Hammer::

      https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ahammer&package=ceph

#. Jewel::

      https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ajewel&package=ceph


Ceph Node Setup
===============

The admin node must have password-less SSH access to Ceph nodes.
When ``ceph-deploy`` logs in to a Ceph node as a user, that user must
have password-less ``sudo`` privileges.


Install NTP
-----------

We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes)
to prevent issues arising from clock drift. See `Clock`_ for details.

On CentOS / RHEL, execute::

   sudo yum install ntp ntpdate ntp-doc

On Debian / Ubuntu, execute::

   sudo apt install ntpsec

or::

   sudo apt install chrony

Ensure that you enable the NTP service and that each Ceph Node uses the
same NTP time server. See `NTP`_ for details.
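
As a minimal sketch of that last step (assuming ``systemd``; the exact
service name depends on which package you installed), you could enable the
time daemon at boot and then verify that it is synchronizing::

   sudo systemctl enable --now ntpd      # CentOS / RHEL with the ntp package
   sudo systemctl enable --now chrony    # Debian / Ubuntu with the chrony package
   chronyc sources                       # or "ntpq -p" when using ntpd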


Install SSH Server
------------------

For **ALL** Ceph Nodes, perform the following steps:

#. Install an SSH server (if necessary) on each Ceph Node::

      sudo apt install openssh-server

   or::

      sudo yum install openssh-server

#. Ensure the SSH server is running on **ALL** Ceph Nodes; a quick check
   is shown below.
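
A minimal sketch of that check (assuming ``systemd``; note that the unit
is named ``ssh`` on Debian/Ubuntu and ``sshd`` on CentOS/RHEL)::

   sudo systemctl status ssh     # Debian / Ubuntu
   sudo systemctl status sshd    # CentOS / RHEL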


Create a Ceph Deploy User
-------------------------

The ``ceph-deploy`` utility must log in to a Ceph node as a user that has
password-less ``sudo`` privileges, because it needs to install software and
configuration files without prompting for passwords.

Recent versions of ``ceph-deploy`` support a ``--username`` option so you
can specify any user that has password-less ``sudo`` (including ``root``,
although this is **NOT** recommended). To use ``ceph-deploy --username
{username}``, the user you specify must have password-less SSH access to
the Ceph node, as ``ceph-deploy`` will not prompt you for a password.

We recommend creating a specific user for ``ceph-deploy`` on **ALL** Ceph
nodes in the cluster. Please do **NOT** use "ceph" as the user name. A
uniform user name across the cluster may improve ease of use (it is not
required), but you should avoid obvious user names, because attackers
typically try them in brute-force attacks (e.g., ``root``, ``admin``,
``{productname}``). The following procedure, substituting ``{username}``
for the user name you define, describes how to create a user with
password-less ``sudo``.

.. note:: Starting with the :ref:`Infernalis release <infernalis-release-notes>`,
   the "ceph" user name is reserved for the Ceph daemons. If the "ceph" user
   already exists on the Ceph nodes, remove that user before attempting an
   upgrade.

#. Create a new user on each Ceph Node::

      ssh user@ceph-server
      sudo useradd -d /home/{username} -m {username}
      sudo passwd {username}

#. For the new user you added to each Ceph node, ensure that the user has
   ``sudo`` privileges::

      echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
      sudo chmod 0440 /etc/sudoers.d/{username}
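
To verify the result, log in to a node as the new user and confirm that
``sudo`` works without prompting for a password::

   su - {username}
   sudo whoami        # should print "root" with no password prompt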


Enable Password-less SSH
------------------------

Since ``ceph-deploy`` will not prompt for a password, you must generate
SSH keys on the admin node and distribute the public key to each Ceph
node. ``ceph-deploy`` will attempt to generate the SSH keys for initial
monitors.

#. Generate the SSH keys, but do not use ``sudo`` or the ``root`` user.
   Leave the passphrase empty::

      ssh-keygen

      Generating public/private rsa key pair.
      Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      Your identification has been saved in /ceph-admin/.ssh/id_rsa.
      Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

#. Copy the key to each Ceph Node, replacing ``{username}`` with the user
   name you created with `Create a Ceph Deploy User`_. ::

      ssh-copy-id {username}@node1
      ssh-copy-id {username}@node2
      ssh-copy-id {username}@node3

#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
   admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user
   you created without requiring you to specify ``--username {username}``
   each time you execute ``ceph-deploy``. This has the added benefit of
   streamlining ``ssh`` and ``scp`` usage. Replace ``{username}`` with the
   user name you created::

      Host node1
         Hostname node1
         User {username}
      Host node2
         Hostname node2
         User {username}
      Host node3
         Hostname node3
         User {username}
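
With the keys copied and the ``~/.ssh/config`` entries in place, you can
verify both password-less SSH and password-less ``sudo`` in one step from
the admin node::

   ssh node1 sudo whoami    # should print "root" with no password prompts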


Enable Networking On Bootup
---------------------------

Ceph OSDs peer with each other and report to Ceph Monitors over the network.
If networking is ``off`` by default, the Ceph cluster cannot come online
during bootup until you enable networking.

The default configuration on some distributions (e.g., CentOS) has the
networking interface(s) off by default. Ensure that, during boot up, your
network interface(s) turn(s) on so that your Ceph daemons can communicate
over the network. For example, on Red Hat and CentOS, navigate to
``/etc/sysconfig/network-scripts`` and ensure that the ``ifcfg-{iface}``
file has ``ONBOOT`` set to ``yes``.
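
A minimal sketch of that check (assuming the interface is named ``eth0``;
substitute your actual interface name)::

   grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-eth0
   sudo sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0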


Ensure Connectivity
-------------------

Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
Address hostname resolution issues as necessary.

.. note:: Hostnames should resolve to a network IP address, not to the
   loopback IP address (e.g., hostnames should resolve to an IP address
   other than ``127.0.0.1``). If you use your admin node as a Ceph node,
   you should also ensure that it resolves to its hostname and IP address
   (i.e., not its loopback IP address).
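
A minimal sketch of these checks (assuming the short hostnames ``node1``,
``node2``, and ``node3`` from the examples above)::

   for host in node1 node2 node3; do ping -c 1 $host; done
   getent hosts $(hostname -s)    # should print a network IP, not 127.0.0.1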


Open Required Ports
-------------------

Ceph Monitors communicate using port ``6789`` by default. Ceph OSDs
communicate in a port range of ``6800:7300`` by default. See the `Network
Configuration Reference`_ for details. Ceph OSDs can use multiple network
connections to communicate with clients, monitors, other OSDs for
replication, and other OSDs for heartbeats.

On some distributions (e.g., RHEL), the default firewall configuration is
fairly strict. You may need to adjust your firewall settings to allow
inbound requests so that clients in your network can communicate with
daemons on your Ceph nodes.

For ``firewalld`` on RHEL 7, add the ``ceph-mon`` service for Ceph Monitor
nodes and the ``ceph`` service for Ceph OSDs and MDSs to the public zone,
and ensure that you make the settings permanent so that they are enabled
on reboot.

For example, on monitors::

   sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent

and on OSDs and MDSs::

   sudo firewall-cmd --zone=public --add-service=ceph --permanent

Once you have finished configuring ``firewalld`` with the ``--permanent``
flag, you can make the changes live immediately without rebooting::

   sudo firewall-cmd --reload
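
You can then confirm that the services were added to the zone with::

   sudo firewall-cmd --zone=public --list-services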

For ``iptables``, add port ``6789`` for Ceph Monitors and ports
``6800:7300`` for Ceph OSDs. For example::

   sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
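
Similarly, a rule covering the default OSD port range might look like this
(with ``{iface}``, ``{ip-address}``, and ``{netmask}`` as above)::

   sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6800:7300 -j ACCEPT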

Once you have finished configuring ``iptables``, ensure that you make the
changes persistent on each node so that they will be in effect when your
nodes reboot. For example::

   /sbin/service iptables save

TTY
---

On CentOS and RHEL, you may receive an error while trying to execute
``ceph-deploy`` commands. If ``requiretty`` is set by default on your Ceph
nodes, disable it by executing ``sudo visudo`` and locating the ``Defaults
requiretty`` setting. Change it to ``Defaults:{username} !requiretty`` or
comment it out to ensure that ``ceph-deploy`` can connect using the user
you created with `Create a Ceph Deploy User`_.

.. note:: If editing ``/etc/sudoers``, ensure that you use ``sudo visudo``
   rather than a text editor.


SELinux
-------

On CentOS and RHEL, SELinux is set to ``Enforcing`` by default. To
streamline your installation, we recommend setting SELinux to
``Permissive`` or disabling it entirely and ensuring that your
installation and cluster are working properly before hardening your
configuration. To set SELinux to ``Permissive``, execute the following::

   sudo setenforce 0

To configure SELinux persistently (recommended if SELinux is an issue),
modify the configuration file at ``/etc/selinux/config``.
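
As a sketch, the persistent change amounts to editing one line in that
file, for example::

   sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config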


Priorities/Preferences
----------------------

Ensure that your package manager has priority/preferences packages
installed and enabled. On CentOS, you may need to install EPEL. On RHEL,
you may need to enable optional repositories. ::

   sudo yum install yum-plugin-priorities

For example, on RHEL 7 server, execute the following to install
``yum-plugin-priorities`` and enable the ``rhel-7-server-optional-rpms``
repository::

   sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms


Summary
=======

This completes the Quick Start Preflight. Proceed to the `Storage Cluster
Quick Start`_.

.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _OS Recommendations: ../os-recommendations
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Clock: ../../rados/configuration/mon-config-ref#clock
.. _NTP: http://www.ntp.org/
.. _Infernalis release: ../../release-notes/#v9-1-0-infernalis-release-candidate
.. _EPEL wiki: https://fedoraproject.org/wiki/EPEL