=====================
 Preflight Checklist
=====================

The ``ceph-deploy`` tool operates out of a directory on an admin
:term:`node`. Any host with network connectivity, a modern Python
environment, and SSH (such as a Linux host) should work.

In the descriptions below, :term:`Node` refers to a single machine.

.. include:: quick-common.rst


Ceph-deploy Setup
=================

Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
``ceph-deploy``.

Debian/Ubuntu
-------------

For Debian and Ubuntu distributions, perform the following steps:

#. Add the release key::

     wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#. Add the Ceph packages to your repository. Use the command below and
   replace ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
   ``luminous``); a fully substituted example follows this list::

     echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

#. Update your repository and install ``ceph-deploy``::

     sudo apt update
     sudo apt install ceph-deploy
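
As a concrete illustration, choosing the ``luminous`` release turns the
repository line from step 2 into the following (the distribution codename
still comes from ``lsb_release -sc``)::

   echo deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list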

.. note:: You can also download your packages from the EU mirror
   eu.ceph.com by replacing ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/`` in the URLs above.


RHEL/CentOS
-----------

For CentOS 7 and RHEL 7, perform the following steps:

#. On Red Hat Enterprise Linux 7, register the target machine with
   ``subscription-manager``, verify your subscriptions, and enable the
   "Extras" repository for package dependencies (a registration sketch
   follows this list). For example::

     sudo subscription-manager repos --enable=rhel-7-server-extras-rpms

#. Install and enable the Extra Packages for Enterprise Linux (EPEL)
   repository::

     sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

   Please see the `EPEL wiki`_ page for more information.

#. Add the Ceph repository to your yum configuration file at
   ``/etc/yum.repos.d/ceph.repo`` with the following command. Replace
   ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
   ``luminous``). For example::

     cat << EOM | sudo tee /etc/yum.repos.d/ceph.repo
     [ceph-noarch]
     name=Ceph noarch packages
     baseurl=https://download.ceph.com/rpm-{ceph-stable-release}/el7/noarch
     enabled=1
     gpgcheck=1
     type=rpm-md
     gpgkey=https://download.ceph.com/keys/release.asc
     EOM

#. Update your repository and install ``ceph-deploy``::

     sudo yum update
     sudo yum install ceph-deploy
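
The registration step itself might look like the following minimal sketch,
assuming you have valid Red Hat subscription credentials and with
``{rh-username}`` as a placeholder for your portal login::

   # register the system (prompts for your password)
   sudo subscription-manager register --username {rh-username}
   # verify which subscriptions are attached
   sudo subscription-manager list --consumed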

.. note:: You can also download your packages from the EU mirror
   eu.ceph.com by replacing ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/`` in the URLs above.


openSUSE
--------

The Ceph project does not currently publish release RPMs for openSUSE, but
a stable version of Ceph is included in the default update repository, so
installing it is just a matter of::

   sudo zypper install ceph
   sudo zypper install ceph-deploy

If the distro version is out-of-date, open a bug at
https://bugzilla.opensuse.org/index.cgi and possibly try your luck with one of
the following repositories:

#. Hammer::

     https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ahammer&package=ceph

#. Jewel::

     https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ajewel&package=ceph


Ceph Node Setup
===============

The admin node must have password-less SSH access to Ceph nodes.
When ``ceph-deploy`` logs in to a Ceph node as a user, that particular
user must have passwordless ``sudo`` privileges.


Install NTP
-----------

We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to
prevent issues arising from clock drift. See `Clock`_ for details.

On CentOS / RHEL, execute::

   sudo yum install ntp ntpdate ntp-doc

On Debian / Ubuntu, execute::

   sudo apt install ntp

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
same NTP time server. See `NTP`_ for details.
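
On systemd-based distributions, enabling and starting the service might look
like this; note that the service name differs between the platforms::

   sudo systemctl enable ntpd   # CentOS / RHEL
   sudo systemctl start ntpd

   sudo systemctl enable ntp    # Debian / Ubuntu
   sudo systemctl start ntp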


Install SSH Server
------------------

For **ALL** Ceph Nodes perform the following steps:

#. Install an SSH server (if necessary) on each Ceph Node::

     sudo apt install openssh-server

   or::

     sudo yum install openssh-server

#. Ensure the SSH server is running on **ALL** Ceph Nodes.
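
A quick way to verify is to query the service on each node; the daemon is
named ``sshd`` on CentOS/RHEL and ``ssh`` on Debian/Ubuntu::

   sudo systemctl status sshd   # CentOS / RHEL
   sudo systemctl status ssh    # Debian / Ubuntu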


Create a Ceph Deploy User
-------------------------

The ``ceph-deploy`` utility must log in to a Ceph node as a user
that has passwordless ``sudo`` privileges, because it needs to install
software and configuration files without prompting for passwords.

Recent versions of ``ceph-deploy`` support a ``--username`` option so you can
specify any user that has password-less ``sudo`` (including ``root``, although
this is **NOT** recommended). To use ``ceph-deploy --username {username}``, the
user you specify must have password-less SSH access to the Ceph node, as
``ceph-deploy`` will not prompt you for a password.
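
For example, an invocation as that user might look like the following (the
``new`` subcommand is introduced in the `Storage Cluster Quick Start`_)::

   ceph-deploy --username {username} new node1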

We recommend creating a specific user for ``ceph-deploy`` on **ALL** Ceph nodes
in the cluster. Please do **NOT** use "ceph" as the user name. A uniform user
name across the cluster may improve ease of use (not required), but you should
avoid obvious user names, because hackers typically target them with brute-force
attacks (e.g., ``root``, ``admin``, ``{productname}``). The following procedure,
substituting ``{username}`` for the user name you define, describes how to
create a user with passwordless ``sudo``.

.. note:: Starting with the :ref:`Infernalis release <infernalis-release-notes>`,
   the "ceph" user name is reserved for the Ceph daemons. If the "ceph" user
   already exists on the Ceph nodes, removing the user must be done before
   attempting an upgrade.

#. Create a new user on each Ceph Node::

     ssh user@ceph-server
     sudo useradd -d /home/{username} -m {username}
     sudo passwd {username}

#. For the new user you added to each Ceph node, ensure that the user has
   ``sudo`` privileges::

     echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
     sudo chmod 0440 /etc/sudoers.d/{username}


Enable Password-less SSH
------------------------

Since ``ceph-deploy`` will not prompt for a password, you must generate
SSH keys on the admin node and distribute the public key to each Ceph
node. ``ceph-deploy`` will attempt to generate the SSH keys for initial
monitors.

#. Generate the SSH keys, but do not use ``sudo`` or the
   ``root`` user. Leave the passphrase empty::

     ssh-keygen

     Generating public/private key pair.
     Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
     Enter passphrase (empty for no passphrase):
     Enter same passphrase again:
     Your identification has been saved in /ceph-admin/.ssh/id_rsa.
     Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

#. Copy the key to each Ceph Node, replacing ``{username}`` with the user name
   you created with `Create a Ceph Deploy User`_::

     ssh-copy-id {username}@node1
     ssh-copy-id {username}@node2
     ssh-copy-id {username}@node3

#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
   admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
   created without requiring you to specify ``--username {username}`` each
   time you execute ``ceph-deploy``. This has the added benefit of streamlining
   ``ssh`` and ``scp`` usage. Replace ``{username}`` with the user name you
   created::

     Host node1
        Hostname node1
        User {username}
     Host node2
        Hostname node2
        User {username}
     Host node3
        Hostname node3
        User {username}
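
Because ``ssh`` refuses to use a configuration file that is writable by other
users, tighten the file's permissions if you see a "Bad owner or permissions"
error::

   chmod 600 ~/.ssh/config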


Enable Networking On Bootup
---------------------------

Ceph OSDs peer with each other and report to Ceph Monitors over the network.
If networking is ``off`` by default, the Ceph cluster cannot come online
during bootup until you enable networking.

The default configuration on some distributions (e.g., CentOS) has the
networking interface(s) off by default. Ensure that, during boot up, your
network interfaces come up so that your Ceph daemons can communicate over
the network. For example, on Red Hat and CentOS, navigate to
``/etc/sysconfig/network-scripts`` and ensure that the ``ifcfg-{iface}`` file
has ``ONBOOT`` set to ``yes``.
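
For example, you can check the current setting as follows (substituting your
interface name for ``{iface}``); the expected output is shown beneath the
command::

   grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-{iface}
   ONBOOT=yes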


Ensure Connectivity
-------------------

Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
Address hostname resolution issues as necessary.
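
For example, from the admin node, using the short hostnames from the diagram
above::

   ping -c 3 node1
   ping -c 3 node2
   ping -c 3 node3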

.. note:: Hostnames should resolve to a network IP address, not to the
   loopback IP address (e.g., hostnames should resolve to an IP address other
   than ``127.0.0.1``). If you use your admin node as a Ceph node, you
   should also ensure that it resolves to its hostname and IP address
   (i.e., not its loopback IP address).


Open Required Ports
-------------------

Ceph Monitors communicate using port ``6789`` by default. Ceph OSDs communicate
in a port range of ``6800:7300`` by default. See the `Network Configuration
Reference`_ for details. Ceph OSDs can use multiple network connections to
communicate with clients, monitors, other OSDs for replication, and other OSDs
for heartbeats.

On some distributions (e.g., RHEL), the default firewall configuration is fairly
strict. You may need to adjust your firewall settings to allow inbound requests
so that clients in your network can communicate with daemons on your Ceph nodes.

For ``firewalld`` on RHEL 7, add the ``ceph-mon`` service for Ceph Monitor
nodes and the ``ceph`` service for Ceph OSDs and MDSs to the public zone and
ensure that you make the settings permanent so that they are enabled on reboot.

For example, on monitors::

   sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent

and on OSDs and MDSs::

   sudo firewall-cmd --zone=public --add-service=ceph --permanent

Once you have finished configuring firewalld with the ``--permanent`` flag, you
can make the changes live immediately without rebooting::

   sudo firewall-cmd --reload

For ``iptables``, add port ``6789`` for Ceph Monitors and ports ``6800:7300``
for Ceph OSDs. For example, on monitors::

   sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
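
and, as a sketch for OSD and MDS nodes (``--dport`` accepts a ``first:last``
range)::

   sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6800:7300 -j ACCEPT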

Once you have finished configuring ``iptables``, ensure that you make the
changes persistent on each node so that they will be in effect when your nodes
reboot. For example::

   /sbin/service iptables save

TTY
---

On CentOS and RHEL, you may receive an error while trying to execute
``ceph-deploy`` commands. If ``requiretty`` is set by default on your Ceph
nodes, disable it by executing ``sudo visudo`` and locating the ``Defaults
requiretty`` setting. Change it to ``Defaults:{username} !requiretty`` or
comment it out to ensure that ``ceph-deploy`` can connect using the user you
created with `Create a Ceph Deploy User`_.

.. note:: If editing ``/etc/sudoers``, ensure that you use
   ``sudo visudo`` rather than a text editor.


SELinux
-------

On CentOS and RHEL, SELinux is set to ``Enforcing`` by default. To streamline
your installation, we recommend setting SELinux to ``Permissive`` or disabling
it entirely and ensuring that your installation and cluster are working
properly before hardening your configuration. To set SELinux to ``Permissive``,
execute the following::

   sudo setenforce 0

To configure SELinux persistently (recommended if SELinux is an issue), modify
the configuration file at ``/etc/selinux/config``.
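
As a sketch, switching the persistent mode to ``permissive`` can be done with
``sed`` (back up the file first)::

   sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config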


Priorities/Preferences
----------------------

Ensure that your package manager has priority/preferences packages installed
and enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to
enable optional repositories. ::

   sudo yum install yum-plugin-priorities

For example, on RHEL 7 server, execute the following to install
``yum-plugin-priorities`` and enable the ``rhel-7-server-optional-rpms``
repository::

   sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms


Summary
=======

This completes the Quick Start Preflight. Proceed to the `Storage Cluster
Quick Start`_.

.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _OS Recommendations: ../os-recommendations
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Clock: ../../rados/configuration/mon-config-ref#clock
.. _NTP: http://www.ntp.org/
.. _Infernalis release: ../../release-notes/#v9-1-0-infernalis-release-candidate
.. _EPEL wiki: https://fedoraproject.org/wiki/EPEL