=====================
 Preflight Checklist
=====================

.. versionadded:: 0.60

Thank you for trying Ceph! We recommend setting up a ``ceph-deploy`` admin
:term:`node` and a 3-node :term:`Ceph Storage Cluster` to explore the basics of
Ceph. This **Preflight Checklist** will help you prepare a ``ceph-deploy``
admin node and three Ceph Nodes (or virtual machines) that will host your Ceph
Storage Cluster. Before proceeding any further, see `OS Recommendations`_ to
verify that you have a supported distribution and version of Linux. Using a
single Linux distribution and version across the cluster will make it easier
to troubleshoot issues that arise in production.

In the descriptions below, :term:`Node` refers to a single machine.

.. include:: quick-common.rst


Ceph Deploy Setup
=================

Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
``ceph-deploy``.

Debian/Ubuntu
-------------

For Debian and Ubuntu distributions, perform the following steps:

#. Add the release key::

      wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#. Add the Ceph packages to your repository. Replace ``{ceph-stable-release}``
   with a stable Ceph release (e.g., ``hammer``, ``jewel``, etc.).
   For example::

      echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

#. Update your repository and install ``ceph-deploy``::

      sudo apt-get update && sudo apt-get install ceph-deploy

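You can then confirm the installation (a quick check; recent ``ceph-deploy``
releases support a ``--version`` flag)::

    ceph-deploy --version
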
.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages. Simply replace ``http://ceph.com/`` with ``http://eu.ceph.com/``.


RHEL/CentOS
-----------

For CentOS 7, perform the following steps:

#. On Red Hat Enterprise Linux 7, register the target machine with
   ``subscription-manager``, verify your subscriptions, and enable the
   "Extras" repository for package dependencies. For example::

      sudo subscription-manager repos --enable=rhel-7-server-extras-rpms

#. Install and enable the Extra Packages for Enterprise Linux (EPEL)
   repository::

      sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

   Please see the `EPEL wiki`_ page for more information.

#. Add the Ceph packages to your repository. Open a text editor and create a
   Yellowdog Updater, Modified (YUM) entry. Use the file path
   ``/etc/yum.repos.d/ceph.repo``. For example::

      sudo vim /etc/yum.repos.d/ceph.repo

   Paste the following example code. Replace ``{ceph-release}`` with
   the recent major release of Ceph (e.g., ``jewel``). Replace ``{distro}``
   with your Linux distribution (e.g., ``el7`` for CentOS 7). Finally, save
   the contents to the ``/etc/yum.repos.d/ceph.repo`` file::

      [ceph-noarch]
      name=Ceph noarch packages
      baseurl=https://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
      enabled=1
      gpgcheck=1
      type=rpm-md
      gpgkey=https://download.ceph.com/keys/release.asc


#. Update your repository and install ``ceph-deploy``::

      sudo yum update && sudo yum install ceph-deploy


.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages. Simply replace ``http://ceph.com/`` with ``http://eu.ceph.com/``.


openSUSE
--------

The Ceph project does not currently publish release RPMs for openSUSE, but
a stable version of Ceph is included in the default update repository, so
installing it is just a matter of::

    sudo zypper install ceph
    sudo zypper install ceph-deploy

If the distro version is out-of-date, open a bug at
https://bugzilla.opensuse.org/index.cgi and possibly try your luck with one of
the following repositories:

#. Hammer::

      https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ahammer&package=ceph

#. Jewel::

      https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ajewel&package=ceph


Ceph Node Setup
===============

The admin node must have password-less SSH access to Ceph nodes.
When ``ceph-deploy`` logs in to a Ceph node as a particular user, that user
must have passwordless ``sudo`` privileges.


Install NTP
-----------

We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to
prevent issues arising from clock drift. See `Clock`_ for details.

On CentOS / RHEL, execute::

    sudo yum install ntp ntpdate ntp-doc

On Debian / Ubuntu, execute::

    sudo apt-get install ntp

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
same NTP time server. See `NTP`_ for details.
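
For example, on systemd-based systems (a minimal sketch; the service name
varies by distribution)::

    sudo systemctl enable ntpd && sudo systemctl start ntpd   # CentOS / RHEL
    sudo systemctl enable ntp && sudo systemctl start ntp     # Debian / Ubuntu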


Install SSH Server
------------------

For **ALL** Ceph Nodes perform the following steps:

#. Install an SSH server (if necessary) on each Ceph Node::

      sudo apt-get install openssh-server

   or::

      sudo yum install openssh-server


#. Ensure the SSH server is running on **ALL** Ceph Nodes.

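A quick way to verify this (a sketch, assuming a systemd-based distribution;
the service is named ``sshd`` on CentOS/RHEL and ``ssh`` on Debian/Ubuntu)::

    sudo systemctl status sshd    # CentOS / RHEL
    sudo systemctl status ssh     # Debian / Ubuntu
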

Create a Ceph Deploy User
-------------------------

The ``ceph-deploy`` utility must log in to a Ceph node as a user
that has passwordless ``sudo`` privileges, because it needs to install
software and configuration files without prompting for passwords.

Recent versions of ``ceph-deploy`` support a ``--username`` option so you can
specify any user that has password-less ``sudo`` (including ``root``, although
this is **NOT** recommended). To use ``ceph-deploy --username {username}``, the
user you specify must have password-less SSH access to the Ceph node, as
``ceph-deploy`` will not prompt you for a password.
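
For example (a sketch; ``new`` is the subcommand that begins a cluster, and
``node1`` is a node name used elsewhere in this guide)::

    ceph-deploy --username {username} new node1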

We recommend creating a specific user for ``ceph-deploy`` on **ALL** Ceph nodes
in the cluster. Please do **NOT** use "ceph" as the user name. A uniform user
name across the cluster may improve ease of use (not required), but you should
avoid obvious user names, because attackers typically target them in brute-force
attempts (e.g., ``root``, ``admin``, ``{productname}``). The following procedure,
substituting ``{username}`` for the user name you define, describes how to
create a user with passwordless ``sudo``.

.. note:: Starting with the `Infernalis release`_ the "ceph" user name is reserved
   for the Ceph daemons. If the "ceph" user already exists on the Ceph nodes,
   removing the user must be done before attempting an upgrade.

#. Create a new user on each Ceph Node. ::

      ssh user@ceph-server
      sudo useradd -d /home/{username} -m {username}
      sudo passwd {username}

#. For the new user you added to each Ceph node, ensure that the user has
   ``sudo`` privileges. ::

      echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
      sudo chmod 0440 /etc/sudoers.d/{username}

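Logged in on a node as the new user, you can confirm that ``sudo`` works
without a password prompt (a quick check that should print ``root``)::

    sudo whoami
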
Enable Password-less SSH
------------------------

Since ``ceph-deploy`` will not prompt for a password, you must generate
SSH keys on the admin node and distribute the public key to each Ceph
node. ``ceph-deploy`` will attempt to generate the SSH keys for initial
monitors.

#. Generate the SSH keys, but do not use ``sudo`` or the
   ``root`` user. Leave the passphrase empty::

      ssh-keygen

      Generating public/private key pair.
      Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      Your identification has been saved in /ceph-admin/.ssh/id_rsa.
      Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

#. Copy the key to each Ceph Node, replacing ``{username}`` with the user name
   you created with `Create a Ceph Deploy User`_. ::

      ssh-copy-id {username}@node1
      ssh-copy-id {username}@node2
      ssh-copy-id {username}@node3

#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
   admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
   created without requiring you to specify ``--username {username}`` each
   time you execute ``ceph-deploy``. This has the added benefit of streamlining
   ``ssh`` and ``scp`` usage. Replace ``{username}`` with the user name you
   created::

      Host node1
         Hostname node1
         User {username}
      Host node2
         Hostname node2
         User {username}
      Host node3
         Hostname node3
         User {username}

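Once the keys are copied and the ``~/.ssh/config`` entries are in place, a
login should succeed with no password prompt (a quick check)::

    ssh node1 hostname
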
Enable Networking On Bootup
---------------------------

Ceph OSDs peer with each other and report to Ceph Monitors over the network.
If networking is ``off`` by default, the Ceph cluster cannot come online
during bootup until you enable networking.

The default configuration on some distributions (e.g., CentOS) has the
networking interface(s) off by default. Ensure that, during boot up, your
network interface(s) turn(s) on so that your Ceph daemons can communicate over
the network. For example, on Red Hat and CentOS, navigate to
``/etc/sysconfig/network-scripts`` and ensure that the ``ifcfg-{iface}`` file
has ``ONBOOT`` set to ``yes``.

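A quick way to check (a sketch; replace ``{iface}`` with your interface name,
e.g. ``eth0``)::

    grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-{iface}
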

Ensure Connectivity
-------------------

Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
Address hostname resolution issues as necessary.

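For example, from the admin node (a sketch, using the node names from this
guide)::

    ping -c 3 node2
    ping -c 3 node3
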
.. note:: Hostnames should resolve to a network IP address, not to the
   loopback IP address (e.g., hostnames should resolve to an IP address other
   than ``127.0.0.1``). If you use your admin node as a Ceph node, you
   should also ensure that it resolves to its hostname and IP address
   (i.e., not its loopback IP address).

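To see how a hostname actually resolves (a quick check; ``getent`` consults
the same resolver order as most applications)::

    getent hosts $(hostname -s)
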
Open Required Ports
-------------------

Ceph Monitors communicate using port ``6789`` by default. Ceph OSDs communicate
in a port range of ``6800:7300`` by default. See the `Network Configuration
Reference`_ for details. Ceph OSDs can use multiple network connections to
communicate with clients, monitors, other OSDs for replication, and other OSDs
for heartbeats.

On some distributions (e.g., RHEL), the default firewall configuration is
fairly strict. You may need to adjust your firewall settings to allow inbound
requests so that clients in your network can communicate with daemons on your
Ceph nodes.

For ``firewalld`` on RHEL 7, add the ``ceph-mon`` service for Ceph Monitor
nodes and the ``ceph`` service for Ceph OSDs and MDSs to the public zone and
ensure that you make the settings permanent so that they are enabled on reboot.

For example, on monitors::

    sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent

and on OSDs and MDSs::

    sudo firewall-cmd --zone=public --add-service=ceph --permanent

Once you have finished configuring firewalld with the ``--permanent`` flag,
you can make the changes live immediately without rebooting::

    sudo firewall-cmd --reload

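You can then confirm which services are open in the zone (a quick check)::

    sudo firewall-cmd --zone=public --list-services
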
For ``iptables``, add port ``6789`` for Ceph Monitors and ports ``6800:7300``
for Ceph OSDs. For example::

    sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT

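and similarly for the OSD/MDS port range (a sketch; the ``tcp`` match accepts
a ``first:last`` range in ``--dport``)::

    sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6800:7300 -j ACCEPT
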
Once you have finished configuring ``iptables``, ensure that you make the
changes persistent on each node so that they will be in effect when your nodes
reboot. For example::

    /sbin/service iptables save

TTY
---

On CentOS and RHEL, you may receive an error while trying to execute
``ceph-deploy`` commands. If ``requiretty`` is set by default on your Ceph
nodes, disable it by executing ``sudo visudo`` and locating the ``Defaults
requiretty`` setting. Change it to ``Defaults:ceph !requiretty`` or comment it
out to ensure that ``ceph-deploy`` can connect using the user you created with
`Create a Ceph Deploy User`_.

.. note:: If editing ``/etc/sudoers``, ensure that you use
   ``sudo visudo`` rather than a text editor.

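The relevant lines in ``/etc/sudoers`` would then look something like this (a
sketch; substitute the user you created for ``{username}``)::

    # Defaults requiretty          (commented out, or scoped per user below)
    Defaults:{username} !requiretty
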

SELinux
-------

On CentOS and RHEL, SELinux is set to ``Enforcing`` by default. To streamline
your installation, we recommend setting SELinux to ``Permissive`` or disabling
it entirely and ensuring that your installation and cluster are working
properly before hardening your configuration. To set SELinux to
``Permissive``, execute the following::

    sudo setenforce 0

To configure SELinux persistently (recommended if SELinux is an issue), modify
the configuration file at ``/etc/selinux/config``.

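For example, the persistent setting in ``/etc/selinux/config`` would read::

    SELINUX=permissive
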

Priorities/Preferences
----------------------

Ensure that your package manager has priority/preferences packages installed
and enabled. On CentOS, you may need to install EPEL. On RHEL, you may need
to enable optional repositories. ::

    sudo yum install yum-plugin-priorities

For example, on RHEL 7 server, execute the following to install
``yum-plugin-priorities`` and enable the ``rhel-7-server-optional-rpms``
repository::

    sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms

Summary
=======

This completes the Quick Start Preflight. Proceed to the `Storage Cluster
Quick Start`_.

.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _OS Recommendations: ../os-recommendations
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Clock: ../../rados/configuration/mon-config-ref#clock
.. _NTP: http://www.ntp.org/
.. _Infernalis release: ../../release-notes/#v9-1-0-infernalis-release-candidate
.. _EPEL wiki: https://fedoraproject.org/wiki/EPEL