=====================
 Preflight Checklist
=====================

The ``ceph-deploy`` tool operates out of a directory on an admin
:term:`node`. Any host with network connectivity, a modern python
environment, and ssh (such as Linux) should work.

In the descriptions below, :term:`Node` refers to a single machine.

.. include:: quick-common.rst


Ceph-deploy Setup
=================

Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
``ceph-deploy``.

Debian/Ubuntu
-------------

For Debian and Ubuntu distributions, perform the following steps:

#. Add the release key::

     wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#. Add the Ceph packages to your repository::

     echo deb https://download.ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

   The above URL contains the latest stable release of Ceph. If you
   would like to select a specific release, use the command below and
   replace ``{ceph-stable-release}`` with the name of a stable Ceph
   release (e.g., ``luminous``). For example::

     echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

#. Update your repository and install ``ceph-deploy``::

     sudo apt update
     sudo apt install ceph-deploy

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages by replacing ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/``.
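
The repository line in the steps above is just a string built from a release
name and your distribution codename. As a small sketch (``luminous`` and
``xenial`` below are hypothetical example values, not recommendations), it can
be generated like this:

```shell
# Build the APT source line for a given Ceph release and distro codename.
# "luminous" and "xenial" are placeholder example values.
ceph_apt_line() {
    echo "deb https://download.ceph.com/debian-$1/ $2 main"
}

ceph_apt_line luminous xenial
# → deb https://download.ceph.com/debian-luminous/ xenial main
```

In practice you would pipe the output into ``sudo tee
/etc/apt/sources.list.d/ceph.list`` as shown in the steps above.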


RHEL/CentOS
-----------

For CentOS 7, perform the following steps:

#. On Red Hat Enterprise Linux 7, register the target machine with
   ``subscription-manager``, verify your subscriptions, and enable the
   "Extras" repository for package dependencies. For example::

     sudo subscription-manager repos --enable=rhel-7-server-extras-rpms

#. Install and enable the Extra Packages for Enterprise Linux (EPEL)
   repository::

     sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

   Please see the `EPEL wiki`_ page for more information.

#. Add the Ceph repository to your yum configuration file at
   ``/etc/yum.repos.d/ceph.repo`` with the following command (as ``root``)::

     cat >/etc/yum.repos.d/ceph.repo
     [ceph-noarch]
     name=Ceph noarch packages
     baseurl=https://download.ceph.com/rpm/el7/noarch
     enabled=1
     gpgcheck=1
     type=rpm-md
     gpgkey=https://download.ceph.com/keys/release.asc

   and then press *Control-D* to finish. This will use the latest stable
   Ceph release. If you would like to install a different release, replace
   ``https://download.ceph.com/rpm/el7/noarch`` with
   ``https://download.ceph.com/rpm-{ceph-release}/el7/noarch``, where
   ``{ceph-release}`` is a release name like ``luminous``.

#. Update your repository and install ``ceph-deploy``::

     sudo yum update
     sudo yum install ceph-deploy

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages by replacing ``https://download.ceph.com/`` with
   ``http://eu.ceph.com/``.


openSUSE
--------

The Ceph project does not currently publish release RPMs for openSUSE, but
a stable version of Ceph is included in the default update repository, so
installing it is just a matter of::

  sudo zypper install ceph
  sudo zypper install ceph-deploy

If the distro version is out-of-date, open a bug at
https://bugzilla.opensuse.org/index.cgi and possibly try your luck with one of
the following repositories:

#. Hammer::

     https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ahammer&package=ceph

#. Jewel::

     https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ajewel&package=ceph


Ceph Node Setup
===============

The admin node must have password-less SSH access to Ceph nodes.
When ``ceph-deploy`` logs in to a Ceph node as a particular user, that
user must have passwordless ``sudo`` privileges.


Install NTP
-----------

We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to
prevent issues arising from clock drift. See `Clock`_ for details.

On CentOS / RHEL, execute::

  sudo yum install ntp ntpdate ntp-doc

On Debian / Ubuntu, execute::

  sudo apt install ntp

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
same NTP time server. See `NTP`_ for details.


Install SSH Server
------------------

For **ALL** Ceph Nodes perform the following steps:

#. Install an SSH server (if necessary) on each Ceph Node::

     sudo apt install openssh-server

   or::

     sudo yum install openssh-server

#. Ensure the SSH server is running on **ALL** Ceph Nodes.

Create a Ceph Deploy User
-------------------------

The ``ceph-deploy`` utility must log in to a Ceph node as a user
that has passwordless ``sudo`` privileges, because it needs to install
software and configuration files without prompting for passwords.

Recent versions of ``ceph-deploy`` support a ``--username`` option so you can
specify any user that has password-less ``sudo`` (including ``root``, although
this is **NOT** recommended). To use ``ceph-deploy --username {username}``, the
user you specify must have password-less SSH access to the Ceph node, as
``ceph-deploy`` will not prompt you for a password.

We recommend creating a specific user for ``ceph-deploy`` on **ALL** Ceph nodes
in the cluster. Please do **NOT** use "ceph" as the user name. A uniform user
name across the cluster may improve ease of use (not required), but you should
avoid obvious user names, because attackers typically target them with brute
force attacks (e.g., ``root``, ``admin``, ``{productname}``). The following
procedure, substituting ``{username}`` for the user name you define, describes
how to create a user with passwordless ``sudo``.

.. note:: Starting with the `Infernalis release`_ the "ceph" user name is reserved
   for the Ceph daemons. If the "ceph" user already exists on the Ceph nodes,
   removing the user must be done before attempting an upgrade.

#. Create a new user on each Ceph Node. ::

     ssh user@ceph-server
     sudo useradd -d /home/{username} -m {username}
     sudo passwd {username}

#. For the new user you added to each Ceph node, ensure that the user has
   ``sudo`` privileges. ::

     echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
     sudo chmod 0440 /etc/sudoers.d/{username}

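If you are setting up several nodes, the steps above can be scripted. The
sketch below only *prints* the commands for one node; the user name
``cephdeploy`` and the node names in the comments are placeholder examples,
not requirements:

```shell
# Emit the user-creation commands from the steps above for user "$1".
new_user_cmds() {
    printf 'sudo useradd -d /home/%s -m %s\n' "$1" "$1"
    printf 'echo "%s ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/%s\n' "$1" "$1"
    printf 'sudo chmod 0440 /etc/sudoers.d/%s\n' "$1"
}

# You might then run the commands on each node, e.g.:
#   for node in node1 node2 node3; do
#       ssh "$node" "$(new_user_cmds cephdeploy)"
#   done
# and set the password separately ("sudo passwd cephdeploy") on each node.
new_user_cmds cephdeploy
```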

Enable Password-less SSH
------------------------

Since ``ceph-deploy`` will not prompt for a password, you must generate
SSH keys on the admin node and distribute the public key to each Ceph
node. ``ceph-deploy`` will attempt to generate the SSH keys for initial
monitors.

#. Generate the SSH keys, but do not use ``sudo`` or the
   ``root`` user. Leave the passphrase empty::

     ssh-keygen

     Generating public/private key pair.
     Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
     Enter passphrase (empty for no passphrase):
     Enter same passphrase again:
     Your identification has been saved in /ceph-admin/.ssh/id_rsa.
     Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

#. Copy the key to each Ceph Node, replacing ``{username}`` with the user name
   you created with `Create a Ceph Deploy User`_. ::

     ssh-copy-id {username}@node1
     ssh-copy-id {username}@node2
     ssh-copy-id {username}@node3

#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
   admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
   created without requiring you to specify ``--username {username}`` each
   time you execute ``ceph-deploy``. This has the added benefit of streamlining
   ``ssh`` and ``scp`` usage. Replace ``{username}`` with the user name you
   created::

     Host node1
        Hostname node1
        User {username}
     Host node2
        Hostname node2
        User {username}
     Host node3
        Hostname node3
        User {username}

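Writing those stanzas by hand gets tedious as the cluster grows. A minimal
sketch that generates them for a list of nodes (the user name ``cephdeploy``
and the node names are placeholders; redirect the output into
``~/.ssh/config`` yourself):

```shell
# Print an ssh_config Host stanza for host "$1" and user "$2".
ssh_stanza() {
    printf 'Host %s\n   Hostname %s\n   User %s\n' "$1" "$1" "$2"
}

# e.g.: for n in node1 node2 node3; do ssh_stanza "$n" cephdeploy; done >> ~/.ssh/config
ssh_stanza node1 cephdeploy
```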

Enable Networking On Bootup
---------------------------

Ceph OSDs peer with each other and report to Ceph Monitors over the network.
If networking is ``off`` by default, the Ceph cluster cannot come online
during bootup until you enable networking.

The default configuration on some distributions (e.g., CentOS) has the
networking interface(s) off by default. Ensure that, during boot up, your
network interface(s) turn(s) on so that your Ceph daemons can communicate over
the network. For example, on Red Hat and CentOS, navigate to
``/etc/sysconfig/network-scripts`` and ensure that the ``ifcfg-{iface}`` file
has ``ONBOOT`` set to ``yes``.


Ensure Connectivity
-------------------

Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
Address hostname resolution issues as necessary.

.. note:: Hostnames should resolve to a network IP address, not to the
   loopback IP address (e.g., hostnames should resolve to an IP address other
   than ``127.0.0.1``). If you use your admin node as a Ceph node, you
   should also ensure that it resolves to its hostname and IP address
   (i.e., not its loopback IP address).

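A quick way to check the note above is to look up the address that the short
hostname resolves to and reject loopback. This is only a sketch; the helper
assumes IPv4 loopback addresses start with ``127.``:

```shell
# Classify a resolved address: loopback (or no resolution) is "bad".
check_addr() {
    case "$1" in
        127.*|"") echo "bad" ;;
        *)        echo "ok"  ;;
    esac
}

# On a real node you would feed it the resolved address of $(hostname -s):
#   check_addr "$(getent hosts "$(hostname -s)" | awk '{print $1}' | head -n 1)"
check_addr 127.0.0.1     # → bad
check_addr 192.168.0.10  # → ok
```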

Open Required Ports
-------------------

Ceph Monitors communicate using port ``6789`` by default. Ceph OSDs communicate
in a port range of ``6800:7300`` by default. See the `Network Configuration
Reference`_ for details. Ceph OSDs can use multiple network connections to
communicate with clients, monitors, other OSDs for replication, and other OSDs
for heartbeats.

On some distributions (e.g., RHEL), the default firewall configuration is fairly
strict. You may need to adjust your firewall settings to allow inbound requests
so that clients in your network can communicate with daemons on your Ceph nodes.

For ``firewalld`` on RHEL 7, add the ``ceph-mon`` service for Ceph Monitor
nodes and the ``ceph`` service for Ceph OSDs and MDSs to the public zone and
ensure that you make the settings permanent so that they are enabled on reboot.

For example, on monitors::

  sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent

and on OSDs and MDSs::

  sudo firewall-cmd --zone=public --add-service=ceph --permanent

Once you have finished configuring firewalld with the ``--permanent`` flag, you
can make the changes live immediately without rebooting::

  sudo firewall-cmd --reload

For ``iptables``, add port ``6789`` for Ceph Monitors and ports ``6800:7300``
for Ceph OSDs. For example::

  sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT

Once you have finished configuring ``iptables``, ensure that you make the
changes persistent on each node so that they will be in effect when your nodes
reboot. For example::

  /sbin/service iptables save

TTY
---

On CentOS and RHEL, you may receive an error while trying to execute
``ceph-deploy`` commands. If ``requiretty`` is set by default on your Ceph
nodes, disable it by executing ``sudo visudo`` and locating the ``Defaults
requiretty`` setting. Change it to ``Defaults:{username} !requiretty`` or
comment it out to ensure that ``ceph-deploy`` can connect using the user you
created with `Create a Ceph Deploy User`_.

.. note:: If editing ``/etc/sudoers``, ensure that you use
   ``sudo visudo`` rather than a text editor.


SELinux
-------

On CentOS and RHEL, SELinux is set to ``Enforcing`` by default. To streamline
your installation, we recommend setting SELinux to ``Permissive`` or disabling
it entirely and ensuring that your installation and cluster are working properly
before hardening your configuration. To set SELinux to ``Permissive``, execute
the following::

  sudo setenforce 0

To configure SELinux persistently (recommended if SELinux is an issue), modify
the configuration file at ``/etc/selinux/config``.

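For the persistent change, the relevant line in ``/etc/selinux/config`` is
``SELINUX=enforcing``. A hedged sketch of that edit as a stream transformation
(on a real node you would instead run the same ``sed`` expression against the
file itself with ``sudo sed -i``, assuming the stock config layout):

```shell
# Rewrite the SELINUX= line from enforcing to permissive.
selinux_permissive() {
    sed 's/^SELINUX=enforcing$/SELINUX=permissive/'
}

echo 'SELINUX=enforcing' | selinux_permissive
# → SELINUX=permissive
```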

Priorities/Preferences
----------------------

Ensure that your package manager has priority/preferences packages installed and
enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to
enable optional repositories. ::

  sudo yum install yum-plugin-priorities

For example, on RHEL 7 server, execute the following to install
``yum-plugin-priorities`` and enable the ``rhel-7-server-optional-rpms``
repository::

  sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms


Summary
=======

This completes the Quick Start Preflight. Proceed to the `Storage Cluster
Quick Start`_.

.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _OS Recommendations: ../os-recommendations
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Clock: ../../rados/configuration/mon-config-ref#clock
.. _NTP: http://www.ntp.org/
.. _Infernalis release: ../../release-notes/#v9-1-0-infernalis-release-candidate
.. _EPEL wiki: https://fedoraproject.org/wiki/EPEL