============================
Deploying a new Ceph cluster
============================

Cephadm creates a new Ceph cluster by "bootstrapping" on a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.

.. highlight:: console

Requirements
============

- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices

Any modern Linux distribution should be sufficient. Dependencies
are installed automatically by the bootstrap process below.

.. _get-cephadm:

Install cephadm
===============

The ``cephadm`` command can (1) bootstrap a new cluster, (2)
launch a containerized shell with a working Ceph CLI, and (3) aid in
debugging containerized Ceph daemons.

There are a few ways to install cephadm:

* Use ``curl`` to fetch the most recent version of the
  standalone script::

    # curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
    # chmod +x cephadm

  This script can be run directly from the current directory with::

    # ./cephadm <arguments...>

* Although the standalone script is sufficient to get a cluster started, it is
  convenient to have the ``cephadm`` command installed on the host. To install
  the packages that provide the ``cephadm`` command for the current Octopus
  release::

    # ./cephadm add-repo --release octopus
    # ./cephadm install

  Confirm that ``cephadm`` is now in your PATH with::

    # which cephadm

* Some commercial Linux distributions (e.g., RHEL, SLE) may already
  include up-to-date Ceph packages. In that case, you can install
  cephadm directly. For example::

    # dnf install -y cephadm     # or
    # zypper install -y cephadm

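Whichever method you choose, you can confirm that the ``cephadm`` command
works by asking it for the version of Ceph it will deploy (use
``./cephadm version`` if you are running the standalone script from the
current directory). Note that this may need to download the default
container image the first time it is run::

  # cephadm version
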

Bootstrap a new cluster
=======================

You need to know which *IP address* to use for the cluster's first
monitor daemon. This is normally just the IP for the first host. If there
are multiple networks and interfaces, be sure to choose one that will
be accessible by any host accessing the Ceph cluster.

To bootstrap the cluster::

  # mkdir -p /etc/ceph
  # cephadm bootstrap --mon-ip *<mon-ip>*

This command will:

* Create a monitor and manager daemon for the new cluster on the local
  host.
* Generate a new SSH key for the Ceph cluster and add it to the root
  user's ``/root/.ssh/authorized_keys`` file.
* Write a minimal configuration file needed to communicate with the
  new cluster to ``/etc/ceph/ceph.conf``.
* Write a copy of the ``client.admin`` administrative (privileged!)
  secret key to ``/etc/ceph/ceph.client.admin.keyring``.
* Write a copy of the public key to ``/etc/ceph/ceph.pub``.

The default bootstrap behavior will work for the vast majority of
users. See below for a few options that may be useful for some users,
or run ``cephadm bootstrap -h`` to see all available options:

* Bootstrap writes the files needed to access the new cluster to
  ``/etc/ceph`` for convenience, so that any Ceph packages installed
  on the host itself (e.g., to access the command line interface) can
  easily find them.

  Daemon containers deployed with cephadm, however, do not need
  ``/etc/ceph`` at all. Use the ``--output-dir *<directory>*`` option
  to put these files in a different directory (like ``.``), avoiding
  any potential conflicts with existing Ceph configuration (cephadm or
  otherwise) on the same host (see the example after this list).

* You can pass any initial Ceph configuration options to the new
  cluster by putting them in a standard ini-style configuration file
  and using the ``--config *<config-file>*`` option; the example after
  this list shows it together with ``--output-dir``.

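For example, the following sketch bootstraps a cluster with a small,
hypothetical initial configuration file (the file name and the
``public_network`` value are purely illustrative) and writes the access
files to the current directory instead of ``/etc/ceph``::

  # cat > initial-ceph.conf <<EOF
  [global]
  public_network = 10.1.2.0/24
  EOF
  # cephadm bootstrap --mon-ip *<mon-ip>* --config initial-ceph.conf --output-dir .
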

Enable Ceph CLI
===============

Cephadm does not require any Ceph packages to be installed on the
host. However, we recommend enabling easy access to the ``ceph``
command. There are several ways to do this:

* The ``cephadm shell`` command launches a bash shell in a container
  with all of the Ceph packages installed. By default, if
  configuration and keyring files are found in ``/etc/ceph`` on the
  host, they are passed into the container environment so that the
  shell is fully functional. Note that when executed on a MON host,
  ``cephadm shell`` will infer the ``config`` from the MON container
  instead of using the default configuration. If ``--mount <path>``
  is given, then the host ``<path>`` (file or directory) will appear
  under ``/mnt`` inside the container (see the example after this
  list)::

    # cephadm shell

* It may be helpful to create an alias::

    # alias ceph='cephadm shell -- ceph'

* You can install the ``ceph-common`` package, which contains all of the
  ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
  CephFS file systems), etc.::

    # cephadm add-repo --release octopus
    # cephadm install ceph-common

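For example, to make a directory of files from the host visible under
``/mnt`` inside the containerized shell (the path here is arbitrary and
purely illustrative)::

  # cephadm shell --mount /root/sample-configs
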
Confirm that the ``ceph`` command is accessible with::

  # ceph -v

Confirm that the ``ceph`` command can connect to the cluster and report
its status with::

  # ceph status


Add hosts to the cluster
========================

To add each new host to the cluster, perform two steps:

#. Install the cluster's public SSH key in the new host's root user's
   ``authorized_keys`` file::

     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*

   For example::

     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3

#. Tell Ceph that the new node is part of the cluster::

     # ceph orch host add *newhost*

   For example::

     # ceph orch host add host2
     # ceph orch host add host3

.. _deploy_additional_monitors:

Deploy additional monitors (optional)
=====================================

A typical Ceph cluster has three or five monitor daemons spread
across different hosts. We recommend deploying five
monitors if there are five or more nodes in your cluster.

.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation

When Ceph knows what IP subnet the monitors should use, it can
automatically deploy and scale monitors as the cluster grows (or
contracts). By default, Ceph assumes that other monitors should use the
same subnet as the first monitor's IP.

If your Ceph monitors (or the entire cluster) live on a single subnet,
then by default cephadm automatically adds up to 5 monitors as you add new
hosts to the cluster. No further steps are necessary.

* If there is a specific IP subnet that should be used by monitors, you
  can configure that in `CIDR`_ format (e.g., ``10.1.2.0/24``) with::

    # ceph config set mon public_network *<mon-cidr-network>*

  For example::

    # ceph config set mon public_network 10.1.2.0/24

  Cephadm deploys new monitor daemons only on hosts that have IP
  addresses in the configured subnet.

* If you want to adjust the default of 5 monitors::

    # ceph orch apply mon *<number-of-monitors>*

* To deploy monitors on a specific set of hosts::

    # ceph orch apply mon *<host1,host2,host3,...>*

  Be sure to include the first (bootstrap) host in this list.

* You can control which hosts the monitors run on by using host
  labels. To add the ``mon`` label to the appropriate hosts::

    # ceph orch host label add *<hostname>* mon

  To view the current hosts and labels::

    # ceph orch host ls

  For example::

    # ceph orch host label add host1 mon
    # ceph orch host label add host2 mon
    # ceph orch host label add host3 mon
    # ceph orch host ls
    HOST   ADDR   LABELS  STATUS
    host1         mon
    host2         mon
    host3         mon
    host4
    host5

  Tell cephadm to deploy monitors based on the label::

    # ceph orch apply mon label:mon

* You can explicitly specify the IP address or CIDR network for each monitor
  and control where it is placed. To disable automated monitor deployment::

    # ceph orch apply mon --unmanaged

  To deploy each additional monitor::

    # ceph orch daemon add mon *<host1:ip-or-network1> [<host1:ip-or-network-2>...]*

  For example, to deploy a second monitor on ``newhost1`` using an IP
  address ``10.1.2.123`` and a third monitor on ``newhost2`` in
  network ``10.1.2.0/24``::

    # ceph orch apply mon --unmanaged
    # ceph orch daemon add mon newhost1:10.1.2.123
    # ceph orch daemon add mon newhost2:10.1.2.0/24


Deploy OSDs
===========

An inventory of storage devices on all cluster hosts can be displayed with::

  # ceph orch device ls

A storage device is considered *available* if all of the following
conditions are met:

* The device must have no partitions.
* The device must not have any LVM state.
* The device must not be mounted.
* The device must not contain a file system.
* The device must not contain a Ceph BlueStore OSD.
* The device must be larger than 5 GB.

Ceph refuses to provision an OSD on a device that is not available.
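
If a device you expect to use is not listed as available, the raw
inventory data may explain why. Assuming your release supports the
orchestrator CLI's ``--format`` option, the JSON output includes the
reasons a device was rejected::

  # ceph orch device ls --format json
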

There are a few ways to create new OSDs:

* Tell Ceph to consume any available and unused storage device::

    # ceph orch apply osd --all-available-devices

* Create an OSD from a specific device on a specific host::

    # ceph orch daemon add osd *<host>*:*<device-path>*

  For example::

    # ceph orch daemon add osd host1:/dev/sdb

* Use :ref:`drivegroups` to describe device(s) to consume
  based on their properties, such as device type (SSD or HDD), device
  model names, size, or the hosts on which the devices exist (a sketch
  of such a ``spec.yml`` follows this list)::

    # ceph orch apply osd -i spec.yml

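A minimal sketch of such a ``spec.yml`` is shown below; the service id,
placement, and device filter are illustrative only, and :ref:`drivegroups`
describes the full set of available filters::

  service_type: osd
  service_id: example_osd_spec
  placement:
    host_pattern: '*'
  data_devices:
    rotational: 1
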

Deploy MDSs
===========

One or more MDS daemons are required to use the CephFS file system.
These are created automatically if the newer ``ceph fs volume``
interface is used to create a new file system. For more information,
see :ref:`fs-volumes-and-subvolumes`.

To deploy metadata servers::

  # ceph orch apply mds *<fs-name>* --placement="*<num-daemons>* [*<host1>* ...]"

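For example, to run three MDS daemons for a file system named ``cephfs``
(an illustrative name) on the hosts added earlier::

  # ceph orch apply mds cephfs --placement="3 host1 host2 host3"
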
See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.

Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
particular *realm* and *zone*. (For more information about realms and
zones, see :ref:`multisite`.)

Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a ``ceph.conf`` file or the command
line. If that configuration isn't already in place (usually in the
``client.rgw.<realmname>.<zonename>`` section), then the radosgw
daemons will start up with default settings (e.g., binding to port
80).

If a realm has not been created yet, first create a realm::

  # radosgw-admin realm create --rgw-realm=<realm-name> --default

Next, create a new zonegroup::

  # radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default

Next, create a zone::

  # radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default

To deploy a set of radosgw daemons for a particular realm and zone::

  # ceph orch apply rgw *<realm-name>* *<zone-name>* --placement="*<num-daemons>* [*<host1>* ...]"

For example, to deploy 2 rgw daemons serving the *myorg* realm and the *us-east-1*
zone on *myhost1* and *myhost2*::

  # radosgw-admin realm create --rgw-realm=myorg --default
  # radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
  # radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=us-east-1 --master --default
  # ceph orch apply rgw myorg us-east-1 --placement="2 myhost1 myhost2"

See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.

Deploying NFS ganesha
=====================

Cephadm deploys NFS Ganesha using a pre-defined RADOS *pool* and an
optional *namespace*.

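Cephadm expects the pool to exist before the NFS service is deployed. If
it does not yet exist, one way to create it is shown below (the pool name
matches the example that follows; the placement-group count of 64 is only
illustrative)::

  # ceph osd pool create nfs-ganesha 64
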
To deploy an NFS Ganesha gateway::

  # ceph orch apply nfs *<svc_id>* *<pool>* *<namespace>* --placement="*<num-daemons>* [*<host1>* ...]"

For example, to deploy NFS with a service id of *foo* that will use the
RADOS pool *nfs-ganesha* and the namespace *nfs-ns*::

  # ceph orch apply nfs foo nfs-ganesha nfs-ns

See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.

Deploying custom containers
===========================

It is also possible to deploy Ceph using container images other than the
default ones. See :ref:`containers` for information about your options in
this regard.