============================
Deploying a new Ceph cluster
============================

Cephadm creates a new Ceph cluster by "bootstrapping" on a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.

.. highlight:: console

Requirements
============

- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices

Any modern Linux distribution should be sufficient. Dependencies
are installed automatically by the bootstrap process below.

.. _get-cephadm:

Install cephadm
===============

The ``cephadm`` command can (1) bootstrap a new cluster, (2)
launch a containerized shell with a working Ceph CLI, and (3) aid in
debugging containerized Ceph daemons.

There are a few ways to install cephadm:

* Use ``curl`` to fetch the most recent version of the
  standalone script::

    # curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
    # chmod +x cephadm

  This script can be run directly from the current directory with::

    # ./cephadm <arguments...>

* Although the standalone script is sufficient to get a cluster started, it is
  convenient to have the ``cephadm`` command installed on the host. To install
  these packages for the current Octopus release::

    # ./cephadm add-repo --release octopus
    # ./cephadm install

  Confirm that ``cephadm`` is now in your PATH with::

    # which cephadm

* Some commercial Linux distributions (e.g., RHEL, SLE) may already
  include up-to-date Ceph packages. In that case, you can install
  cephadm directly. For example::

    # dnf install -y cephadm     # or
    # zypper install -y cephadm


Bootstrap a new cluster
=======================

You need to know which *IP address* to use for the cluster's first
monitor daemon. This is normally just the IP for the first host. If there
are multiple networks and interfaces, be sure to choose one that will
be accessible by any host accessing the Ceph cluster.

To bootstrap the cluster::

  # mkdir -p /etc/ceph
  # cephadm bootstrap --mon-ip *<mon-ip>*

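For example, assuming the first host's IP address is ``10.1.2.11`` (an address
used here purely as a placeholder; substitute the monitor IP for your own
network)::

  # cephadm bootstrap --mon-ip 10.1.2.11
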
This command will:

* Create a monitor and manager daemon for the new cluster on the local
  host.
* Generate a new SSH key for the Ceph cluster and add it to the root
  user's ``/root/.ssh/authorized_keys`` file.
* Write a minimal configuration file needed to communicate with the
  new cluster to ``/etc/ceph/ceph.conf``.
* Write a copy of the ``client.admin`` administrative (privileged!)
  secret key to ``/etc/ceph/ceph.client.admin.keyring``.
* Write a copy of the public key to
  ``/etc/ceph/ceph.pub``.

The default bootstrap behavior will work for the vast majority of
users. See below for a few options that may be useful for some users,
or run ``cephadm bootstrap -h`` to see all available options:

* Bootstrap writes the files needed to access the new cluster to
  ``/etc/ceph`` for convenience, so that any Ceph packages installed
  on the host itself (e.g., to access the command line interface) can
  easily find them.

  Daemon containers deployed with cephadm, however, do not need
  ``/etc/ceph`` at all. Use the ``--output-dir *<directory>*`` option
  to put them in a different directory (like ``.``), avoiding any
  potential conflicts with existing Ceph configuration (cephadm or
  otherwise) on the same host.

* You can pass any initial Ceph configuration options to the new
  cluster by putting them in a standard ini-style configuration file
  and using the ``--config *<config-file>*`` option.
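
  A minimal sketch of what this might look like follows; the file name and the
  option shown are only illustrative, and any valid Ceph options can go in the
  file::

    # cat <<EOF > initial-ceph.conf
    [global]
    osd_pool_default_size = 2
    EOF
    # cephadm bootstrap --config initial-ceph.conf --mon-ip *<mon-ip>*
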

Enable Ceph CLI
===============

Cephadm does not require any Ceph packages to be installed on the
host. However, we recommend enabling easy access to the ``ceph``
command. There are several ways to do this:

* The ``cephadm shell`` command launches a bash shell in a container
  with all of the Ceph packages installed. By default, if
  configuration and keyring files are found in ``/etc/ceph`` on the
  host, they are passed into the container environment so that the
  shell is fully functional. Note that when executed on a MON host,
  ``cephadm shell`` will infer the ``config`` from the MON container
  instead of using the default configuration. If ``--mount <path>``
  is given, then the host ``<path>`` (file or directory) will appear
  under ``/mnt`` inside the container (an example with ``--mount``
  follows this list)::

    # cephadm shell

* It may be helpful to create an alias::

    # alias ceph='cephadm shell -- ceph'

* You can install the ``ceph-common`` package, which contains all of the
  ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
  CephFS file systems), etc.::

    # cephadm add-repo --release octopus
    # cephadm install ceph-common

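As mentioned in the first bullet above, ``cephadm shell`` can also expose a
host path inside the container via ``--mount``; the directory below is only a
placeholder for whatever you want visible under ``/mnt``::

  # cephadm shell --mount /root/ceph-extra-config
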
Confirm that the ``ceph`` command is accessible with::

  # ceph -v

Confirm that the ``ceph`` command can connect to the cluster and report
its status with::

  # ceph status


Add hosts to the cluster
========================

To add each new host to the cluster, perform two steps:

#. Install the cluster's public SSH key in the new host's root user's
   ``authorized_keys`` file::

     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*

   For example::

     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3

#. Tell Ceph that the new node is part of the cluster::

     # ceph orch host add *newhost*

   For example::

     # ceph orch host add host2
     # ceph orch host add host3


.. _deploy_additional_monitors:

Deploy additional monitors (optional)
=====================================

A typical Ceph cluster has three or five monitor daemons spread
across different hosts. We recommend deploying five
monitors if there are five or more nodes in your cluster.

.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation

When Ceph knows what IP subnet the monitors should use it can automatically
deploy and scale monitors as the cluster grows (or contracts). By default,
Ceph assumes that other monitors should use the same subnet as the first
monitor's IP.

If your Ceph monitors (or the entire cluster) live on a single subnet,
then by default cephadm automatically adds up to 5 monitors as you add new
hosts to the cluster. No further steps are necessary.

* If there is a specific IP subnet that should be used by monitors, you
  can configure that in `CIDR`_ format (e.g., ``10.1.2.0/24``) with::

    # ceph config set mon public_network *<mon-cidr-network>*

  For example::

    # ceph config set mon public_network 10.1.2.0/24

  Cephadm only deploys new monitor daemons on hosts that have IPs in the
  configured subnet.

* If you want to adjust the default of 5 monitors::

    # ceph orch apply mon *<number-of-monitors>*

* To deploy monitors on a specific set of hosts::

    # ceph orch apply mon *<host1,host2,host3,...>*

  Be sure to include the first (bootstrap) host in this list.

* You can control which hosts the monitors run on by making use of
  host labels. To set the ``mon`` label to the appropriate
  hosts::

    # ceph orch host label add *<hostname>* mon

  To view the current hosts and labels::

    # ceph orch host ls

  For example::

    # ceph orch host label add host1 mon
    # ceph orch host label add host2 mon
    # ceph orch host label add host3 mon
    # ceph orch host ls
    HOST   ADDR   LABELS  STATUS
    host1         mon
    host2         mon
    host3         mon
    host4
    host5

  Tell cephadm to deploy monitors based on the label::

    # ceph orch apply mon label:mon

* You can explicitly specify the IP address or CIDR network for each monitor
  and control where it is placed. To disable automated monitor deployment::

    # ceph orch apply mon --unmanaged

  To deploy each additional monitor::

    # ceph orch daemon add mon *<host1:ip-or-network1> [<host1:ip-or-network-2>...]*

  For example, to deploy a second monitor on ``newhost1`` using an IP
  address ``10.1.2.123`` and a third monitor on ``newhost2`` in
  network ``10.1.2.0/24``::

    # ceph orch apply mon --unmanaged
    # ceph orch daemon add mon newhost1:10.1.2.123
    # ceph orch daemon add mon newhost2:10.1.2.0/24


Deploy OSDs
===========

An inventory of storage devices on all cluster hosts can be displayed with::

  # ceph orch device ls

A storage device is considered *available* if all of the following
conditions are met:

* The device must have no partitions.
* The device must not have any LVM state.
* The device must not be mounted.
* The device must not contain a file system.
* The device must not contain a Ceph BlueStore OSD.
* The device must be larger than 5 GB.

Ceph refuses to provision an OSD on a device that is not available.

There are a few ways to create new OSDs:

* Tell Ceph to consume any available and unused storage device::

    # ceph orch apply osd --all-available-devices

* Create an OSD from a specific device on a specific host::

    # ceph orch daemon add osd *<host>*:*<device-path>*

  For example::

    # ceph orch daemon add osd host1:/dev/sdb

* Use :ref:`drivegroups` to describe device(s) to consume
  based on their properties, such as device type (SSD or HDD), device
  model names, size, or the hosts on which the devices exist::

    # ceph orch apply osd -i spec.yml

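  A minimal sketch of what such a ``spec.yml`` might contain is shown below;
  the service id, host pattern, and device filters are only illustrative, and
  :ref:`drivegroups` documents the full syntax::

    # cat <<EOF > spec.yml
    service_type: osd
    service_id: example_drive_group
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0
    EOF
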

Deploy MDSs
===========

One or more MDS daemons are required to use the CephFS file system.
These are created automatically if the newer ``ceph fs volume``
interface is used to create a new file system. For more information,
see :ref:`fs-volumes-and-subvolumes`.

To deploy metadata servers::

  # ceph orch apply mds *<fs-name>* --placement="*<num-daemons>* [*<host1>* ...]"

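For example, to deploy three MDS daemons for a file system called *cephfs*
(the file system and host names here are only placeholders)::

  # ceph orch apply mds cephfs --placement="3 host1 host2 host3"
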
See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.

Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
particular *realm* and *zone*. (For more information about realms and
zones, see :ref:`multisite`.)

Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a ``ceph.conf`` or the command line. If
that configuration isn't already in place (usually in the
``client.rgw.<realmname>.<zonename>`` section), then the radosgw
daemons will start up with default settings (e.g., binding to port
80).

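If needed, such settings can be placed in the monitor configuration database
with ``ceph config set``; the realm, zone, option, and port below are only
illustrative::

  # ceph config set client.rgw.myorg.us-east-1 rgw_frontends 'beast port=8080'
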
If a realm has not been created yet, first create a realm::

  # radosgw-admin realm create --rgw-realm=<realm-name> --default

Next create a new zonegroup::

  # radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default

Next create a zone::

  # radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default

To deploy a set of radosgw daemons for a particular realm and zone::

  # ceph orch apply rgw *<realm-name>* *<zone-name>* --placement="*<num-daemons>* [*<host1>* ...]"

For example, to deploy 2 rgw daemons serving the *myorg* realm and the *us-east-1*
zone on *myhost1* and *myhost2*::

  # radosgw-admin realm create --rgw-realm=myorg --default
  # radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
  # radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=us-east-1 --master --default
  # ceph orch apply rgw myorg us-east-1 --placement="2 myhost1 myhost2"

See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.

Deploying NFS Ganesha
=====================

Cephadm deploys NFS Ganesha using a pre-defined RADOS *pool*
and optional *namespace*.

To deploy an NFS Ganesha gateway::

  # ceph orch apply nfs *<svc_id>* *<pool>* *<namespace>* --placement="*<num-daemons>* [*<host1>* ...]"

For example, to deploy NFS with a service id of *foo* that will use the
RADOS pool *nfs-ganesha* and namespace *nfs-ns*::

  # ceph orch apply nfs foo nfs-ganesha nfs-ns

See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.

Deploying custom containers
===========================

It is also possible to choose containers other than the default ones to
deploy Ceph. See :ref:`containers` for information about your options in
this regard.