============================
Deploying a new Ceph cluster
============================

Cephadm creates a new Ceph cluster by "bootstrapping" on a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.

.. highlight:: console

Requirements
============

- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices

Any modern Linux distribution should be sufficient. Dependencies
are installed automatically by the bootstrap process below.

.. _get-cephadm:

Install cephadm
===============

The ``cephadm`` command can (1) bootstrap a new cluster, (2)
launch a containerized shell with a working Ceph CLI, and (3) aid in
debugging containerized Ceph daemons.

There are a few ways to install cephadm:

* Use ``curl`` to fetch the most recent version of the
  standalone script::

    # curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
    # chmod +x cephadm

  This script can be run directly from the current directory with::

    # ./cephadm <arguments...>

* Although the standalone script is sufficient to get a cluster started, it is
  convenient to have the ``cephadm`` command installed on the host. To install
  the packages that provide the ``cephadm`` command for the current Octopus
  release::

    # ./cephadm add-repo --release octopus
    # ./cephadm install

  Confirm that ``cephadm`` is now in your PATH with::

    # which cephadm

* Some commercial Linux distributions (e.g., RHEL, SLE) may already
  include up-to-date Ceph packages. In that case, you can install
  cephadm directly. For example::

    # dnf install -y cephadm     # or
    # zypper install -y cephadm


Bootstrap a new cluster
=======================

You need to know which *IP address* to use for the cluster's first
monitor daemon. This is normally just the IP for the first host. If there
are multiple networks and interfaces, be sure to choose one that will
be accessible by any host accessing the Ceph cluster.

To bootstrap the cluster::

  # mkdir -p /etc/ceph
  # cephadm bootstrap --mon-ip *<mon-ip>*

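For example, assuming the first host's IP address is ``10.1.2.10`` (an
address chosen purely for illustration)::

  # cephadm bootstrap --mon-ip 10.1.2.10
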
This command will:

* Create a monitor and manager daemon for the new cluster on the local
  host.
* Generate a new SSH key for the Ceph cluster and add it to the root
  user's ``/root/.ssh/authorized_keys`` file.
* Write a minimal configuration file needed to communicate with the
  new cluster to ``/etc/ceph/ceph.conf``.
* Write a copy of the ``client.admin`` administrative (privileged!)
  secret key to ``/etc/ceph/ceph.client.admin.keyring``.
* Write a copy of the public key to ``/etc/ceph/ceph.pub``.

The default bootstrap behavior will work for the vast majority of
users. See below for a few options that may be useful for some users,
or run ``cephadm bootstrap -h`` to see all available options:

* Bootstrap writes the files needed to access the new cluster to
  ``/etc/ceph`` for convenience, so that any Ceph packages installed
  on the host itself (e.g., to access the command line interface) can
  easily find them.

  Daemon containers deployed with cephadm, however, do not need
  ``/etc/ceph`` at all. Use the ``--output-dir *<directory>*`` option
  to put them in a different directory (like ``.``), avoiding any
  potential conflicts with existing Ceph configuration (cephadm or
  otherwise) on the same host.

* You can pass any initial Ceph configuration options to the new
  cluster by putting them in a standard ini-style configuration file
  and using the ``--config *<config-file>*`` option.

108
109Enable Ceph CLI
110===============
111
112Cephadm does not require any Ceph packages to be installed on the
113host. However, we recommend enabling easy access to the the ``ceph``
114command. There are several ways to do this:
115
116* The ``cephadm shell`` command launches a bash shell in a container
117 with all of the Ceph packages installed. By default, if
118 configuration and keyring files are found in ``/etc/ceph`` on the
119 host, they are passed into the container environment so that the
120 shell is fully functional::
121
122 # cephadm shell
123
124* It may be helpful to create an alias::
125
801d1391 126 # alias ceph='cephadm shell -- ceph'
9f95a23c
TL
127
* You can install the ``ceph-common`` package, which contains all of the
  ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
  CephFS file systems), etc.::

    # cephadm add-repo --release octopus
    # cephadm install ceph-common

Confirm that the ``ceph`` command is accessible with::

  # ceph -v

Confirm that the ``ceph`` command can connect to the cluster and report
its status with::

  # ceph status

Add hosts to the cluster
========================

To add each new host to the cluster, perform two steps:

#. Install the cluster's public SSH key in the new host's root user's
   ``authorized_keys`` file::

     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*

   For example::

     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
     # ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3

#. Tell Ceph that the new node is part of the cluster::

     # ceph orch host add *newhost*

   For example::

     # ceph orch host add host2
     # ceph orch host add host3

Deploy additional monitors (optional)
=====================================

A typical Ceph cluster has three or five monitor daemons spread
across different hosts. We recommend deploying five
monitors if there are five or more nodes in your cluster.

.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation

When Ceph knows what IP subnet the monitors should use, it can automatically
deploy and scale monitors as the cluster grows (or contracts). By default,
Ceph assumes that other monitors should use the same subnet as the first
monitor's IP.

If your Ceph monitors (or the entire cluster) live on a single subnet,
then by default cephadm automatically adds up to 5 monitors as you add new
hosts to the cluster. No further steps are necessary.

* If there is a specific IP subnet that should be used by monitors, you
  can configure that in `CIDR`_ format (e.g., ``10.1.2.0/24``) with::

    # ceph config set mon public_network *<mon-cidr-network>*

  For example::

    # ceph config set mon public_network 10.1.2.0/24

  Cephadm deploys new monitor daemons only on hosts that have IP
  addresses in the specified subnet.

* If you want to adjust the default of 5 monitors::

    # ceph orch apply mon *<number-of-monitors>*

* To deploy monitors on a specific set of hosts::

    # ceph orch apply mon *<host1,host2,host3,...>*

  Be sure to include the first (bootstrap) host in this list.

* You can control which hosts the monitors run on by making use of
  host labels. To set the ``mon`` label to the appropriate
  hosts::

    # ceph orch host label add *<hostname>* mon

  To view the current hosts and labels::

    # ceph orch host ls

  For example::

    # ceph orch host label add host1 mon
    # ceph orch host label add host2 mon
    # ceph orch host label add host3 mon
    # ceph orch host ls
    HOST   ADDR   LABELS  STATUS
    host1         mon
    host2         mon
    host3         mon
    host4
    host5

  Tell cephadm to deploy monitors based on the label::

    # ceph orch apply mon label:mon

* You can explicitly specify the IP address or CIDR network for each monitor
  and control where it is placed. To disable automated monitor deployment::

    # ceph orch apply mon --unmanaged

  To deploy each additional monitor::

    # ceph orch daemon add mon *<host1:ip-or-network1> [<host2:ip-or-network-2>...]*

  For example, to deploy a second monitor on ``newhost1`` using an IP
  address ``10.1.2.123`` and a third monitor on ``newhost2`` in
  network ``10.1.2.0/24``::

    # ceph orch apply mon --unmanaged
    # ceph orch daemon add mon newhost1:10.1.2.123
    # ceph orch daemon add mon newhost2:10.1.2.0/24


Deploy OSDs
===========

An inventory of storage devices on all cluster hosts can be displayed with::

  # ceph orch device ls

A storage device is considered *available* if all of the following
conditions are met:

* The device must have no partitions.
* The device must not have any LVM state.
* The device must not be mounted.
* The device must not contain a file system.
* The device must not contain a Ceph BlueStore OSD.
* The device must be larger than 5 GB.

Ceph refuses to provision an OSD on a device that is not available.

There are a few ways to create new OSDs:

* Tell Ceph to consume any available and unused storage device::

    # ceph orch apply osd --all-available-devices

* Create an OSD from a specific device on a specific host::

    # ceph orch daemon add osd *<host>*:*<device-path>*

  For example::

    # ceph orch daemon add osd host1:/dev/sdb

* Use :ref:`drivegroups` to describe device(s) to consume
  based on their properties, such as device type (SSD or HDD), device
  model names, size, or the hosts on which the devices exist::

    # ceph orch apply osd -i spec.yml

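  As a rough sketch, a ``spec.yml`` along the following lines would tell
  cephadm to use all rotational (HDD) devices on every host as OSD data
  devices; the service id, host pattern, and device filter shown here are
  only illustrative, and the full set of fields is described in
  :ref:`drivegroups`::

    service_type: osd
    service_id: example_drive_group
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1
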
Deploy MDSs
===========

One or more MDS daemons are required to use the CephFS file system.
These are created automatically if the newer ``ceph fs volume``
interface is used to create a new file system. For more information,
see :ref:`fs-volumes-and-subvolumes`.

To deploy metadata servers::

  # ceph orch apply mds *<fs-name>* --placement="*<num-daemons>* [*<host1>* ...]"

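For example, to deploy three MDS daemons for a file system named *cephfs*
across three hosts (the file system and host names here are only
placeholders)::

  # ceph orch apply mds cephfs --placement="3 host1 host2 host3"
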
See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.

Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
particular *realm* and *zone*. (For more information about realms and
zones, see :ref:`multisite`.)

Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a ``ceph.conf`` file or the command
line. If that configuration isn't already in place (usually in the
``client.rgw.<realmname>.<zonename>`` section), then the radosgw
daemons will start up with default settings (e.g., binding to port 80).

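For example, a setting such as the listening port can be placed in the
monitor configuration database before the daemons are deployed; the section
name below simply follows the ``client.rgw.<realmname>.<zonename>`` pattern
for an assumed *myorg* realm and *us-east-1* zone::

  # ceph config set client.rgw.myorg.us-east-1 rgw_frontends "beast port=8080"
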
If a realm has not been created yet, first create a realm::

  # radosgw-admin realm create --rgw-realm=<realm-name> --default

Next, create a new zonegroup::

  # radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default

Next, create a zone::

  # radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default

To deploy a set of radosgw daemons for a particular realm and zone::

  # ceph orch apply rgw *<realm-name>* *<zone-name>* --placement="*<num-daemons>* [*<host1>* ...]"

For example, to deploy 2 rgw daemons serving the *myorg* realm and the *us-east-1*
zone on *myhost1* and *myhost2*::

  # radosgw-admin realm create --rgw-realm=myorg --default
  # radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
  # radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=us-east-1 --master --default
  # ceph orch apply rgw myorg us-east-1 --placement="2 myhost1 myhost2"

See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.

Deploying NFS Ganesha
=====================

Cephadm deploys NFS Ganesha using a pre-defined RADOS *pool*
and an optional *namespace*.

To deploy an NFS Ganesha gateway::

  # ceph orch apply nfs *<svc_id>* *<pool>* *<namespace>* --placement="*<num-daemons>* [*<host1>* ...]"

For example, to deploy NFS with a service id of *foo* that uses the
RADOS pool *nfs-ganesha* and namespace *nfs-ns*::

  # ceph orch apply nfs foo nfs-ganesha nfs-ns

See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.