1 ============================
2 Deploying a new Ceph cluster
3 ============================
5 Cephadm creates a new Ceph cluster by "bootstrapping" on a single
6 host, expanding the cluster to encompass any additional hosts, and
7 then deploying the needed services.
Requirements
============

- Podman or Docker for running containers
16 - Time synchronization (such as chrony or NTP)
17 - LVM2 for provisioning storage devices
19 Any modern Linux distribution should be sufficient. Dependencies
20 are installed automatically by the bootstrap process below.
Install cephadm
===============

The ``cephadm`` command can (1) bootstrap a new cluster, (2)
28 launch a containerized shell with a working Ceph CLI, and (3) aid in
29 debugging containerized Ceph daemons.
31 There are a few ways to install cephadm:
* Use ``curl`` to fetch the most recent version of the
standalone script::
36 # curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
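Make the script executable::

# chmod +x cephadm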
39 This script can be run directly from the current directory with::
41 # ./cephadm <arguments...>
43 * Although the standalone script is sufficient to get a cluster started, it is
44 convenient to have the ``cephadm`` command installed on the host. To install
45 these packages for the current Octopus release::
# ./cephadm add-repo --release octopus
# ./cephadm install
50 Confirm that ``cephadm`` is now in your PATH with::
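# which cephadm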
54 * Some commercial Linux distributions (e.g., RHEL, SLE) may already
55 include up-to-date Ceph packages. In that case, you can install
56 cephadm directly. For example::
58 # dnf install -y cephadm # or
59 # zypper install -y cephadm
63 Bootstrap a new cluster
64 =======================
66 You need to know which *IP address* to use for the cluster's first
67 monitor daemon. This is normally just the IP for the first host. If there
68 are multiple networks and interfaces, be sure to choose one that will
69 be accessible by any host accessing the Ceph cluster.
71 To bootstrap the cluster::
74 # cephadm bootstrap --mon-ip *<mon-ip>*
This command will:

* Create a monitor and manager daemon for the new cluster on the local
host.
* Generate a new SSH key for the Ceph cluster and add it to the root
81 user's ``/root/.ssh/authorized_keys`` file.
82 * Write a minimal configuration file needed to communicate with the
83 new cluster to ``/etc/ceph/ceph.conf``.
84 * Write a copy of the ``client.admin`` administrative (privileged!)
85 secret key to ``/etc/ceph/ceph.client.admin.keyring``.
86 * Write a copy of the public key to
87 ``/etc/ceph/ceph.pub``.
89 The default bootstrap behavior will work for the vast majority of
90 users. See below for a few options that may be useful for some users,
91 or run ``cephadm bootstrap -h`` to see all available options:
93 * Bootstrap writes the files needed to access the new cluster to
94 ``/etc/ceph`` for convenience, so that any Ceph packages installed
on the host itself (e.g., to access the command line interface) can
easily find them.
98 Daemon containers deployed with cephadm, however, do not need
99 ``/etc/ceph`` at all. Use the ``--output-dir *<directory>*`` option
100 to put them in a different directory (like ``.``), avoiding any
101 potential conflicts with existing Ceph configuration (cephadm or
102 otherwise) on the same host.
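For example, a sketch that writes these files to the current directory
instead (the rest of the bootstrap invocation is unchanged)::

# cephadm bootstrap --mon-ip *<mon-ip>* --output-dir .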
104 * You can pass any initial Ceph configuration options to the new
105 cluster by putting them in a standard ini-style configuration file
106 and using the ``--config *<config-file>*`` option.
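For example, a minimal sketch (the file name ``initial-ceph.conf`` and the
option shown are illustrative) might look like::

[global]
osd_pool_default_size = 3

and be passed to bootstrap with::

# cephadm bootstrap --mon-ip *<mon-ip>* --config initial-ceph.conf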
Enable Ceph CLI
===============

Cephadm does not require any Ceph packages to be installed on the
host. However, we recommend enabling easy access to the ``ceph``
command. There are several ways to do this:
116 * The ``cephadm shell`` command launches a bash shell in a container
117 with all of the Ceph packages installed. By default, if
118 configuration and keyring files are found in ``/etc/ceph`` on the
119 host, they are passed into the container environment so that the
120 shell is fully functional::
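# cephadm shell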
124 * It may be helpful to create an alias::
126 # alias ceph='cephadm shell -- ceph'
128 * You can install the ``ceph-common`` package, which contains all of the
129 ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
130 CephFS file systems), etc.::
132 # cephadm add-repo --release octopus
133 # cephadm install ceph-common
135 Confirm that the ``ceph`` command is accessible with::
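# ceph -v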
Confirm that the ``ceph`` command can connect to the cluster and also
report its status with::
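# ceph status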
145 Add hosts to the cluster
146 ========================
148 To add each new host to the cluster, perform two steps:
150 #. Install the cluster's public SSH key in the new host's root user's
151 ``authorized_keys`` file::
153 # ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*
For example::

# ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
158 # ssh-copy-id -f -i /etc/ceph/ceph.pub root@host3
160 #. Tell Ceph that the new node is part of the cluster::
162 # ceph orch host add *newhost*
For example::

# ceph orch host add host2
167 # ceph orch host add host3
170 Deploy additional monitors (optional)
171 =====================================
173 A typical Ceph cluster has three or five monitor daemons spread
174 across different hosts. We recommend deploying five
175 monitors if there are five or more nodes in your cluster.
177 .. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation
When Ceph knows what IP subnet the monitors should use, it can automatically
deploy and scale monitors as the cluster grows (or contracts). By default,
Ceph assumes that other monitors should use the same subnet as the first
monitor's IP.
184 If your Ceph monitors (or the entire cluster) live on a single subnet,
185 then by default cephadm automatically adds up to 5 monitors as you add new
186 hosts to the cluster. No further steps are necessary.
188 * If there is a specific IP subnet that should be used by monitors, you
189 can configure that in `CIDR`_ format (e.g., ``10.1.2.0/24``) with::
191 # ceph config set mon public_network *<mon-cidr-network>*
For example::

# ceph config set mon public_network 10.1.2.0/24
Cephadm only deploys new monitor daemons on hosts that have IPs
in the configured subnet.
200 * If you want to adjust the default of 5 monitors::
202 # ceph orch apply mon *<number-of-monitors>*
204 * To deploy monitors on a specific set of hosts::
206 # ceph orch apply mon *<host1,host2,host3,...>*
208 Be sure to include the first (bootstrap) host in this list.
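For example, assuming hosts named *host1*, *host2*, and *host3*, with
*host1* the bootstrap host (the hostnames are illustrative)::

# ceph orch apply mon host1,host2,host3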
210 * You can control which hosts the monitors run on by making use of
host labels. To set the ``mon`` label to the appropriate hosts::
214 # ceph orch host label add *<hostname>* mon
216 To view the current hosts and labels::
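# ceph orch host ls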
For example::

# ceph orch host label add host1 mon
223 # ceph orch host label add host2 mon
224 # ceph orch host label add host3 mon
# ceph orch host ls

HOST   ADDR   LABELS  STATUS
host1  host1  mon
host2  host2  mon
host3  host3  mon
233 Tell cephadm to deploy monitors based on the label::
235 # ceph orch apply mon label:mon
237 * You can explicitly specify the IP address or CIDR network for each monitor
238 and control where it is placed. To disable automated monitor deployment::
240 # ceph orch apply mon --unmanaged
242 To deploy each additional monitor::
# ceph orch daemon add mon *<host1:ip-or-network1> [<host2:ip-or-network2>...]*
246 For example, to deploy a second monitor on ``newhost1`` using an IP
247 address ``10.1.2.123`` and a third monitor on ``newhost2`` in
248 network ``10.1.2.0/24``::
250 # ceph orch apply mon --unmanaged
251 # ceph orch daemon add mon newhost1:10.1.2.123
252 # ceph orch daemon add mon newhost2:10.1.2.0/24
Deploy OSDs
===========

An inventory of storage devices on all cluster hosts can be displayed with::
260 # ceph orch device ls
A storage device is considered *available* if all of the following
conditions are met:
265 * The device must have no partitions.
266 * The device must not have any LVM state.
267 * The device must not be mounted.
268 * The device must not contain a file system.
269 * The device must not contain a Ceph BlueStore OSD.
270 * The device must be larger than 5 GB.
272 Ceph refuses to provision an OSD on a device that is not available.
274 There are a few ways to create new OSDs:
276 * Tell Ceph to consume any available and unused storage device::
278 # ceph orch apply osd --all-available-devices
280 * Create an OSD from a specific device on a specific host::
282 # ceph orch daemon add osd *<host>*:*<device-path>*
For example::

# ceph orch daemon add osd host1:/dev/sdb
288 * Use :ref:`drivegroups` to describe device(s) to consume
based on their properties, such as device type (SSD or HDD), device
290 model names, size, or the hosts on which the devices exist::
292 # ceph orch apply osd -i spec.yml
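For example, a minimal ``spec.yml`` sketch (the service id and device filter
are illustrative) that consumes every rotational (HDD) device on every host
might look like::

service_type: osd
service_id: default_drive_group
placement:
  host_pattern: '*'
data_devices:
  rotational: 1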
Deploy MDSs
===========

One or more MDS daemons are required to use the CephFS file system.
299 These are created automatically if the newer ``ceph fs volume``
300 interface is used to create a new file system. For more information,
301 see :ref:`fs-volumes-and-subvolumes`.
303 To deploy metadata servers::
305 # ceph orch apply mds *<fs-name>* --placement="*<num-daemons>* [*<host1>* ...]"
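For example, to deploy three MDS daemons for a file system named *myfs* on
*host1* through *host3* (the names are illustrative)::

# ceph orch apply mds myfs --placement="3 host1 host2 host3"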
307 See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.
Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
313 particular *realm* and *zone*. (For more information about realms and
314 zones, see :ref:`multisite`.)
Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a ``ceph.conf`` or the command line. If
that configuration isn't already in place (usually in the
``client.rgw.<realmname>.<zonename>`` section), then the radosgw
daemons will start up with default settings (e.g., binding to port 80).
323 If a realm has not been created yet, first create a realm::
325 # radosgw-admin realm create --rgw-realm=<realm-name> --default
327 Next create a new zonegroup::
329 # radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default
Next create a zone::

# radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default
335 To deploy a set of radosgw daemons for a particular realm and zone::
337 # ceph orch apply rgw *<realm-name>* *<zone-name>* --placement="*<num-daemons>* [*<host1>* ...]"
339 For example, to deploy 2 rgw daemons serving the *myorg* realm and the *us-east-1*
340 zone on *myhost1* and *myhost2*::
342 # radosgw-admin realm create --rgw-realm=myorg --default
343 # radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
344 # radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=us-east-1 --master --default
345 # ceph orch apply rgw myorg us-east-1 --placement="2 myhost1 myhost2"
347 See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.
349 Deploying NFS ganesha
350 =====================
352 Cephadm deploys NFS Ganesha using a pre-defined RADOS *pool*
and optional *namespace*.
To deploy an NFS Ganesha gateway::
357 # ceph orch apply nfs *<svc_id>* *<pool>* *<namespace>* --placement="*<num-daemons>* [*<host1>* ...]"
For example, to deploy NFS with a service id of *foo* that will use the
RADOS pool *nfs-ganesha* and namespace *nfs-ns*::
362 # ceph orch apply nfs foo nfs-ganesha nfs-ns
364 See :ref:`orchestrator-cli-placement-spec` for details of the placement specification.