============================
Deploying a new Ceph cluster
============================

Cephadm creates a new Ceph cluster by "bootstrapping" on a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.

.. highlight:: console


.. _cephadm-host-requirements:

Requirements
============

- Python 3
- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices

Any modern Linux distribution should be sufficient. Dependencies
are installed automatically by the bootstrap process below.

See the section :ref:`Compatibility With Podman
Versions<cephadm-compatibility-with-podman>` for a table of Ceph versions that
are compatible with Podman. Not every version of Podman is compatible with
Ceph.


.. _get-cephadm:

Install cephadm
===============

The ``cephadm`` command can:

#. bootstrap a new cluster
#. launch a containerized shell with a working Ceph CLI
#. aid in debugging containerized Ceph daemons

There are two ways to install ``cephadm``:

#. a :ref:`curl-based installation<cephadm_install_curl>` method
#. :ref:`distribution-specific installation methods<cephadm_install_distros>`

.. _cephadm_install_curl:

curl-based installation
-----------------------

* Use ``curl`` to fetch the most recent version of the
  standalone script.

  .. prompt:: bash #
     :substitutions:

     curl --silent --remote-name --location https://github.com/ceph/ceph/raw/|stable-release|/src/cephadm/cephadm

  Make the ``cephadm`` script executable:

  .. prompt:: bash #

     chmod +x cephadm

  This script can be run directly from the current directory:

  .. prompt:: bash #

     ./cephadm <arguments...>

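  As a quick check that the downloaded script runs on your host, you can,
  for example, ask it for its version (a minimal sketch; the output depends
  on the release you fetched):

  .. prompt:: bash #

     ./cephadm version
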
* Although the standalone script is sufficient to get a cluster started, it is
  convenient to have the ``cephadm`` command installed on the host. To install
  the packages that provide the ``cephadm`` command, run the following
  commands:

  .. prompt:: bash #
     :substitutions:

     ./cephadm add-repo --release |stable-release|
     ./cephadm install

  Confirm that ``cephadm`` is now in your PATH by running ``which``:

  .. prompt:: bash #

     which cephadm

  A successful ``which cephadm`` command will return this:

  .. code-block:: bash

     /usr/sbin/cephadm

.. _cephadm_install_distros:

distribution-specific installations
-----------------------------------

.. important:: The methods of installing ``cephadm`` in this section are
   distinct from the curl-based method above. Use either the curl-based
   method or one of these distribution-specific methods, but not both.

Some Linux distributions may already include up-to-date Ceph packages. In
that case, you can install cephadm directly. For example:

  On Ubuntu:

  .. prompt:: bash #

     apt install -y cephadm

  On Fedora:

  .. prompt:: bash #

     dnf -y install cephadm

  On SUSE:

  .. prompt:: bash #

     zypper install -y cephadm


Bootstrap a new cluster
=======================

What to know before you bootstrap
---------------------------------

The first step in creating a new Ceph cluster is running the ``cephadm
bootstrap`` command on the Ceph cluster's first host. Running this command
creates the cluster's first "monitor daemon", and that monitor daemon needs
an IP address. You must pass the IP address of the Ceph cluster's first host
to the ``cephadm bootstrap`` command, so you'll need to know the IP address
of that host.

.. note:: If there are multiple networks and interfaces, be sure to choose one
   that will be accessible by any host accessing the Ceph cluster.

Running the bootstrap command
-----------------------------

Run the ``cephadm bootstrap`` command:

.. prompt:: bash #

   cephadm bootstrap --mon-ip *<mon-ip>*

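For example, if the first host's address is ``192.168.0.10`` (a hypothetical
address used here only for illustration), the command becomes:

.. prompt:: bash #

   cephadm bootstrap --mon-ip 192.168.0.10
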
This command will:

* Create a monitor and manager daemon for the new cluster on the local
  host.
* Generate a new SSH key for the Ceph cluster and add it to the root
  user's ``/root/.ssh/authorized_keys`` file.
* Write a copy of the public key to ``/etc/ceph/ceph.pub``.
* Write a minimal configuration file to ``/etc/ceph/ceph.conf``. This
  file is needed to communicate with the new cluster.
* Write a copy of the ``client.admin`` administrative (privileged!)
  secret key to ``/etc/ceph/ceph.client.admin.keyring``.
* Add the ``_admin`` label to the bootstrap host. By default, any host
  with this label will (also) get a copy of ``/etc/ceph/ceph.conf`` and
  ``/etc/ceph/ceph.client.admin.keyring``.

Further information about cephadm bootstrap
-------------------------------------------

The default bootstrap behavior will work for most users. But if you'd like
to know more about ``cephadm bootstrap`` right away, read the list below.

Also, you can run ``cephadm bootstrap -h`` to see all of ``cephadm``'s
available options.

* By default, Ceph daemons send their log output to stdout/stderr, which is
  picked up by the container runtime (docker or podman) and (on most systems)
  sent to journald. If you want Ceph to write traditional log files to
  ``/var/log/ceph/$fsid``, use the ``--log-to-file`` option during bootstrap.

* Larger Ceph clusters perform better when (external to the Ceph cluster)
  public network traffic is separated from (internal to the Ceph cluster)
  cluster traffic. The internal cluster traffic handles replication, recovery,
  and heartbeats between OSD daemons. You can define the :ref:`cluster
  network<cluster-network>` by supplying the ``--cluster-network`` option to
  the ``bootstrap`` subcommand. This parameter must define a subnet in CIDR
  notation (for example ``10.90.90.0/24`` or ``fe80::/64``).
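
  As a sketch, bootstrapping with a separate cluster network on the example
  subnet above might look like this:

  .. prompt:: bash #

     cephadm bootstrap --mon-ip *<mon-ip>* --cluster-network 10.90.90.0/24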

* ``cephadm bootstrap`` writes to ``/etc/ceph`` the files needed to access
  the new cluster. This central location makes it possible for Ceph
  packages installed on the host (e.g., packages that give access to the
  cephadm command line interface) to find these files.

  Daemon containers deployed with cephadm, however, do not need
  ``/etc/ceph`` at all. Use the ``--output-dir *<directory>*`` option
  to put them in a different directory (for example, ``.``). This may help
  avoid conflicts with an existing Ceph configuration (cephadm or
  otherwise) on the same host.
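
  For instance (a sketch, using ``.`` as the output directory as suggested
  above):

  .. prompt:: bash #

     ./cephadm bootstrap --mon-ip *<mon-ip>* --output-dir .
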
* You can pass any initial Ceph configuration options to the new
  cluster by putting them in a standard ini-style configuration file
  and using the ``--config *<config-file>*`` option. For example::

      $ cat <<EOF > initial-ceph.conf
      [global]
      osd crush chooseleaf type = 0
      EOF
      $ ./cephadm bootstrap --config initial-ceph.conf ...

* The ``--ssh-user *<user>*`` option makes it possible to choose which ssh
  user cephadm will use to connect to hosts. The associated ssh key will be
  added to ``/home/*<user>*/.ssh/authorized_keys``. The user that you
  designate with this option must have passwordless sudo access.
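
  For example, with a hypothetical ``cephadmin`` user that has passwordless
  sudo on all hosts:

  .. prompt:: bash #

     cephadm bootstrap --mon-ip *<mon-ip>* --ssh-user cephadmin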

* If you are using a container image on an authenticated registry that
  requires login, you may add the argument:

  * ``--registry-json <path to json file>``

  Example contents of the JSON file with login info::

      {"url":"REGISTRY_URL", "username":"REGISTRY_USERNAME", "password":"REGISTRY_PASSWORD"}

  Cephadm will attempt to log in to this registry so it can pull your container
  and then store the login info in its config database. Other hosts added to
  the cluster will then also be able to make use of the authenticated registry.
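
  A bootstrap invocation using this argument might look like the following
  (the path to the JSON file is hypothetical):

  .. prompt:: bash #

     cephadm bootstrap --mon-ip *<mon-ip>* --registry-json /root/registry.json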

* See :ref:`cephadm-deployment-scenarios` for additional examples of using ``cephadm bootstrap``.

.. _cephadm-enable-cli:

Enable Ceph CLI
===============

Cephadm does not require any Ceph packages to be installed on the
host. However, we recommend enabling easy access to the ``ceph``
command. There are several ways to do this:

* The ``cephadm shell`` command launches a bash shell in a container
  with all of the Ceph packages installed. By default, if
  configuration and keyring files are found in ``/etc/ceph`` on the
  host, they are passed into the container environment so that the
  shell is fully functional. Note that when executed on a MON host,
  ``cephadm shell`` will infer the ``config`` from the MON container
  instead of using the default configuration. If ``--mount <path>``
  is given, then the host ``<path>`` (file or directory) will appear
  under ``/mnt`` inside the container:

  .. prompt:: bash #

     cephadm shell

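  For instance, to expose a hypothetical file from the host inside the
  container (per the behavior described above, it will show up under
  ``/mnt``):

  .. prompt:: bash #

     cephadm shell --mount /home/user/crush-map.txt
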
* To execute ``ceph`` commands, you can also run commands like this:

  .. prompt:: bash #

     cephadm shell -- ceph -s

* You can install the ``ceph-common`` package, which contains all of the
  ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
  CephFS file systems), etc.:

  .. prompt:: bash #
     :substitutions:

     cephadm add-repo --release |stable-release|
     cephadm install ceph-common

Confirm that the ``ceph`` command is accessible with:

.. prompt:: bash #

   ceph -v


Confirm that the ``ceph`` command can connect to the cluster and report
its status with:

.. prompt:: bash #

   ceph status

Adding Hosts
============

Next, add all hosts to the cluster by following :ref:`cephadm-adding-hosts`.

By default, a ``ceph.conf`` file and a copy of the ``client.admin`` keyring
are maintained in ``/etc/ceph`` on all hosts with the ``_admin`` label, which is initially
applied only to the bootstrap host. We usually recommend that one or more other hosts be
given the ``_admin`` label so that the Ceph CLI (e.g., via ``cephadm shell``) is easily
accessible on multiple hosts. To add the ``_admin`` label to additional hosts, run:

   .. prompt:: bash #

      ceph orch host label add *<host>* _admin

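   For example, to make the CLI available on a hypothetical second host named
   ``host2``:

   .. prompt:: bash #

      ceph orch host label add host2 _admin
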
Adding additional MONs
======================

A typical Ceph cluster has three or five monitor daemons spread
across different hosts. We recommend deploying five
monitors if there are five or more nodes in your cluster.

Please follow :ref:`deploy_additional_monitors` to deploy additional MONs.


Adding Storage
==============

To add storage to the cluster, you can tell Ceph to consume any
available and unused device:

   .. prompt:: bash #

      ceph orch apply osd --all-available-devices

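   To preview which devices cephadm regards as available and unused before
   consuming them, you can list them first (a quick check, not required):

   .. prompt:: bash #

      ceph orch device ls
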
See :ref:`cephadm-deploy-osds` for more detailed instructions.

Enabling OSD memory autotuning
------------------------------

.. warning:: By default, cephadm enables ``osd_memory_target_autotune`` on bootstrap, with ``mgr/cephadm/autotune_memory_target_ratio`` set to ``.7`` of total host memory.

See :ref:`osd_autotune`.

To deploy hyperconverged Ceph with TripleO, please refer to the TripleO documentation: `Scenario: Deploy Hyperconverged Ceph <https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/cephadm.html#scenario-deploy-hyperconverged-ceph>`_

In other cases, where the cluster hardware is not exclusively used by Ceph
(hyperconverged infrastructure), reduce Ceph's memory consumption like so:

   .. prompt:: bash #

      # hyperconverged only:
      ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2

Then enable memory autotuning:

   .. prompt:: bash #

      ceph config set osd osd_memory_target_autotune true


Using Ceph
==========

To use the *Ceph Filesystem*, follow :ref:`orchestrator-cli-cephfs`.

To use the *Ceph Object Gateway*, follow :ref:`cephadm-deploy-rgw`.

To use *NFS*, follow :ref:`deploy-cephadm-nfs-ganesha`.

To use *iSCSI*, follow :ref:`cephadm-iscsi`.

.. _cephadm-deployment-scenarios:

Different deployment scenarios
==============================

Single host
-----------

To configure a Ceph cluster to run on a single host, use the ``--single-host-defaults`` flag when bootstrapping. For use cases of this, see :ref:`one-node-cluster`.

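For example (a sketch; substitute the first host's monitor IP):

.. prompt:: bash #

   cephadm bootstrap --mon-ip *<mon-ip>* --single-host-defaults
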
The ``--single-host-defaults`` flag sets the following configuration options::

  global/osd_crush_chooseleaf_type = 0
  global/osd_pool_default_size = 2
  mgr/mgr_standby_modules = False

For more information on these options, see :ref:`one-node-cluster` and ``mgr_standby_modules`` in :ref:`mgr-administrator-guide`.

Deployment in an isolated environment
-------------------------------------

You can install Cephadm in an isolated environment by using a custom container registry. You can either configure Podman or Docker to use an insecure registry, or make the registry secure. Ensure your container image is inside the registry and that you have access to all hosts you wish to add to the cluster.

Run a local container registry:

.. prompt:: bash #

   podman run --privileged -d --name registry -p 5000:5000 -v /var/lib/registry:/var/lib/registry --restart=always registry:2

If you are using an insecure registry, configure Podman or Docker with the hostname and port where the registry is running.

.. note:: This configuration must be repeated on every host that accesses the local insecure registry.

Next, push your container image to your local registry.
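
A sketch of one way to do this, assuming the public Ceph image from
``quay.io/ceph/ceph`` (substitute the tag and registry hostname for your
environment):

.. prompt:: bash #

   podman pull quay.io/ceph/ceph:<tag>
   podman tag quay.io/ceph/ceph:<tag> <hostname>:5000/ceph/ceph
   podman push <hostname>:5000/ceph/ceph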

Then run bootstrap using the ``--image`` flag with your container image. For example:

.. prompt:: bash #

   cephadm --image *<hostname>*:5000/ceph/ceph bootstrap --mon-ip *<mon-ip>*


.. _cluster network: ../rados/configuration/network-config-ref#cluster-network