============================
Deploying a new Ceph cluster
============================

Cephadm creates a new Ceph cluster by "bootstrapping" on a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.

.. highlight:: console


.. _cephadm-host-requirements:

Requirements
============

- Python 3
- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices

Any modern Linux distribution should be sufficient. Dependencies
are installed automatically by the bootstrap process below.

See the section :ref:`Compatibility With Podman
Versions<cephadm-compatibility-with-podman>` for a table of Ceph versions that
are compatible with Podman. Not every version of Podman is compatible with
Ceph.


.. _get-cephadm:

Install cephadm
===============

The ``cephadm`` command can

#. bootstrap a new cluster
#. launch a containerized shell with a working Ceph CLI
#. aid in debugging containerized Ceph daemons

There are two ways to install ``cephadm``:

#. a :ref:`curl-based installation<cephadm_install_curl>` method
#. :ref:`distribution-specific installation methods<cephadm_install_distros>`

.. _cephadm_install_curl:

curl-based installation
-----------------------

* Use ``curl`` to fetch the most recent version of the
  standalone script.

  .. prompt:: bash #
     :substitutions:

     curl --silent --remote-name --location https://github.com/ceph/ceph/raw/|stable-release|/src/cephadm/cephadm

  Make the ``cephadm`` script executable:

  .. prompt:: bash #

     chmod +x cephadm

  This script can be run directly from the current directory:

  .. prompt:: bash #

     ./cephadm <arguments...>

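  As a quick sanity check, you can ask the standalone script for its version
  (``cephadm version`` reports the version of the default Ceph container
  image, which it may pull on first use):

  .. prompt:: bash #

     ./cephadm version
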
* Although the standalone script is sufficient to get a cluster started, it is
  convenient to have the ``cephadm`` command installed on the host. To install
  the packages that provide the ``cephadm`` command, run the following
  commands:

  .. prompt:: bash #
     :substitutions:

     ./cephadm add-repo --release |stable-release|
     ./cephadm install

  Confirm that ``cephadm`` is now in your PATH by running ``which``:

  .. prompt:: bash #

     which cephadm

  A successful ``which cephadm`` command will return this:

  .. code-block:: bash

     /usr/sbin/cephadm

.. _cephadm_install_distros:

distribution-specific installations
-----------------------------------

.. important:: The methods of installing ``cephadm`` in this section are
   distinct from the curl-based method above. Use either the curl-based
   method above or one of the methods in this section, but not both.

Some Linux distributions may already include up-to-date Ceph packages. In
that case, you can install cephadm directly. For example:

  In Ubuntu:

  .. prompt:: bash #

     apt install -y cephadm

  In Fedora:

  .. prompt:: bash #

     dnf -y install cephadm

  In SUSE:

  .. prompt:: bash #

     zypper install -y cephadm


Bootstrap a new cluster
=======================

What to know before you bootstrap
---------------------------------

The first step in creating a new Ceph cluster is running the ``cephadm
bootstrap`` command on the Ceph cluster's first host. Running this command
creates the cluster's first "monitor daemon", and that monitor daemon needs
an IP address. You must pass the IP address of the Ceph cluster's first host
to the ``cephadm bootstrap`` command, so you'll need to know the IP address
of that host.

.. note:: If there are multiple networks and interfaces, be sure to choose one
   that will be accessible by any host accessing the Ceph cluster.

Running the bootstrap command
-----------------------------

Run the ``cephadm bootstrap`` command:

.. prompt:: bash #

   cephadm bootstrap --mon-ip *<mon-ip>*

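For example, if the first host's IP address were ``10.1.2.3`` (a placeholder;
substitute the address of your own first host):

.. prompt:: bash #

   cephadm bootstrap --mon-ip 10.1.2.3
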
This command will:

* Create a monitor and manager daemon for the new cluster on the local
  host.
* Generate a new SSH key for the Ceph cluster and add it to the root
  user's ``/root/.ssh/authorized_keys`` file.
* Write a copy of the public key to ``/etc/ceph/ceph.pub``.
* Write a minimal configuration file to ``/etc/ceph/ceph.conf``. This
  file is needed to communicate with the new cluster.
* Write a copy of the ``client.admin`` administrative (privileged!)
  secret key to ``/etc/ceph/ceph.client.admin.keyring``.
* Add the ``_admin`` label to the bootstrap host. By default, any host
  with this label will (also) get a copy of ``/etc/ceph/ceph.conf`` and
  ``/etc/ceph/ceph.client.admin.keyring``.

Further information about cephadm bootstrap
-------------------------------------------

The default bootstrap behavior will work for most users. But if you'd like
to know more about ``cephadm bootstrap`` right away, read the list below.

Also, you can run ``cephadm bootstrap -h`` to see all of ``cephadm``'s
available options.

* Larger Ceph clusters perform better when (external to the Ceph cluster)
  public network traffic is separated from (internal to the Ceph cluster)
  cluster traffic. The internal cluster traffic handles replication, recovery,
  and heartbeats between OSD daemons. You can define the :ref:`cluster
  network<cluster-network>` by supplying the ``--cluster-network`` option to the ``bootstrap``
  subcommand. This parameter must define a subnet in CIDR notation (for example
  ``10.90.90.0/24`` or ``fe80::/64``).

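  For example (the subnet below is just an illustration; use your own cluster
  network's CIDR):

  .. prompt:: bash #

     cephadm bootstrap --mon-ip *<mon-ip>* --cluster-network 10.90.90.0/24
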
* ``cephadm bootstrap`` writes to ``/etc/ceph`` the files needed to access
  the new cluster. This central location makes it possible for Ceph
  packages installed on the host (e.g., packages that give access to the
  cephadm command line interface) to find these files.

  Daemon containers deployed with cephadm, however, do not need
  ``/etc/ceph`` at all. Use the ``--output-dir *<directory>*`` option
  to put them in a different directory (for example, ``.``). This may help
  avoid conflicts with an existing Ceph configuration (cephadm or
  otherwise) on the same host.

* You can pass any initial Ceph configuration options to the new
  cluster by putting them in a standard ini-style configuration file
  and using the ``--config *<config-file>*`` option. For example::

      $ cat <<EOF > initial-ceph.conf
      [global]
      osd crush chooseleaf type = 0
      EOF
      $ ./cephadm bootstrap --config initial-ceph.conf ...

* The ``--ssh-user *<user>*`` option makes it possible to choose which ssh
  user cephadm will use to connect to hosts. The associated ssh key will be
  added to ``/home/*<user>*/.ssh/authorized_keys``. The user that you
  designate with this option must have passwordless sudo access.

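  For example (``deploy-user`` is a hypothetical account that already has
  passwordless sudo on every host):

  .. prompt:: bash #

     cephadm bootstrap --mon-ip *<mon-ip>* --ssh-user deploy-user
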
* If you are using a container on an authenticated registry that requires
  login, you may add the three arguments:

  #. ``--registry-url <url of registry>``

  #. ``--registry-username <username of account on registry>``

  #. ``--registry-password <password of account on registry>``

  OR

  * ``--registry-json <json file with login info>``

  Cephadm will attempt to log in to this registry so it can pull your container
  and then store the login info in its config database. Other hosts added to
  the cluster will then also be able to make use of the authenticated registry.

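  A minimal sketch of the JSON file that ``--registry-json`` expects (the
  values are placeholders, and the key names mirror the three individual
  flags above)::

      {
        "url": "registry.example.com",
        "username": "myregistryusername",
        "password": "myregistrypassword"
      }
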
.. _cephadm-enable-cli:

Enable Ceph CLI
===============

Cephadm does not require any Ceph packages to be installed on the
host. However, we recommend enabling easy access to the ``ceph``
command. There are several ways to do this:

* The ``cephadm shell`` command launches a bash shell in a container
  with all of the Ceph packages installed. By default, if
  configuration and keyring files are found in ``/etc/ceph`` on the
  host, they are passed into the container environment so that the
  shell is fully functional. Note that when executed on a MON host,
  ``cephadm shell`` will infer the ``config`` from the MON container
  instead of using the default configuration. If ``--mount <path>``
  is given, then the host ``<path>`` (file or directory) will appear
  under ``/mnt`` inside the container:

  .. prompt:: bash #

     cephadm shell

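  For example, to make a host directory visible under ``/mnt`` inside the
  container (``/root/ceph-extra`` is just a placeholder path):

  .. prompt:: bash #

     cephadm shell --mount /root/ceph-extra
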
* To execute ``ceph`` commands, you can also run commands like this:

  .. prompt:: bash #

     cephadm shell -- ceph -s

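  If you run ``ceph`` this way often, a shell alias (an optional convenience,
  not something cephadm sets up for you) can shorten it:

  .. prompt:: bash #

     alias ceph='cephadm shell -- ceph'
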
* You can install the ``ceph-common`` package, which contains all of the
  ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
  CephFS file systems), etc.:

  .. prompt:: bash #
     :substitutions:

     cephadm add-repo --release |stable-release|
     cephadm install ceph-common

Confirm that the ``ceph`` command is accessible with:

.. prompt:: bash #

   ceph -v


Confirm that the ``ceph`` command can connect to the cluster and report the
cluster's status with:

.. prompt:: bash #

   ceph status

Adding Hosts
============

Next, add all hosts to the cluster by following :ref:`cephadm-adding-hosts`.

By default, a ``ceph.conf`` file and a copy of the ``client.admin`` keyring
are maintained in ``/etc/ceph`` on all hosts with the ``_admin`` label, which is initially
applied only to the bootstrap host. We usually recommend that one or more other hosts be
given the ``_admin`` label so that the Ceph CLI (e.g., via ``cephadm shell``) is easily
accessible on multiple hosts. To add the ``_admin`` label to additional hosts:

   .. prompt:: bash #

      ceph orch host label add *<host>* _admin

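For example, to label a hypothetical second host named ``host2``:

   .. prompt:: bash #

      ceph orch host label add host2 _admin
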
Adding additional MONs
======================

A typical Ceph cluster has three or five monitor daemons spread
across different hosts. We recommend deploying five
monitors if there are five or more nodes in your cluster.

Please follow :ref:`deploy_additional_monitors` to deploy additional MONs.

Adding Storage
==============

To add storage to the cluster, either tell Ceph to consume any
available and unused device:

   .. prompt:: bash #

      ceph orch apply osd --all-available-devices

or see :ref:`cephadm-deploy-osds` for more detailed instructions.

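Before consuming devices, you can review what cephadm considers available by
listing them (``ceph orch device ls`` is the orchestrator's device listing
command):

   .. prompt:: bash #

      ceph orch device ls
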
Using Ceph
==========

To use the *Ceph Filesystem*, follow :ref:`orchestrator-cli-cephfs`.

To use the *Ceph Object Gateway*, follow :ref:`cephadm-deploy-rgw`.

To use *NFS*, follow :ref:`deploy-cephadm-nfs-ganesha`.

To use *iSCSI*, follow :ref:`cephadm-iscsi`.


.. _cluster network: ../rados/configuration/network-config-ref#cluster-network