============================
Deploying a new Ceph cluster
============================

Cephadm creates a new Ceph cluster by "bootstrapping" on a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.

.. highlight:: console

.. _cephadm-host-requirements:

Requirements
============

- Python 3
- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices

Any modern Linux distribution should be sufficient. Dependencies
are installed automatically by the bootstrap process below.
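
The list above can be checked by hand before bootstrapping. A minimal
preflight sketch (the ``check_cmd`` helper is hypothetical, and cephadm
performs its own checks during bootstrap anyway):

```shell
#!/bin/sh
# Preflight sketch (not part of cephadm): report whether the prerequisites
# listed above are on PATH. This is only a quick manual survey.
check_cmd() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found: $1"
    else
        echo "missing: $1"
    fi
}

# Python 3, systemd, LVM2, a time-sync daemon, and a container engine
# (either podman or docker is sufficient):
for cmd in python3 systemctl lvm chronyd podman docker; do
    check_cmd "$cmd"
done
```

A "missing" line for one of ``podman``/``docker`` or for ``chronyd`` is fine
as long as an alternative from the requirements list is present.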

See the section :ref:`Compatibility With Podman
Versions<cephadm-compatibility-with-podman>` for a table of Ceph versions that
are compatible with Podman. Not every version of Podman is compatible with
Ceph.


.. _get-cephadm:

Install cephadm
===============

The ``cephadm`` command can:

#. bootstrap a new cluster
#. launch a containerized shell with a working Ceph CLI
#. aid in debugging containerized Ceph daemons

There are two ways to install ``cephadm``:

#. a :ref:`curl-based installation<cephadm_install_curl>` method
#. :ref:`distribution-specific installation methods<cephadm_install_distros>`

.. _cephadm_install_curl:

curl-based installation
-----------------------

* Use ``curl`` to fetch the most recent version of the
  standalone script:

  .. prompt:: bash #
     :substitutions:

     curl --silent --remote-name --location https://github.com/ceph/ceph/raw/|stable-release|/src/cephadm/cephadm

  Make the ``cephadm`` script executable:

  .. prompt:: bash #

     chmod +x cephadm

  This script can be run directly from the current directory:

  .. prompt:: bash #

     ./cephadm <arguments...>

* Although the standalone script is sufficient to get a cluster started, it is
  convenient to have the ``cephadm`` command installed on the host. To install
  the packages that provide the ``cephadm`` command, run the following
  commands:

  .. prompt:: bash #
     :substitutions:

     ./cephadm add-repo --release |stable-release|
     ./cephadm install

  Confirm that ``cephadm`` is now in your PATH by running ``which``:

  .. prompt:: bash #

     which cephadm

  A successful ``which cephadm`` command will return this:

  .. code-block:: bash

     /usr/sbin/cephadm
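
Since the curl step above silently saves whatever the server returns, a small
guard before first use can catch a bad download. A sketch (the
``looks_like_script`` helper is hypothetical, not part of cephadm):

```shell
#!/bin/sh
# Hypothetical guard (not a cephadm feature): confirm the downloaded file
# begins with a shebang before executing it, so an HTML error page saved
# under the name "cephadm" is caught early.
looks_like_script() {
    head -c 2 "$1" 2>/dev/null | grep -q '^#!'
}

# Usage after the curl step above:
#   looks_like_script ./cephadm && ./cephadm <arguments...>
```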

.. _cephadm_install_distros:

distribution-specific installations
-----------------------------------

.. important:: The methods of installing ``cephadm`` in this section are
   distinct from the curl-based method above. Use either the curl-based
   method above or one of the methods in this section, but not both.

Some Linux distributions may already include up-to-date Ceph packages. In
that case, you can install cephadm directly. For example:

  In Ubuntu:

  .. prompt:: bash #

     apt install -y cephadm

  In Fedora:

  .. prompt:: bash #

     dnf -y install cephadm

  In SUSE:

  .. prompt:: bash #

     zypper install -y cephadm

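
The per-distribution commands above differ only in the package manager. A
sketch that selects among them (the helper name and the use of
``/etc/os-release`` ``ID`` values are assumptions, not something cephadm
provides):

```shell
#!/bin/sh
# Hypothetical helper: map an /etc/os-release ID to the matching install
# command from the list above.
cephadm_install_cmd() {
    case "$1" in
        ubuntu|debian)  echo "apt install -y cephadm" ;;
        fedora)         echo "dnf -y install cephadm" ;;
        sles|opensuse*) echo "zypper install -y cephadm" ;;
        *)              echo "unknown distribution: $1" ;;
    esac
}

# Example, assuming /etc/os-release exists (it does on the distributions
# above):
#   . /etc/os-release && $(cephadm_install_cmd "$ID")
```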

Bootstrap a new cluster
=======================

What to know before you bootstrap
---------------------------------

The first step in creating a new Ceph cluster is running the ``cephadm
bootstrap`` command on the Ceph cluster's first host. Running ``cephadm
bootstrap`` on that host creates the Ceph cluster's first "monitor daemon",
and that monitor daemon needs an IP address. You must pass the IP address of
the Ceph cluster's first host to the ``cephadm bootstrap`` command, so you'll
need to know the IP address of that host.

.. note:: If there are multiple networks and interfaces, be sure to choose one
   that will be accessible by any host accessing the Ceph cluster.
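
One way to survey the candidate addresses for ``--mon-ip`` is to list the
host's global IPv4 addresses per interface. A sketch (the
``list_ipv4_addrs`` helper is hypothetical; it assumes the one-line ``-o``
output format of the ``ip`` tool):

```shell
#!/bin/sh
# Hypothetical helper: print "interface address" pairs from the one-line
# output of `ip -4 -o addr`.
list_ipv4_addrs() {
    awk '$3 == "inet" {print $2, $4}'
}

# Survey global IPv4 addresses (skipped quietly if `ip` is unavailable):
command -v ip >/dev/null 2>&1 \
    && ip -4 -o addr show scope global | list_ipv4_addrs \
    || true
```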

Running the bootstrap command
-----------------------------

Run the ``cephadm bootstrap`` command:

.. prompt:: bash #

   cephadm bootstrap --mon-ip *<mon-ip>*

This command will:

* Create a monitor and manager daemon for the new cluster on the local
  host.
* Generate a new SSH key for the Ceph cluster and add it to the root
  user's ``/root/.ssh/authorized_keys`` file.
* Write a copy of the public key to ``/etc/ceph/ceph.pub``.
* Write a minimal configuration file to ``/etc/ceph/ceph.conf``. This
  file is needed to communicate with the new cluster.
* Write a copy of the ``client.admin`` administrative (privileged!)
  secret key to ``/etc/ceph/ceph.client.admin.keyring``.
* Add the ``_admin`` label to the bootstrap host. By default, any host
  with this label will (also) get a copy of ``/etc/ceph/ceph.conf`` and
  ``/etc/ceph/ceph.client.admin.keyring``.
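
After bootstrap finishes, the files listed above can be verified by hand. A
sketch (the ``bootstrap_artifacts_present`` helper is hypothetical, not a
cephadm command):

```shell
#!/bin/sh
# Hypothetical post-bootstrap check: confirm the files listed above were
# written. Pass a different directory if you bootstrapped with --output-dir.
bootstrap_artifacts_present() {
    dir="${1:-/etc/ceph}"
    for f in ceph.conf ceph.pub ceph.client.admin.keyring; do
        [ -e "$dir/$f" ] || { echo "missing: $dir/$f"; return 1; }
    done
    echo "all bootstrap artifacts present in $dir"
}
```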

Further information about cephadm bootstrap
-------------------------------------------

The default bootstrap behavior will work for most users. But if you would
like to know more about ``cephadm bootstrap`` right away, read the list
below.

Also, you can run ``cephadm bootstrap -h`` to see all of ``cephadm``'s
available options.

* By default, Ceph daemons send their log output to stdout/stderr, which is
  picked up by the container runtime (docker or podman) and (on most systems)
  sent to journald. If you want Ceph to write traditional log files to
  ``/var/log/ceph/$fsid``, use the ``--log-to-file`` option during bootstrap.

* Larger Ceph clusters perform better when (external to the Ceph cluster)
  public network traffic is separated from (internal to the Ceph cluster)
  cluster traffic. The internal cluster traffic handles replication, recovery,
  and heartbeats between OSD daemons. You can define the :ref:`cluster
  network<cluster-network>` by supplying the ``--cluster-network`` option to
  the ``bootstrap`` subcommand. This parameter must define a subnet in CIDR
  notation (for example ``10.90.90.0/24`` or ``fe80::/64``).

* ``cephadm bootstrap`` writes to ``/etc/ceph`` the files needed to access
  the new cluster. This central location makes it possible for Ceph
  packages installed on the host (e.g., packages that give access to the
  cephadm command line interface) to find these files.

  Daemon containers deployed with cephadm, however, do not need
  ``/etc/ceph`` at all. Use the ``--output-dir *<directory>*`` option
  to put these files in a different directory (for example, ``.``). This
  may help avoid conflicts with an existing Ceph configuration (cephadm or
  otherwise) on the same host.

* You can pass any initial Ceph configuration options to the new
  cluster by putting them in a standard ini-style configuration file
  and using the ``--config *<config-file>*`` option. For example::

      $ cat <<EOF > initial-ceph.conf
      [global]
      osd crush chooseleaf type = 0
      EOF
      $ ./cephadm bootstrap --config initial-ceph.conf ...

* The ``--ssh-user *<user>*`` option makes it possible to choose which ssh
  user cephadm will use to connect to hosts. The associated ssh key will be
  added to ``/home/*<user>*/.ssh/authorized_keys``. The user that you
  designate with this option must have passwordless sudo access.

* If you are using a container image on an authenticated registry that
  requires login, you may add the three arguments:

  #. ``--registry-url <url of registry>``

  #. ``--registry-username <username of account on registry>``

  #. ``--registry-password <password of account on registry>``

  OR

  * ``--registry-json <json file with login info>``

  Cephadm will attempt to log in to this registry so it can pull your
  container image and then store the login info in its config database.
  Other hosts added to the cluster will then also be able to make use of
  the authenticated registry.
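
The ``--registry-json`` file is a small JSON document carrying the same three
pieces of information as the individual flags above. An illustrative sketch
(placeholder values; check your Ceph release's documentation for the exact
expected keys):

```json
{
    "url": "registry.example.com",
    "username": "myregistryusername",
    "password": "myregistrypassword"
}
```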
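
The ``--cluster-network`` option above takes CIDR notation, and a loose
sanity check can catch typos before bootstrap. A sketch (IPv4 only; the
helper is hypothetical and deliberately not a full validator):

```shell
#!/bin/sh
# Hypothetical helper: check that an argument at least looks like IPv4
# CIDR notation (e.g. 10.90.90.0/24). IPv6 subnets such as fe80::/64 are
# not covered by this sketch.
looks_like_ipv4_cidr() {
    echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$'
}
```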

.. _cephadm-enable-cli:

Enable Ceph CLI
===============

Cephadm does not require any Ceph packages to be installed on the
host. However, we recommend enabling easy access to the ``ceph``
command. There are several ways to do this:

* The ``cephadm shell`` command launches a bash shell in a container
  with all of the Ceph packages installed. By default, if
  configuration and keyring files are found in ``/etc/ceph`` on the
  host, they are passed into the container environment so that the
  shell is fully functional. Note that when executed on a MON host,
  ``cephadm shell`` will infer the ``config`` from the MON container
  instead of using the default configuration. If ``--mount <path>``
  is given, then the host ``<path>`` (file or directory) will appear
  under ``/mnt`` inside the container:

  .. prompt:: bash #

     cephadm shell

* To execute ``ceph`` commands, you can also run commands like this:

  .. prompt:: bash #

     cephadm shell -- ceph -s

* You can install the ``ceph-common`` package, which contains all of the
  ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
  CephFS file systems), etc.:

  .. prompt:: bash #
     :substitutions:

     cephadm add-repo --release |stable-release|
     cephadm install ceph-common
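
If you prefer not to install ``ceph-common`` but still want plain ``ceph``
invocations, a shell wrapper around ``cephadm shell`` is one option. A sketch
(this alias-style function is a local convenience, not a cephadm feature):

```shell
#!/bin/sh
# Local convenience sketch: make plain `ceph ...` run through the
# containerized shell described above.
ceph() {
    cephadm shell -- ceph "$@"
}

# After defining it, `ceph -s` behaves like `cephadm shell -- ceph -s`.
```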

Confirm that the ``ceph`` command is accessible with:

.. prompt:: bash #

   ceph -v

Confirm that the ``ceph`` command can connect to the cluster and report
its status with:

.. prompt:: bash #

   ceph status

Adding Hosts
============

Next, add all hosts to the cluster by following :ref:`cephadm-adding-hosts`.

By default, a ``ceph.conf`` file and a copy of the ``client.admin`` keyring
are maintained in ``/etc/ceph`` on all hosts with the ``_admin`` label, which
is initially applied only to the bootstrap host. We usually recommend that
one or more other hosts be given the ``_admin`` label so that the Ceph CLI
(e.g., via ``cephadm shell``) is easily accessible on multiple hosts. To add
the ``_admin`` label to additional hosts, run:

.. prompt:: bash #

   ceph orch host label add *<host>* _admin
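
To label several hosts at once, the command above can be generated per host.
A sketch (the host names and the ``admin_label_cmds`` helper are
hypothetical, not part of the orchestrator CLI):

```shell
#!/bin/sh
# Hypothetical helper: emit one labelling command per host name given.
admin_label_cmds() {
    for host in "$@"; do
        echo "ceph orch host label add $host _admin"
    done
}

# Review the generated commands, then pipe them to a shell to apply:
#   admin_label_cmds host2 host3 | sh
```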

Adding additional MONs
======================

A typical Ceph cluster has three or five monitor daemons spread
across different hosts. We recommend deploying five
monitors if there are five or more nodes in your cluster.

Please follow :ref:`deploy_additional_monitors` to deploy additional MONs.

Adding Storage
==============

To add storage to the cluster, either tell Ceph to consume any
available and unused device:

.. prompt:: bash #

   ceph orch apply osd --all-available-devices

or see :ref:`cephadm-deploy-osds` for more detailed instructions.

Using Ceph
==========

To use the *Ceph Filesystem*, follow :ref:`orchestrator-cli-cephfs`.

To use the *Ceph Object Gateway*, follow :ref:`cephadm-deploy-rgw`.

To use *NFS*, follow :ref:`deploy-cephadm-nfs-ganesha`.

To use *iSCSI*, follow :ref:`cephadm-iscsi`.

.. _cluster network: ../rados/configuration/network-config-ref#cluster-network