============================
Deploying a new Ceph cluster
============================

Cephadm creates a new Ceph cluster by "bootstrapping" on a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.

.. highlight:: console

.. _cephadm-host-requirements:

Requirements
============

- Python 3
- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices

Any modern Linux distribution should be sufficient. Dependencies
are installed automatically by the bootstrap process below.
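
For example, you can quickly verify that the major prerequisites are in
place on a host before bootstrapping. This is only an illustrative check
(package and command names vary by distribution):

.. prompt:: bash #

   python3 --version
   systemctl --version
   podman --version || docker --version
   chronyc tracking
   lvm version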

.. _get-cephadm:

Install cephadm
===============

The ``cephadm`` command can:

#. bootstrap a new cluster
#. launch a containerized shell with a working Ceph CLI
#. aid in debugging containerized Ceph daemons

There are two ways to install ``cephadm``:

#. a :ref:`curl-based installation<cephadm_install_curl>` method
#. :ref:`distribution-specific installation methods<cephadm_install_distros>`

.. _cephadm_install_curl:

curl-based installation
-----------------------

* Use ``curl`` to fetch the most recent version of the
  standalone script.

  .. prompt:: bash #
     :substitutions:

     curl --silent --remote-name --location https://github.com/ceph/ceph/raw/|stable-release|/src/cephadm/cephadm

  Make the ``cephadm`` script executable:

  .. prompt:: bash #

     chmod +x cephadm

  This script can be run directly from the current directory:

  .. prompt:: bash #

     ./cephadm <arguments...>
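
  For example, you can verify that the downloaded script runs by asking it
  for its version:

  .. prompt:: bash #

     ./cephadm version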

* Although the standalone script is sufficient to get a cluster started, it is
  convenient to have the ``cephadm`` command installed on the host. To install
  the packages that provide the ``cephadm`` command for the current stable
  release, run the following commands:

  .. prompt:: bash #
     :substitutions:

     ./cephadm add-repo --release |stable-release|
     ./cephadm install

  Confirm that ``cephadm`` is now in your PATH by running ``which``:

  .. prompt:: bash #

     which cephadm

  A successful ``which cephadm`` command will return this:

  .. code-block:: bash

     /usr/sbin/cephadm

.. _cephadm_install_distros:

distribution-specific installations
-----------------------------------

.. important:: The installation methods in this section are distinct from
   the curl-based method above. Use either the curl-based method or one of
   the distribution-specific methods in this section, but not both.

Some Linux distributions may already include up-to-date Ceph packages. In
that case, you can install cephadm directly. For example:

  In Ubuntu:

  .. prompt:: bash #

     apt install -y cephadm

  In Fedora:

  .. prompt:: bash #

     dnf -y install cephadm

  In SUSE:

  .. prompt:: bash #

     zypper install -y cephadm

Bootstrap a new cluster
=======================

What to know before you bootstrap
---------------------------------

The first step in creating a new Ceph cluster is running the ``cephadm
bootstrap`` command on the Ceph cluster's first host. Running this command
creates the cluster's first monitor daemon, and that monitor daemon needs
an IP address. You must pass the IP address of the first host to the
``cephadm bootstrap`` command, so you'll need to know that host's IP
address before you begin.

.. note:: If there are multiple networks and interfaces, be sure to choose one
   that will be accessible by any host accessing the Ceph cluster.

Running the bootstrap command
-----------------------------

Run the ``cephadm bootstrap`` command:

.. prompt:: bash #

   cephadm bootstrap --mon-ip *<mon-ip>*

This command will:

* Create a monitor and manager daemon for the new cluster on the local
  host.
* Generate a new SSH key for the Ceph cluster and add it to the root
  user's ``/root/.ssh/authorized_keys`` file.
* Write a minimal configuration file to ``/etc/ceph/ceph.conf``. This
  file is needed to communicate with the new cluster.
* Write a copy of the ``client.admin`` administrative (privileged!)
  secret key to ``/etc/ceph/ceph.client.admin.keyring``.
* Write a copy of the public key to ``/etc/ceph/ceph.pub``.
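
For example, if the first host's IP address is ``10.0.0.1`` (a
placeholder; substitute the actual address of your first host), the
bootstrap call would be:

.. prompt:: bash #

   cephadm bootstrap --mon-ip 10.0.0.1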

Further information about cephadm bootstrap
-------------------------------------------

The default bootstrap behavior will work for most users. But if you'd like
to know more about ``cephadm bootstrap`` right away, read the list below.

Also, you can run ``cephadm bootstrap -h`` to see all of ``cephadm``'s
available options.

* Larger Ceph clusters perform better when (external to the Ceph cluster)
  public network traffic is separated from (internal to the Ceph cluster)
  cluster traffic. The internal cluster traffic handles replication, recovery,
  and heartbeats between OSD daemons. You can define the :ref:`cluster
  network<cluster-network>` by supplying the ``--cluster-network`` option to
  the ``bootstrap`` subcommand. This parameter must define a subnet in CIDR
  notation (for example ``10.90.90.0/24`` or ``fe80::/64``).

* ``cephadm bootstrap`` writes to ``/etc/ceph`` the files needed to access
  the new cluster. This central location makes it possible for Ceph
  packages installed on the host (e.g., packages that give access to the
  cephadm command line interface) to find these files.

  Daemon containers deployed with cephadm, however, do not need
  ``/etc/ceph`` at all. Use the ``--output-dir *<directory>*`` option
  to put them in a different directory (for example, ``.``). This may help
  avoid conflicts with an existing Ceph configuration (cephadm or
  otherwise) on the same host.

* You can pass any initial Ceph configuration options to the new
  cluster by putting them in a standard ini-style configuration file
  and using the ``--config *<config-file>*`` option.
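
  A minimal sketch of such a file is shown below; ``osd_pool_default_size``
  is a real Ceph option, but the file and its value are only an example:

  .. code-block:: ini

     [global]
     # illustrative setting: keep three replicas of each object
     osd_pool_default_size = 3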

* The ``--ssh-user *<user>*`` option makes it possible to choose which ssh
  user cephadm will use to connect to hosts. The associated ssh key will be
  added to ``/home/*<user>*/.ssh/authorized_keys``. The user that you
  designate with this option must have passwordless sudo access.

* If you are using a container image from an authenticated registry that
  requires login, you may add the three arguments:

  #. ``--registry-url <url of registry>``

  #. ``--registry-username <username of account on registry>``

  #. ``--registry-password <password of account on registry>``

  OR

  * ``--registry-json <json file with login info>``

  Cephadm will attempt to log in to this registry so it can pull your
  container image and then store the login info in its config database.
  Other hosts added to the cluster will then also be able to make use of
  the authenticated registry.
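
  The registry JSON file carries the same three pieces of information in
  one place; a minimal sketch, with placeholder values, might look like
  this:

  .. code-block:: json

     {
       "url": "registry.example.com",
       "username": "myregistryuser",
       "password": "myregistrypass"
     }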
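
Putting several of these options together, a purely illustrative bootstrap
invocation might look like this (the IP address, subnet, and file name are
all placeholders):

.. prompt:: bash #

   cephadm bootstrap --mon-ip 10.0.0.1 --cluster-network 10.90.90.0/24 --config initial-ceph.conf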

.. _cephadm-enable-cli:

Enable Ceph CLI
===============

Cephadm does not require any Ceph packages to be installed on the
host. However, we recommend enabling easy access to the ``ceph``
command. There are several ways to do this:

* The ``cephadm shell`` command launches a bash shell in a container
  with all of the Ceph packages installed. By default, if
  configuration and keyring files are found in ``/etc/ceph`` on the
  host, they are passed into the container environment so that the
  shell is fully functional. Note that when executed on a MON host,
  ``cephadm shell`` will infer the ``config`` from the MON container
  instead of using the default configuration. If ``--mount <path>``
  is given, then the host ``<path>`` (file or directory) will appear
  under ``/mnt`` inside the container:

  .. prompt:: bash #

     cephadm shell
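
  For example, to make the host path ``/var/log/ceph`` (an illustrative
  path; any file or directory works) visible under ``/mnt`` inside the
  container:

  .. prompt:: bash #

     cephadm shell --mount /var/log/ceph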

* To execute ``ceph`` commands, you can also run commands like this:

  .. prompt:: bash #

     cephadm shell -- ceph -s

* You can install the ``ceph-common`` package, which contains all of the
  ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
  CephFS file systems), etc.:

  .. prompt:: bash #
     :substitutions:

     cephadm add-repo --release |stable-release|
     cephadm install ceph-common

Confirm that the ``ceph`` command is accessible with:

.. prompt:: bash #

   ceph -v

Confirm that the ``ceph`` command can connect to the cluster and retrieve
the cluster's status with:

.. prompt:: bash #

   ceph status

Adding Hosts
============

Next, add all hosts to the cluster by following :ref:`cephadm-adding-hosts`.
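
As a preview of what that page covers, adding a host typically means
installing the cluster's public SSH key on the new host and then telling
Ceph about the host; the host name below is a placeholder:

.. prompt:: bash #

   ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*
   ceph orch host add *<new-host>*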

Adding additional MONs
======================

A typical Ceph cluster has three or five monitor daemons spread
across different hosts. We recommend deploying five
monitors if there are five or more nodes in your cluster.

Please follow :ref:`deploy_additional_monitors` to deploy additional MONs.
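
For example, once enough hosts have been added, asking the orchestrator
for five monitors is a single (illustrative) command:

.. prompt:: bash #

   ceph orch apply mon 5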

Adding Storage
==============

To add storage to the cluster, tell Ceph to consume any available and
unused device:

  .. prompt:: bash #

     ceph orch apply osd --all-available-devices
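
To preview which devices cephadm considers available, you can first print
an inventory:

.. prompt:: bash #

   ceph orch device ls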

Alternatively, see :ref:`cephadm-deploy-osds` for more detailed instructions.

Using Ceph
==========

To use the *Ceph Filesystem*, follow :ref:`orchestrator-cli-cephfs`.

To use the *Ceph Object Gateway*, follow :ref:`cephadm-deploy-rgw`.

To use *NFS*, follow :ref:`deploy-cephadm-nfs-ganesha`.

To use *iSCSI*, follow :ref:`cephadm-iscsi`.

.. _cluster network: ../rados/configuration/network-config-ref#cluster-network