============================
Deploying a new Ceph cluster
============================

Cephadm creates a new Ceph cluster by "bootstrapping" on a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.

.. highlight:: console


.. _cephadm-host-requirements:

Requirements
============

- Python 3
- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices

Any modern Linux distribution should be sufficient. Dependencies
are installed automatically by the bootstrap process below.

See the section :ref:`Compatibility With Podman
Versions<cephadm-compatibility-with-podman>` for a table of Ceph versions that
are compatible with Podman. Not every version of Podman is compatible with
Ceph.



.. _get-cephadm:

Install cephadm
===============

The ``cephadm`` command can

#. bootstrap a new cluster
#. launch a containerized shell with a working Ceph CLI
#. aid in debugging containerized Ceph daemons

There are two ways to install ``cephadm``:

#. a :ref:`curl-based installation<cephadm_install_curl>` method
#. :ref:`distribution-specific installation methods<cephadm_install_distros>`

.. _cephadm_install_curl:

curl-based installation
-----------------------

* Use ``curl`` to fetch the most recent version of the
  standalone script.

  .. prompt:: bash #
     :substitutions:

     curl --silent --remote-name --location https://github.com/ceph/ceph/raw/|stable-release|/src/cephadm/cephadm

  Make the ``cephadm`` script executable:

  .. prompt:: bash #

     chmod +x cephadm

  This script can be run directly from the current directory:

  .. prompt:: bash #

     ./cephadm <arguments...>

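  For example, a quick way to confirm that the script runs is to ask it
  for version information (``version`` is a standard ``cephadm``
  subcommand):

  .. prompt:: bash #

     ./cephadm version
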
* Although the standalone script is sufficient to get a cluster started, it is
  convenient to have the ``cephadm`` command installed on the host. To install
  the packages that provide the ``cephadm`` command, run the following
  commands:

  .. prompt:: bash #
     :substitutions:

     ./cephadm add-repo --release |stable-release|
     ./cephadm install

  Confirm that ``cephadm`` is now in your PATH by running ``which``:

  .. prompt:: bash #

     which cephadm

  A successful ``which cephadm`` command will return this:

  .. code-block:: bash

     /usr/sbin/cephadm

.. _cephadm_install_distros:

distribution-specific installations
-----------------------------------

.. important:: The methods of installing ``cephadm`` in this section are
   distinct from the curl-based method above. Use either the curl-based
   method or one of the distribution-specific methods below, but not both.

Some Linux distributions may already include up-to-date Ceph packages. In
that case, you can install cephadm directly. For example:

In Ubuntu:

.. prompt:: bash #

   apt install -y cephadm

In CentOS Stream:

.. prompt:: bash #
   :substitutions:

   dnf search release-ceph
   dnf install --assumeyes centos-release-ceph-|stable-release|
   dnf install --assumeyes cephadm

In Fedora:

.. prompt:: bash #

   dnf -y install cephadm

In SUSE:

.. prompt:: bash #

   zypper install -y cephadm



Bootstrap a new cluster
=======================

What to know before you bootstrap
---------------------------------

The first step in creating a new Ceph cluster is running the ``cephadm
bootstrap`` command on the Ceph cluster's first host. Running this command
creates the cluster's first "monitor daemon", and that monitor daemon needs
an IP address: you must pass the IP address of the cluster's first host to
the ``cephadm bootstrap`` command, so you'll need to know that host's IP
address.

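If you need to look up the host's IP address first, a tool such as ``ip``
(from the ``iproute2`` package, assumed here to be installed) can list the
addresses assigned to each interface:

.. prompt:: bash #

   ip addr show
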
.. note:: If there are multiple networks and interfaces, be sure to choose one
   that will be accessible by any host accessing the Ceph cluster.

Running the bootstrap command
-----------------------------

Run the ``cephadm bootstrap`` command:

.. prompt:: bash #

   cephadm bootstrap --mon-ip *<mon-ip>*

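For example, with a hypothetical monitor IP address of ``192.168.0.10``:

.. prompt:: bash #

   cephadm bootstrap --mon-ip 192.168.0.10
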
This command will:

* Create a monitor and manager daemon for the new cluster on the local
  host.
* Generate a new SSH key for the Ceph cluster and add it to the root
  user's ``/root/.ssh/authorized_keys`` file.
* Write a copy of the public key to ``/etc/ceph/ceph.pub``.
* Write a minimal configuration file to ``/etc/ceph/ceph.conf``. This
  file is needed to communicate with the new cluster.
* Write a copy of the ``client.admin`` administrative (privileged!)
  secret key to ``/etc/ceph/ceph.client.admin.keyring``.
* Add the ``_admin`` label to the bootstrap host. By default, any host
  with this label will (also) get a copy of ``/etc/ceph/ceph.conf`` and
  ``/etc/ceph/ceph.client.admin.keyring``.

Further information about cephadm bootstrap
-------------------------------------------

The default bootstrap behavior will work for most users. But if you'd like
to know more about ``cephadm bootstrap`` right away, read the list below.

Also, you can run ``cephadm bootstrap -h`` to see all of ``cephadm``'s
available options.

* By default, Ceph daemons send their log output to stdout/stderr, which is
  picked up by the container runtime (docker or podman) and (on most systems)
  sent to journald. If you want Ceph to write traditional log files to
  ``/var/log/ceph/$fsid``, use the ``--log-to-file`` option during bootstrap.

* Larger Ceph clusters perform better when (external to the Ceph cluster)
  public network traffic is separated from (internal to the Ceph cluster)
  cluster traffic. The internal cluster traffic handles replication, recovery,
  and heartbeats between OSD daemons. You can define the :ref:`cluster
  network<cluster-network>` by supplying the ``--cluster-network`` option to
  the ``bootstrap`` subcommand. This parameter must define a subnet in CIDR
  notation (for example ``10.90.90.0/24`` or ``fe80::/64``). An example
  invocation is shown below.

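  A sketch of such an invocation (the subnet shown is hypothetical)::

      cephadm bootstrap --mon-ip *<mon-ip>* --cluster-network 10.90.90.0/24
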
* ``cephadm bootstrap`` writes to ``/etc/ceph`` the files needed to access
  the new cluster. This central location makes it possible for Ceph
  packages installed on the host (e.g., packages that give access to the
  cephadm command line interface) to find these files.

  Daemon containers deployed with cephadm, however, do not need
  ``/etc/ceph`` at all. Use the ``--output-dir *<directory>*`` option
  to put them in a different directory (for example, ``.``). This may help
  avoid conflicts with an existing Ceph configuration (cephadm or
  otherwise) on the same host.

* You can pass any initial Ceph configuration options to the new
  cluster by putting them in a standard ini-style configuration file
  and using the ``--config *<config-file>*`` option. For example::

      $ cat <<EOF > initial-ceph.conf
      [global]
      osd crush chooseleaf type = 0
      EOF
      $ ./cephadm bootstrap --config initial-ceph.conf ...

* The ``--ssh-user *<user>*`` option makes it possible to choose which ssh
  user cephadm will use to connect to hosts. The associated ssh key will be
  added to ``/home/*<user>*/.ssh/authorized_keys``. The user that you
  designate with this option must have passwordless sudo access.

* If you are using a container image from an authenticated registry that
  requires login, you may add the argument:

  * ``--registry-json <path to json file>``

  Example contents of a JSON file with login info::

      {"url":"REGISTRY_URL", "username":"REGISTRY_USERNAME", "password":"REGISTRY_PASSWORD"}

  Cephadm will attempt to log in to this registry so it can pull your
  container image and then store the login info in its config database.
  Other hosts added to the cluster will then also be able to make use of
  the authenticated registry.
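
  For example, assuming a hypothetical credentials file at
  ``/path/to/registry.json``::

      cephadm bootstrap --mon-ip *<mon-ip>* --registry-json /path/to/registry.json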

* See :ref:`cephadm-deployment-scenarios` for additional examples for using
  ``cephadm bootstrap``.

.. _cephadm-enable-cli:

Enable Ceph CLI
===============

Cephadm does not require any Ceph packages to be installed on the
host. However, we recommend enabling easy access to the ``ceph``
command. There are several ways to do this:

* The ``cephadm shell`` command launches a bash shell in a container
  with all of the Ceph packages installed. By default, if
  configuration and keyring files are found in ``/etc/ceph`` on the
  host, they are passed into the container environment so that the
  shell is fully functional. Note that when executed on a MON host,
  ``cephadm shell`` will infer the ``config`` from the MON container
  instead of using the default configuration. If ``--mount <path>``
  is given, then the host ``<path>`` (file or directory) will appear
  under ``/mnt`` inside the container:

  .. prompt:: bash #

     cephadm shell

* To execute ``ceph`` commands, you can also run commands like this:

  .. prompt:: bash #

     cephadm shell -- ceph -s

* You can install the ``ceph-common`` package, which contains all of the
  ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
  CephFS file systems), etc.:

  .. prompt:: bash #
     :substitutions:

     cephadm add-repo --release |stable-release|
     cephadm install ceph-common

Confirm that the ``ceph`` command is accessible with:

.. prompt:: bash #

   ceph -v


Confirm that the ``ceph`` command can connect to the cluster and report its
status with:

.. prompt:: bash #

   ceph status

Adding Hosts
============

Next, add all hosts to the cluster by following :ref:`cephadm-adding-hosts`.

By default, a ``ceph.conf`` file and a copy of the ``client.admin`` keyring
are maintained in ``/etc/ceph`` on all hosts that have the ``_admin`` label,
which is initially applied only to the bootstrap host. We usually recommend
giving one or more other hosts the ``_admin`` label so that the Ceph CLI
(e.g., via ``cephadm shell``) is easily accessible on multiple hosts. To add
the ``_admin`` label to additional hosts, run the following command:

.. prompt:: bash #

   ceph orch host label add *<host>* _admin

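For example, to add the label to a hypothetical host named ``host2``:

.. prompt:: bash #

   ceph orch host label add host2 _admin
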
Adding additional MONs
======================

A typical Ceph cluster has three or five monitor daemons spread
across different hosts. We recommend deploying five
monitors if there are five or more nodes in your cluster.

Please follow :ref:`deploy_additional_monitors` to deploy additional MONs.

Adding Storage
==============

To add storage to the cluster, you can tell Ceph to consume any
available and unused device:

.. prompt:: bash #

   ceph orch apply osd --all-available-devices

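Alternatively, you can create an OSD from a specific device on a specific
host (the host name and device path below are hypothetical):

.. prompt:: bash #

   ceph orch daemon add osd host1:/dev/sdb
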
See :ref:`cephadm-deploy-osds` for more detailed instructions.

Enabling OSD memory autotuning
------------------------------

.. warning:: By default, cephadm enables ``osd_memory_target_autotune`` on
   bootstrap, with ``mgr/cephadm/autotune_memory_target_ratio`` set to ``.7``
   of total host memory.

See :ref:`osd_autotune`.

To deploy hyperconverged Ceph with TripleO, please refer to the TripleO
documentation: `Scenario: Deploy Hyperconverged Ceph
<https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/cephadm.html#scenario-deploy-hyperconverged-ceph>`_

In other hyperconverged cases, where the cluster's hardware is not used
exclusively by Ceph, reduce Ceph's memory consumption like so:

.. prompt:: bash #

   # hyperconverged only:
   ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2

Then enable memory autotuning:

.. prompt:: bash #

   ceph config set osd osd_memory_target_autotune true

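You can confirm the resulting value of the setting with:

.. prompt:: bash #

   ceph config get osd osd_memory_target_autotune
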

Using Ceph
==========

To use the *Ceph Filesystem*, follow :ref:`orchestrator-cli-cephfs`.

To use the *Ceph Object Gateway*, follow :ref:`cephadm-deploy-rgw`.

To use *NFS*, follow :ref:`deploy-cephadm-nfs-ganesha`.

To use *iSCSI*, follow :ref:`cephadm-iscsi`.

.. _cephadm-deployment-scenarios:

Different deployment scenarios
==============================

Single host
-----------

To configure a Ceph cluster to run on a single host, use the
``--single-host-defaults`` flag when bootstrapping. For use cases, see
:ref:`one-node-cluster`.

The ``--single-host-defaults`` flag sets the following configuration
options::

  global/osd_crush_chooseleaf_type = 0
  global/osd_pool_default_size = 2
  mgr/mgr_standby_modules = False

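For example, a single-host bootstrap (with a hypothetical monitor IP) might
look like this:

.. prompt:: bash #

   cephadm bootstrap --mon-ip 192.168.0.10 --single-host-defaults
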
For more information on these options, see :ref:`one-node-cluster` and
``mgr_standby_modules`` in :ref:`mgr-administrator-guide`.

Deployment in an isolated environment
-------------------------------------

You can install Cephadm in an isolated environment by using a custom
container registry. You can either configure Podman or Docker to use an
insecure registry, or make the registry secure. Ensure your container image
is inside the registry and that you have access to all hosts you wish to add
to the cluster.

Run a local container registry:

.. prompt:: bash #

   podman run --privileged -d --name registry -p 5000:5000 -v /var/lib/registry:/var/lib/registry --restart=always registry:2

If you are using an insecure registry, configure Podman or Docker with the
hostname and port where the registry is running.

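With Podman, for example, an insecure registry can be declared in
``/etc/containers/registries.conf``; the registry host below is
hypothetical, and the file format belongs to Podman rather than Ceph:

.. code-block:: ini

   [[registry]]
   location = "registry-host:5000"
   insecure = true
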
.. note:: This configuration step must be repeated on every host that
   accesses the local insecure registry.

Next, push your container image to your local registry.

Then run bootstrap using the ``--image`` flag with your container image. For
example:

.. prompt:: bash #

   cephadm --image *<hostname>*:5000/ceph/ceph bootstrap --mon-ip *<mon-ip>*


.. _cluster network: ../rados/configuration/network-config-ref#cluster-network