============================
Deploying a new Ceph cluster
============================

Cephadm creates a new Ceph cluster by "bootstrapping" on a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.

.. highlight:: console


.. _cephadm-host-requirements:

Requirements
============

- Python 3
- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices

Any modern Linux distribution should be sufficient. Dependencies
are installed automatically by the bootstrap process below.

See the section :ref:`Compatibility With Podman
Versions<cephadm-compatibility-with-podman>` for a table of Ceph versions that
are compatible with Podman. Not every version of Podman is compatible with
Ceph.


.. _get-cephadm:

Install cephadm
===============

The ``cephadm`` command can

#. bootstrap a new cluster
#. launch a containerized shell with a working Ceph CLI
#. aid in debugging containerized Ceph daemons

There are two ways to install ``cephadm``:

#. a :ref:`curl-based installation<cephadm_install_curl>` method
#. :ref:`distribution-specific installation methods<cephadm_install_distros>`

.. _cephadm_install_curl:

curl-based installation
-----------------------

* Use ``curl`` to fetch the most recent version of the
  standalone script.

  .. prompt:: bash #
     :substitutions:

     curl --silent --remote-name --location https://github.com/ceph/ceph/raw/|stable-release|/src/cephadm/cephadm

  Make the ``cephadm`` script executable:

  .. prompt:: bash #

     chmod +x cephadm

  This script can be run directly from the current directory:

  .. prompt:: bash #

     ./cephadm <arguments...>

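  For example, a quick way to confirm that the standalone script runs is
  to ask it for its version:

  .. prompt:: bash #

     ./cephadm version
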
* Although the standalone script is sufficient to get a cluster started, it is
  convenient to have the ``cephadm`` command installed on the host. To install
  the packages that provide the ``cephadm`` command, run the following
  commands:

  .. prompt:: bash #
     :substitutions:

     ./cephadm add-repo --release |stable-release|
     ./cephadm install

  Confirm that ``cephadm`` is now in your PATH by running ``which``:

  .. prompt:: bash #

     which cephadm

  A successful ``which cephadm`` command will return this:

  .. code-block:: bash

     /usr/sbin/cephadm

.. _cephadm_install_distros:

distribution-specific installations
-----------------------------------

.. important:: The methods of installing ``cephadm`` in this section are
   distinct from the curl-based method above. Use either the curl-based
   method above or one of the methods in this section, but not both.

Some Linux distributions may already include up-to-date Ceph packages. In
that case, you can install cephadm directly. For example:

In Ubuntu:

.. prompt:: bash #

   apt install -y cephadm

In Fedora:

.. prompt:: bash #

   dnf -y install cephadm

In SUSE:

.. prompt:: bash #

   zypper install -y cephadm


Bootstrap a new cluster
=======================

What to know before you bootstrap
---------------------------------

The first step in creating a new Ceph cluster is running the ``cephadm
bootstrap`` command on the Ceph cluster's first host. Running this command
creates the Ceph cluster's first "monitor daemon", and that monitor daemon
needs an IP address. You must pass the IP address of the Ceph cluster's
first host to the ``cephadm bootstrap`` command, so you'll need to know
that host's IP address.

.. note:: If there are multiple networks and interfaces, be sure to choose one
   that will be accessible by any host accessing the Ceph cluster.

Running the bootstrap command
-----------------------------

Run the ``cephadm bootstrap`` command:

.. prompt:: bash #

   cephadm bootstrap --mon-ip *<mon-ip>*

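For example, if the first host's IP address were ``192.168.0.10`` (a
placeholder address used here only for illustration), the command would be:

.. prompt:: bash #

   cephadm bootstrap --mon-ip 192.168.0.10
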
This command will:

* Create a monitor and manager daemon for the new cluster on the local
  host.
* Generate a new SSH key for the Ceph cluster and add it to the root
  user's ``/root/.ssh/authorized_keys`` file.
* Write a copy of the public key to ``/etc/ceph/ceph.pub``.
* Write a minimal configuration file to ``/etc/ceph/ceph.conf``. This
  file is needed to communicate with the new cluster.
* Write a copy of the ``client.admin`` administrative (privileged!)
  secret key to ``/etc/ceph/ceph.client.admin.keyring``.
* Add the ``_admin`` label to the bootstrap host. By default, any host
  with this label will (also) get a copy of ``/etc/ceph/ceph.conf`` and
  ``/etc/ceph/ceph.client.admin.keyring``.

Further information about cephadm bootstrap
-------------------------------------------

The default bootstrap behavior will work for most users. But if you'd like
to know more about ``cephadm bootstrap`` right away, read the list below.

Also, you can run ``cephadm bootstrap -h`` to see all of ``cephadm``'s
available options.

* By default, Ceph daemons send their log output to stdout/stderr, which is
  picked up by the container runtime (docker or podman) and (on most systems)
  sent to journald. If you want Ceph to write traditional log files to
  ``/var/log/ceph/$fsid``, use the ``--log-to-file`` option during bootstrap,
  as in the example below.

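  A sketch (``--log-to-file`` simply accompanies the required ``--mon-ip``
  option):

  .. prompt:: bash #

     cephadm bootstrap --mon-ip *<mon-ip>* --log-to-file
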
* Larger Ceph clusters perform better when (external to the Ceph cluster)
  public network traffic is separated from (internal to the Ceph cluster)
  cluster traffic. The internal cluster traffic handles replication, recovery,
  and heartbeats between OSD daemons. You can define the :ref:`cluster
  network<cluster-network>` by supplying the ``--cluster-network`` option to
  the ``bootstrap`` subcommand. This parameter must define a subnet in CIDR
  notation (for example ``10.90.90.0/24`` or ``fe80::/64``); see the example
  below.

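  For example (both subnets are placeholders; ``--mon-ip`` lives on the
  public network, while ``--cluster-network`` names the internal replication
  network):

  .. prompt:: bash #

     cephadm bootstrap --mon-ip 192.168.0.10 --cluster-network 10.90.90.0/24
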
* ``cephadm bootstrap`` writes to ``/etc/ceph`` the files needed to access
  the new cluster. This central location makes it possible for Ceph
  packages installed on the host (e.g., packages that give access to the
  cephadm command line interface) to find these files.

  Daemon containers deployed with cephadm, however, do not need
  ``/etc/ceph`` at all. Use the ``--output-dir *<directory>*`` option
  to put them in a different directory (for example, ``.``), as shown
  below. This may help avoid conflicts with an existing Ceph configuration
  (cephadm or otherwise) on the same host.

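  For example, to write the generated files to the current directory
  instead of ``/etc/ceph``:

  .. prompt:: bash #

     cephadm bootstrap --mon-ip *<mon-ip>* --output-dir .
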
* You can pass any initial Ceph configuration options to the new
  cluster by putting them in a standard ini-style configuration file
  and using the ``--config *<config-file>*`` option. For example::

      $ cat <<EOF > initial-ceph.conf
      [global]
      osd crush chooseleaf type = 0
      EOF
      $ ./cephadm bootstrap --config initial-ceph.conf ...

* The ``--ssh-user *<user>*`` option makes it possible to choose which ssh
  user cephadm will use to connect to hosts. The associated ssh key will be
  added to ``/home/*<user>*/.ssh/authorized_keys``. The user that you
  designate with this option must have passwordless sudo access. See the
  sketch below.

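  A sketch, assuming a pre-existing ``deploy`` user (the name is a
  placeholder) with passwordless sudo access on every cluster host:

  .. prompt:: bash #

     cephadm bootstrap --mon-ip *<mon-ip>* --ssh-user deploy
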
* If you are using a container image from an authenticated registry that
  requires login, you may add the three arguments:

  #. ``--registry-url <url of registry>``

  #. ``--registry-username <username of account on registry>``

  #. ``--registry-password <password of account on registry>``

  OR

  * ``--registry-json <json file with login info>``

  Cephadm will attempt to log in to this registry so it can pull your
  container and then store the login info in its config database. Other
  hosts added to the cluster will then also be able to make use of the
  authenticated registry. A sketch of the ``--registry-json`` form follows.
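
  For example (a sketch; the file name and all field values are
  placeholders, and the expected JSON fields are assumed to be the registry
  URL, username, and password)::

      $ cat <<EOF > registry.json
      {
       "url": "registry.example.com",
       "username": "myregistryuser",
       "password": "myregistrypass"
      }
      EOF
      $ ./cephadm bootstrap --mon-ip *<mon-ip>* --registry-json registry.json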

.. _cephadm-enable-cli:

Enable Ceph CLI
===============

Cephadm does not require any Ceph packages to be installed on the
host. However, we recommend enabling easy access to the ``ceph``
command. There are several ways to do this:

* The ``cephadm shell`` command launches a bash shell in a container
  with all of the Ceph packages installed. By default, if
  configuration and keyring files are found in ``/etc/ceph`` on the
  host, they are passed into the container environment so that the
  shell is fully functional. Note that when executed on a MON host,
  ``cephadm shell`` will infer the ``config`` from the MON container
  instead of using the default configuration. If ``--mount <path>``
  is given, then the host ``<path>`` (file or directory) will appear
  under ``/mnt`` inside the container:

  .. prompt:: bash #

     cephadm shell

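  For example, to make the host directory ``/home`` (a placeholder path)
  visible under ``/mnt`` inside the container:

  .. prompt:: bash #

     cephadm shell --mount /home
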
* To execute ``ceph`` commands, you can also run commands like this:

  .. prompt:: bash #

     cephadm shell -- ceph -s

* You can install the ``ceph-common`` package, which contains all of the
  ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
  CephFS file systems), etc.:

  .. prompt:: bash #
     :substitutions:

     cephadm add-repo --release |stable-release|
     cephadm install ceph-common

Confirm that the ``ceph`` command is accessible with:

.. prompt:: bash #

   ceph -v


Confirm that the ``ceph`` command can connect to the cluster and report
its status with:

.. prompt:: bash #

   ceph status

Adding Hosts
============

Next, add all hosts to the cluster by following :ref:`cephadm-adding-hosts`.

By default, a ``ceph.conf`` file and a copy of the ``client.admin`` keyring
are maintained in ``/etc/ceph`` on all hosts with the ``_admin`` label, which
is initially applied only to the bootstrap host. We usually recommend that one
or more other hosts be given the ``_admin`` label so that the Ceph CLI (e.g.,
via ``cephadm shell``) is easily accessible on multiple hosts. To add the
``_admin`` label to additional host(s), run:

.. prompt:: bash #

   ceph orch host label add *<host>* _admin

Adding additional MONs
======================

A typical Ceph cluster has three or five monitor daemons spread
across different hosts. We recommend deploying five
monitors if there are five or more nodes in your cluster.

Please follow :ref:`deploy_additional_monitors` to deploy additional MONs.

Adding Storage
==============

To add storage to the cluster, either tell Ceph to consume any
available and unused device:

.. prompt:: bash #

   ceph orch apply osd --all-available-devices

or see :ref:`cephadm-deploy-osds` for more detailed instructions.

Using Ceph
==========

To use the *Ceph Filesystem*, follow :ref:`orchestrator-cli-cephfs`.

To use the *Ceph Object Gateway*, follow :ref:`cephadm-deploy-rgw`.

To use *NFS*, follow :ref:`deploy-cephadm-nfs-ganesha`.

To use *iSCSI*, follow :ref:`cephadm-iscsi`.


.. _cluster network: ../rados/configuration/network-config-ref#cluster-network