===================
 Manual Deployment
===================

All Ceph clusters require at least one monitor, and at least as many OSDs as
copies of an object stored on the cluster. Bootstrapping the initial monitor(s)
is the first step in deploying a Ceph Storage Cluster. Monitor deployment also
sets important criteria for the entire cluster, such as the number of replicas
for pools, the number of placement groups per OSD, the heartbeat intervals,
whether authentication is required, etc. Most of these values are set by
default, so it's useful to know about them when setting up your cluster for
production.

Following the same configuration as `Installation (Quick)`_, we will set up a
cluster with ``node1`` as the monitor node, and ``node2`` and ``node3`` for
OSD nodes.


.. ditaa::
           /------------------\         /----------------\
           |    Admin Node    |         |     node1      |
           |                  +-------->+                |
           |                  |         | cCCC           |
           \---------+--------/         \----------------/
                     |
                     |                  /----------------\
                     |                  |     node2      |
                     +----------------->+                |
                     |                  | cCCC           |
                     |                  \----------------/
                     |
                     |                  /----------------\
                     |                  |     node3      |
                     +----------------->|                |
                                        | cCCC           |
                                        \----------------/


Monitor Bootstrapping
=====================

Bootstrapping a monitor (and, by extension, a Ceph Storage Cluster) requires
a number of things:

- **Unique Identifier:** The ``fsid`` is a unique identifier for the cluster,
  and stands for File System ID from the days when the Ceph Storage Cluster was
  principally for the Ceph Filesystem. Ceph now supports native interfaces,
  block devices, and object storage gateway interfaces too, so ``fsid`` is a
  bit of a misnomer.

- **Cluster Name:** Ceph clusters have a cluster name, which is a simple string
  without spaces. The default cluster name is ``ceph``, but you may specify
  a different cluster name. Overriding the default cluster name is
  especially useful when you are working with multiple clusters and you need to
  clearly understand which cluster you are working with.

  For example, when you run multiple clusters in a `federated architecture`_,
  the cluster name (e.g., ``us-west``, ``us-east``) identifies the cluster for
  the current CLI session. **Note:** To identify the cluster name on the
  command line interface, specify the Ceph configuration file with the
  cluster name (e.g., ``ceph.conf``, ``us-west.conf``, ``us-east.conf``, etc.).
  Also see CLI usage (``ceph --cluster {cluster-name}``) and the brief example
  after this list.

- **Monitor Name:** Each monitor instance within a cluster has a unique name.
  In common practice, the Ceph Monitor name is the host name (we recommend one
  Ceph Monitor per host, and no commingling of Ceph OSD Daemons with
  Ceph Monitors). You may retrieve the short hostname with ``hostname -s``.

- **Monitor Map:** Bootstrapping the initial monitor(s) requires you to
  generate a monitor map. The monitor map requires the ``fsid``, the cluster
  name (or uses the default), and at least one host name and its IP address.

- **Monitor Keyring**: Monitors communicate with each other via a
  secret key. You must generate a keyring with a monitor secret and provide
  it when bootstrapping the initial monitor(s).

- **Administrator Keyring**: To use the ``ceph`` CLI tools, you must have
  a ``client.admin`` user. So you must generate the admin user and keyring,
  and you must also add the ``client.admin`` user to the monitor keyring.

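As a brief illustration of the ``--cluster`` option, the following sketch
assumes a hypothetical cluster named ``us-west`` whose configuration lives in
``/etc/ceph/us-west.conf``::

   # Query cluster status using the us-west cluster name and config file
   ceph --cluster us-west -s

   # The default is equivalent to --cluster ceph, i.e. /etc/ceph/ceph.conf
   ceph --cluster ceph -s
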
The foregoing requirements do not imply the creation of a Ceph Configuration
file. However, as a best practice, we recommend creating a Ceph configuration
file and populating it with the ``fsid``, the ``mon initial members`` and the
``mon host`` settings.

You can get and set all of the monitor settings at runtime as well. However,
a Ceph Configuration file may contain only those settings that override the
default values. When you add settings to a Ceph configuration file, these
settings override the default settings. Maintaining those settings in a
Ceph configuration file makes it easier to maintain your cluster.

The procedure is as follows:


#. Log in to the initial monitor node(s)::

      ssh {hostname}

   For example::

      ssh node1


#. Ensure you have a directory for the Ceph configuration file. By default,
   Ceph uses ``/etc/ceph``. When you install ``ceph``, the installer will
   create the ``/etc/ceph`` directory automatically. ::

      ls /etc/ceph

   **Note:** Deployment tools may remove this directory when purging a
   cluster (e.g., ``ceph-deploy purgedata {node-name}``, ``ceph-deploy purge
   {node-name}``).

#. Create a Ceph configuration file. By default, Ceph uses
   ``ceph.conf``, where ``ceph`` reflects the cluster name. ::

      sudo vim /etc/ceph/ceph.conf


#. Generate a unique ID (i.e., ``fsid``) for your cluster. ::

      uuidgen


#. Add the unique ID to your Ceph configuration file. ::

      fsid = {UUID}

   For example::

      fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993


#. Add the initial monitor(s) to your Ceph configuration file. ::

      mon initial members = {hostname}[,{hostname}]

   For example::

      mon initial members = node1


#. Add the IP address(es) of the initial monitor(s) to your Ceph configuration
   file and save the file. ::

      mon host = {ip-address}[,{ip-address}]

   For example::

      mon host = 192.168.0.1

   **Note:** You may use IPv6 addresses instead of IPv4 addresses, but
   you must set ``ms bind ipv6`` to ``true``. See `Network Configuration
   Reference`_ for details about network configuration.

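   As a minimal sketch (assuming a hypothetical IPv6 monitor address of
   ``2001:db8::10``), the relevant settings might look like this::

      ms bind ipv6 = true
      mon host = [2001:db8::10]:6789
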
#. Create a keyring for your cluster and generate a monitor secret key. ::

      ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'


#. Generate an administrator keyring, generate a ``client.admin`` user, and add
   the user to the keyring. ::

      sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'


#. Add the ``client.admin`` key to the ``ceph.mon.keyring``. ::

      ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring


#. Generate a monitor map using the hostname(s), host IP address(es) and the FSID.
   Save it as ``/tmp/monmap``::

      monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap

   For example::

      monmaptool --create --add node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap


#. Create a default data directory (or directories) on the monitor host(s). ::

      sudo mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}

   For example::

      sudo mkdir /var/lib/ceph/mon/ceph-node1

   See `Monitor Config Reference - Data`_ for details.

#. Populate the monitor daemon(s) with the monitor map and keyring. ::

      sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

   For example::

      sudo -u ceph ceph-mon --mkfs -i node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring


#. Consider settings for a Ceph configuration file. Common settings include
   the following::

      [global]
      fsid = {cluster-id}
      mon initial members = {hostname}[, {hostname}]
      mon host = {ip-address}[, {ip-address}]
      public network = {network}[, {network}]
      cluster network = {network}[, {network}]
      auth cluster required = cephx
      auth service required = cephx
      auth client required = cephx
      osd journal size = {n}
      osd pool default size = {n}       # Write an object n times.
      osd pool default min size = {n}   # Allow writing n copies in a degraded state.
      osd pool default pg num = {n}
      osd pool default pgp num = {n}
      osd crush chooseleaf type = {n}

   In the foregoing example, the ``[global]`` section of the configuration might
   look like this::

      [global]
      fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
      mon initial members = node1
      mon host = 192.168.0.1
      public network = 192.168.0.0/24
      auth cluster required = cephx
      auth service required = cephx
      auth client required = cephx
      osd journal size = 1024
      osd pool default size = 3
      osd pool default min size = 2
      osd pool default pg num = 333
      osd pool default pgp num = 333
      osd crush chooseleaf type = 1

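   The placement group counts above are only examples. A commonly cited rule of
   thumb (an approximation, not an exact requirement) is roughly 100 placement
   groups per OSD divided by the replica count, rounded up to a power of two::

      # Rough sizing sketch for a small cluster (assumed values):
      #   total PGs ~= (number of OSDs * 100) / replica count
      #   e.g., (3 OSDs * 100) / 3 replicas = 100 -> round up to 128
      osd pool default pg num = 128
      osd pool default pgp num = 128
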
#. Touch the ``done`` file.

   Mark that the monitor is created and ready to be started::

      sudo touch /var/lib/ceph/mon/ceph-node1/done

#. Start the monitor(s).

   For Ubuntu, use Upstart::

      sudo start ceph-mon id=node1 [cluster={cluster-name}]

   In this case, to allow the daemon to start at each reboot you must also
   create an empty ``upstart`` file in the monitor data directory (alongside
   the ``done`` file created above)::

      sudo touch /var/lib/ceph/mon/{cluster-name}-{hostname}/upstart

   For example::

      sudo touch /var/lib/ceph/mon/ceph-node1/upstart

   For Debian/CentOS/RHEL, use sysvinit::

      sudo /etc/init.d/ceph start mon.node1

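   On distributions that use systemd, the monitor is typically managed through
   the ``ceph-mon@`` unit template instead; assuming your packages ship that
   unit, a sketch looks like this::

      sudo systemctl enable ceph-mon@node1
      sudo systemctl start ceph-mon@node1
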
#. Verify that Ceph created the default pools. ::

      ceph osd lspools

   You should see output like this::

      0 data,1 metadata,2 rbd,


#. Verify that the monitor is running. ::

      ceph -s

   You should see output that the monitor you started is up and running, and
   you should see a health error indicating that placement groups are stuck
   inactive. It should look something like this::

      cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
        health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
        monmap e1: 1 mons at {node1=192.168.0.1:6789/0}, election epoch 1, quorum 0 node1
        osdmap e1: 0 osds: 0 up, 0 in
        pgmap v2: 192 pgs, 3 pools, 0 bytes data, 0 objects
           0 kB used, 0 kB / 0 kB avail
              192 creating

   **Note:** Once you add OSDs and start them, the placement group health errors
   should disappear. See the next section for details.

Manager daemon configuration
============================

On each node where you run a ceph-mon daemon, you should also set up a ceph-mgr daemon.

See :ref:`mgr-administrator-guide`.

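As a minimal sketch of what that guide covers (the guide remains the
authoritative reference), setting up a ``ceph-mgr`` daemon on ``node1`` might
look roughly like this::

   # Create an auth key for the manager daemon
   sudo ceph auth get-or-create mgr.node1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'

   # Place the resulting keyring in the mgr data directory
   sudo mkdir /var/lib/ceph/mgr/ceph-node1
   sudo ceph auth get mgr.node1 -o /var/lib/ceph/mgr/ceph-node1/keyring
   sudo chown -R ceph:ceph /var/lib/ceph/mgr/ceph-node1

   # Start the daemon (or run "ceph-mgr -i node1" directly)
   sudo systemctl start ceph-mgr@node1
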
Adding OSDs
===========

Once you have your initial monitor(s) running, you should add OSDs. Your cluster
cannot reach an ``active + clean`` state until you have enough OSDs to handle the
number of copies of an object (e.g., ``osd pool default size = 2`` requires at
least two OSDs). After bootstrapping your monitor, your cluster has a default
CRUSH map; however, the CRUSH map doesn't have any Ceph OSD Daemons mapped to
a Ceph Node.


Short Form
----------

Ceph provides the ``ceph-volume`` utility, which can prepare a logical volume,
disk, or partition for use with Ceph. The ``ceph-volume`` utility creates the
OSD ID by incrementing the index. Additionally, ``ceph-volume`` will add the new
OSD to the CRUSH map under the host for you. Execute ``ceph-volume -h`` for CLI
details. The ``ceph-volume`` utility automates the steps of the `Long Form`_
below. To create the first two OSDs with the short form procedure, execute the
following on ``node2`` and ``node3``:

bluestore
^^^^^^^^^

#. Create the OSD. ::

      ssh {node-name}
      sudo ceph-volume lvm create --data {data-path}

   For example::

      ssh node1
      sudo ceph-volume lvm create --data /dev/hdd1

Alternatively, the creation process can be split in two phases (prepare, and
activate):

#. Prepare the OSD. ::

      ssh {node-name}
      sudo ceph-volume lvm prepare --data {data-path}

   For example::

      ssh node1
      sudo ceph-volume lvm prepare --data /dev/hdd1

   Once prepared, the ``ID`` and ``FSID`` of the prepared OSD are required for
   activation. These can be obtained by listing OSDs in the current server::

      sudo ceph-volume lvm list

#. Activate the OSD::

      sudo ceph-volume lvm activate {ID} {FSID}

   For example::

      sudo ceph-volume lvm activate 0 a7f64266-0894-4f1e-a635-d0aeaca0e993

360
361filestore
362^^^^^^^^^
363#. Create the OSD. ::
364
365 ssh {node-name}
366 sudo ceph-volume lvm create --filestore --data {data-path} --journal {journal-path}
367
368 For example::
369
370 ssh node1
371 sudo ceph-volume lvm create --filestore --data /dev/hdd1 --journal /dev/hdd2
372
373Alternatively, the creation process can be split in two phases (prepare, and
374activate):
375
376#. Prepare the OSD. ::
377
378 ssh {node-name}
379 sudo ceph-volume lvm prepare --filestore --data {data-path} --journal {journal-path}
7c673cae 380
b5b8bbf5 381 For example::
7c673cae 382
94b18763
FG
383 ssh node1
384 sudo ceph-volume lvm prepare --filestore --data /dev/hdd1 --journal /dev/hdd2
385
386 Once prepared, the ``ID`` and ``FSID`` of the prepared OSD are required for
387 activation. These can be obtained by listing OSDs in the current server::
388
389 sudo ceph-volume lvm list
390
391#. Activate the OSD::
392
393 sudo ceph-volume lvm activate --filestore {ID} {FSID}
394
395 For example::
7c673cae 396
94b18763 397 sudo ceph-volume lvm activate --filestore 0 a7f64266-0894-4f1e-a635-d0aeaca0e993
7c673cae
FG
398
399
Long Form
---------

Without the benefit of any helper utilities, create an OSD and add it to the
cluster and CRUSH map with the following procedure. To create the first two
OSDs with the long form procedure, execute the following steps for each OSD.

.. note:: This procedure does not describe deployment on top of dm-crypt
   making use of the dm-crypt 'lockbox'.

#. Connect to the OSD host and become root. ::

      ssh {node-name}
      sudo bash

#. Generate a UUID for the OSD. ::

      UUID=$(uuidgen)

#. Generate a cephx key for the OSD. ::

      OSD_SECRET=$(ceph-authtool --gen-print-key)

#. Create the OSD. Note that an OSD ID can be provided as an
   additional argument to ``ceph osd new`` if you need to reuse a
   previously-destroyed OSD id. We assume that the
   ``client.bootstrap-osd`` key is present on the machine. You may
   alternatively execute this command as ``client.admin`` on a
   different host where that key is present::

      ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \
         ceph osd new $UUID -i - \
         -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)

   It is also possible to include a ``crush_device_class`` property in the JSON
   to set an initial class other than the default (``ssd`` or ``hdd`` based on
   the auto-detected device type).

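   For example, a sketch that assumes you want an initial class of ``nvme``
   rather than the auto-detected one::

      ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\", \"crush_device_class\": \"nvme\"}" | \
         ceph osd new $UUID -i - \
         -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)
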
#. Create the default directory on your new OSD. ::

      mkdir /var/lib/ceph/osd/ceph-$ID

#. If the OSD is for a drive other than the OS drive, prepare it
   for use with Ceph, and mount it to the directory you just created. ::

      mkfs.xfs /dev/{DEV}
      mount /dev/{DEV} /var/lib/ceph/osd/ceph-$ID

#. Write the secret to the OSD keyring file. ::

      ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring \
         --name osd.$ID --add-key $OSD_SECRET

#. Initialize the OSD data directory. ::

      ceph-osd -i $ID --mkfs --osd-uuid $UUID

#. Fix ownership. ::

      chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID

#. After you add an OSD to Ceph, the OSD is in your configuration. However,
   it is not yet running. You must start your new OSD before it can begin
   receiving data.

   For modern systemd distributions::

      systemctl enable ceph-osd@$ID
      systemctl start ceph-osd@$ID

   For example::

      systemctl enable ceph-osd@12
      systemctl start ceph-osd@12


Adding MDS
==========

In the instructions below, ``{id}`` is an arbitrary name, such as the hostname of the machine.

#. Create the mds data directory::

      mkdir -p /var/lib/ceph/mds/{cluster-name}-{id}

#. Create a keyring::

      ceph-authtool --create-keyring /var/lib/ceph/mds/{cluster-name}-{id}/keyring --gen-key -n mds.{id}

#. Import the keyring and set caps::

      ceph auth add mds.{id} osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster-name}-{id}/keyring

#. Add to ceph.conf::

      [mds.{id}]
      host = {id}

#. Start the daemon the manual way::

      ceph-mds --cluster {cluster-name} -i {id} -m {mon-hostname}:{mon-port} [-f]

#. Start the daemon the right way (using ceph.conf entry)::

      service ceph start

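   On systemd-based distributions, the equivalent is typically the
   ``ceph-mds@`` unit template (assuming your packages ship it)::

      sudo systemctl enable ceph-mds@{id}
      sudo systemctl start ceph-mds@{id}
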
#. If starting the daemon fails with this error::

      mds.-1.0 ERROR: failed to authenticate: (22) Invalid argument

   then make sure you do not have a keyring set in the global section of
   ceph.conf; move it to the client section, or add a keyring setting specific
   to this mds daemon. Also verify that the key in the mds data directory
   matches the output of ``ceph auth get mds.{id}``.

#. Now you are ready to `create a Ceph filesystem`_.


Summary
=======

Once you have your monitor and two OSDs up and running, you can watch the
placement groups peer by executing the following::

   ceph -w

To view the tree, execute the following::

   ceph osd tree

You should see output that looks something like this::

   # id    weight  type name       up/down reweight
   -1      2       root default
   -2      2               host node1
   0       1                       osd.0   up      1
   -3      1               host node2
   1       1                       osd.1   up      1

To add (or remove) additional monitors, see `Add/Remove Monitors`_.
To add (or remove) additional Ceph OSD Daemons, see `Add/Remove OSDs`_.


.. _federated architecture: ../../radosgw/federated-config
.. _Installation (Quick): ../../start
.. _Add/Remove Monitors: ../../rados/operations/add-or-rm-mons
.. _Add/Remove OSDs: ../../rados/operations/add-or-rm-osds
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Monitor Config Reference - Data: ../../rados/configuration/mon-config-ref#data
.. _create a Ceph filesystem: ../../cephfs/createfs