===================
 Manual Deployment
===================

All Ceph clusters require at least one monitor, and at least as many OSDs as
copies of an object stored on the cluster. Bootstrapping the initial monitor(s)
is the first step in deploying a Ceph Storage Cluster. Monitor deployment also
sets important criteria for the entire cluster, such as the number of replicas
for pools, the number of placement groups per OSD, the heartbeat intervals,
whether authentication is required, etc. Most of these values are set by
default, so it's useful to know about them when setting up your cluster for
production.

We will set up a cluster with ``mon-node1`` as the monitor node, and ``osd-node1`` and
``osd-node2`` for OSD nodes.



.. ditaa::

           /------------------\         /----------------\
           |    Admin Node    |         |    mon-node1   |
           |                  +-------->+                |
           |                  |         | cCCC           |
           \---------+--------/         \----------------/
                     |
                     |                   /----------------\
                     |                   |    osd-node1   |
                     +----------------->+                 |
                     |                   | cCCC           |
                     |                   \----------------/
                     |
                     |                   /----------------\
                     |                   |    osd-node2   |
                     +----------------->|                 |
                                         | cCCC           |
                                         \----------------/


Monitor Bootstrapping
=====================

Bootstrapping a monitor (and, in effect, a Ceph Storage Cluster) requires
a number of things:

- **Unique Identifier:** The ``fsid`` is a unique identifier for the cluster,
  and stands for File System ID from the days when the Ceph Storage Cluster was
  principally for the Ceph File System. Ceph now supports native interfaces,
  block devices, and object storage gateway interfaces too, so ``fsid`` is a
  bit of a misnomer.

- **Cluster Name:** Ceph clusters have a cluster name, which is a simple string
  without spaces. The default cluster name is ``ceph``, but you may specify
  a different cluster name. Overriding the default cluster name is
  especially useful when you are working with multiple clusters and you need to
  clearly understand which cluster you are working with.

  For example, when you run multiple clusters in a :ref:`multisite configuration <multisite>`,
  the cluster name (e.g., ``us-west``, ``us-east``) identifies the cluster for
  the current CLI session. **Note:** To identify the cluster name on the
  command line interface, specify the Ceph configuration file with the
  cluster name (e.g., ``ceph.conf``, ``us-west.conf``, ``us-east.conf``, etc.).
  Also see CLI usage (``ceph --cluster {cluster-name}``) and the example
  following this list.

- **Monitor Name:** Each monitor instance within a cluster has a unique name.
  In common practice, the Ceph Monitor name is the host name (we recommend one
  Ceph Monitor per host, and no commingling of Ceph OSD Daemons with
  Ceph Monitors). You may retrieve the short hostname with ``hostname -s``.

- **Monitor Map:** Bootstrapping the initial monitor(s) requires you to
  generate a monitor map. The monitor map requires the ``fsid``, the cluster
  name (or uses the default), and at least one host name and its IP address.

- **Monitor Keyring**: Monitors communicate with each other via a
  secret key. You must generate a keyring with a monitor secret and provide
  it when bootstrapping the initial monitor(s).

- **Administrator Keyring**: To use the ``ceph`` CLI tools, you must have
  a ``client.admin`` user. So you must generate the admin user and keyring,
  and you must also add the ``client.admin`` user to the monitor keyring.
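
For example, to run a command against a hypothetical cluster named ``us-west``
whose configuration lives in ``/etc/ceph/us-west.conf``, pass the cluster name
on the command line::

   ceph --cluster us-west -s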

The foregoing requirements do not imply the creation of a Ceph configuration
file. However, as a best practice, we recommend creating a Ceph configuration
file and populating it with the ``fsid``, the ``mon initial members`` and the
``mon host`` settings.

You can get and set all of the monitor settings at runtime as well. However,
a Ceph configuration file needs to contain only those settings that override
the default values. When you add settings to a Ceph configuration file, these
settings override the default settings. Maintaining those settings in a
Ceph configuration file makes it easier to maintain your cluster.

The procedure is as follows:


#. Log in to the initial monitor node(s)::

      ssh {hostname}

   For example::

      ssh mon-node1


#. Ensure you have a directory for the Ceph configuration file. By default,
   Ceph uses ``/etc/ceph``. When you install ``ceph``, the installer will
   create the ``/etc/ceph`` directory automatically. ::

      ls /etc/ceph


#. Create a Ceph configuration file. By default, Ceph uses
   ``ceph.conf``, where ``ceph`` reflects the cluster name. Add a line
   containing ``[global]`` to the configuration file. ::

      sudo vim /etc/ceph/ceph.conf


#. Generate a unique ID (i.e., ``fsid``) for your cluster. ::

      uuidgen


#. Add the unique ID to your Ceph configuration file. ::

      fsid = {UUID}

   For example::

      fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993


#. Add the initial monitor(s) to your Ceph configuration file. ::

      mon initial members = {hostname}[,{hostname}]

   For example::

      mon initial members = mon-node1


#. Add the IP address(es) of the initial monitor(s) to your Ceph configuration
   file and save the file. ::

      mon host = {ip-address}[,{ip-address}]

   For example::

      mon host = 192.168.0.1

   **Note:** You may use IPv6 addresses instead of IPv4 addresses, but
   you must set ``ms bind ipv6`` to ``true``. See `Network Configuration
   Reference`_ for details about network configuration.

#. Create a keyring for your cluster and generate a monitor secret key. ::

      sudo ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'


#. Generate an administrator keyring, generate a ``client.admin`` user, and add
   the user to the keyring. ::

      sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'

#. Generate a bootstrap-osd keyring, generate a ``client.bootstrap-osd`` user, and add
   the user to the keyring. ::

      sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'

#. Add the generated keys to ``ceph.mon.keyring``. ::

      sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
      sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring

#. Change the owner of ``ceph.mon.keyring``. ::

      sudo chown ceph:ceph /tmp/ceph.mon.keyring

#. Generate a monitor map using the hostname(s), host IP address(es) and the FSID.
   Save it as ``/tmp/monmap``::

      monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap

   For example::

      monmaptool --create --add mon-node1 192.168.0.1 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap


#. Create a default data directory (or directories) on the monitor host(s). ::

      sudo -u ceph mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}

   For example::

      sudo -u ceph mkdir /var/lib/ceph/mon/ceph-mon-node1

   See `Monitor Config Reference - Data`_ for details.

#. Populate the monitor daemon(s) with the monitor map and keyring. ::

      sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

   For example::

      sudo -u ceph ceph-mon --mkfs -i mon-node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

#. Consider settings for a Ceph configuration file. Common settings include
   the following::

      [global]
      fsid = {cluster-id}
      mon initial members = {hostname}[, {hostname}]
      mon host = {ip-address}[, {ip-address}]
      public network = {network}[, {network}]
      cluster network = {network}[, {network}]
      auth cluster required = cephx
      auth service required = cephx
      auth client required = cephx
      osd journal size = {n}
      osd pool default size = {n}  # Write an object n times.
      osd pool default min size = {n} # Allow writing n copies in a degraded state.
      osd pool default pg num = {n}
      osd pool default pgp num = {n}
      osd crush chooseleaf type = {n}

   In the foregoing example, the ``[global]`` section of the configuration might
   look like this::

      [global]
      fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
      mon initial members = mon-node1
      mon host = 192.168.0.1
      public network = 192.168.0.0/24
      auth cluster required = cephx
      auth service required = cephx
      auth client required = cephx
      osd journal size = 1024
      osd pool default size = 3
      osd pool default min size = 2
      osd pool default pg num = 333
      osd pool default pgp num = 333
      osd crush chooseleaf type = 1

#. Start the monitor(s).

   Start the service with systemd (a note on enabling the unit at boot
   follows this procedure)::

      sudo systemctl start ceph-mon@mon-node1

#. Open the firewall ports for ceph-mon.

   Open the ports with firewalld::

      sudo firewall-cmd --zone=public --add-service=ceph-mon
      sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent

#. Verify that the monitor is running. ::

      sudo ceph -s

   You should see output showing that the monitor you started is up and
   running, and you may see a health warning indicating that placement groups
   are stuck inactive. The output should look something like this::

      cluster:
        id:     a7f64266-0894-4f1e-a635-d0aeaca0e993
        health: HEALTH_OK

      services:
        mon: 1 daemons, quorum mon-node1
        mgr: mon-node1(active)
        osd: 0 osds: 0 up, 0 in

      data:
        pools:   0 pools, 0 pgs
        objects: 0 objects, 0 bytes
        usage:   0 kB used, 0 kB / 0 kB avail
        pgs:

   **Note:** Once you add OSDs and start them, any placement group health
   warnings should disappear. See `Adding OSDs`_ for details.

Manager daemon configuration
============================

On each node where you run a ceph-mon daemon, you should also set up a ceph-mgr daemon.

See :ref:`mgr-administrator-guide`
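
A minimal sketch of what that typically involves, assuming the default cluster
name and the ``mon-node1`` host used above (see the linked guide for the
authoritative procedure)::

   sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-mon-node1
   sudo ceph auth get-or-create mgr.mon-node1 mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-mon-node1/keyring
   sudo chown -R ceph:ceph /var/lib/ceph/mgr/ceph-mon-node1
   sudo systemctl start ceph-mgr@mon-node1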


Adding OSDs
===========

Once you have your initial monitor(s) running, you should add OSDs. Your cluster
cannot reach an ``active + clean`` state until you have enough OSDs to handle the
number of copies of an object (e.g., ``osd pool default size = 2`` requires at
least two OSDs). After bootstrapping your monitor, your cluster has a default
CRUSH map; however, the CRUSH map doesn't have any Ceph OSD Daemons mapped to
a Ceph Node.


Short Form
----------

Ceph provides the ``ceph-volume`` utility, which can prepare a logical volume, disk, or partition
for use with Ceph. The ``ceph-volume`` utility creates the OSD ID automatically by
taking the next available ID. Additionally, ``ceph-volume`` will add the new OSD to the
CRUSH map under the host for you. Execute ``ceph-volume -h`` for CLI details.
The ``ceph-volume`` utility automates the steps of the `Long Form`_ below. To
create the first two OSDs with the short form procedure, execute the following for each OSD:

bluestore
^^^^^^^^^
#. Create the OSD. ::

      copy /var/lib/ceph/bootstrap-osd/ceph.keyring from the monitor node (mon-node1) to /var/lib/ceph/bootstrap-osd/ceph.keyring on the osd node (osd-node1)
      ssh {osd node}
      sudo ceph-volume lvm create --data {data-path}

   For example::

      scp -3 root@mon-node1:/var/lib/ceph/bootstrap-osd/ceph.keyring root@osd-node1:/var/lib/ceph/bootstrap-osd/ceph.keyring

      ssh osd-node1
      sudo ceph-volume lvm create --data /dev/hdd1

Alternatively, the creation process can be split into two phases (prepare and
activate):

#. Prepare the OSD. ::

      ssh {osd node}
      sudo ceph-volume lvm prepare --data {data-path}

   For example::

      ssh osd-node1
      sudo ceph-volume lvm prepare --data /dev/hdd1

   Once prepared, the ``ID`` and ``FSID`` of the prepared OSD are required for
   activation. These can be obtained by listing the OSDs on the current server::

      sudo ceph-volume lvm list

#. Activate the OSD::

      sudo ceph-volume lvm activate {ID} {FSID}

   For example::

      sudo ceph-volume lvm activate 0 a7f64266-0894-4f1e-a635-d0aeaca0e993
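
If more than one OSD has been prepared on the node, they can be activated in a
single pass; a minimal sketch, assuming a ``ceph-volume`` release that provides
the ``--all`` flag::

   sudo ceph-volume lvm activate --all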


filestore
^^^^^^^^^
#. Create the OSD. ::

      ssh {osd node}
      sudo ceph-volume lvm create --filestore --data {data-path} --journal {journal-path}

   For example::

      ssh osd-node1
      sudo ceph-volume lvm create --filestore --data /dev/hdd1 --journal /dev/hdd2

Alternatively, the creation process can be split into two phases (prepare and
activate):

#. Prepare the OSD. ::

      ssh {node-name}
      sudo ceph-volume lvm prepare --filestore --data {data-path} --journal {journal-path}

   For example::

      ssh osd-node1
      sudo ceph-volume lvm prepare --filestore --data /dev/hdd1 --journal /dev/hdd2

   Once prepared, the ``ID`` and ``FSID`` of the prepared OSD are required for
   activation. These can be obtained by listing the OSDs on the current server::

      sudo ceph-volume lvm list

#. Activate the OSD::

      sudo ceph-volume lvm activate --filestore {ID} {FSID}

   For example::

      sudo ceph-volume lvm activate --filestore 0 a7f64266-0894-4f1e-a635-d0aeaca0e993


Long Form
---------

Without the benefit of any helper utilities, create an OSD and add it to the
cluster and CRUSH map with the following procedure. To create the first two
OSDs with the long form procedure, execute the following steps for each OSD.

.. note:: This procedure does not describe deployment on top of dm-crypt
          making use of the dm-crypt 'lockbox'.

#. Connect to the OSD host and become root. ::

      ssh {node-name}
      sudo bash

#. Generate a UUID for the OSD. ::

      UUID=$(uuidgen)

#. Generate a cephx key for the OSD. ::

      OSD_SECRET=$(ceph-authtool --gen-print-key)

#. Create the OSD. Note that an OSD ID can be provided as an
   additional argument to ``ceph osd new`` if you need to reuse a
   previously destroyed OSD ID. We assume that the
   ``client.bootstrap-osd`` key is present on the machine. You may
   alternatively execute this command as ``client.admin`` on a
   different host where that key is present. ::

      ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \
         ceph osd new $UUID -i - \
         -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)

   It is also possible to include a ``crush_device_class`` property in the JSON
   to set an initial class other than the default (``ssd`` or ``hdd``, based on
   the auto-detected device type); a sketch follows this procedure.

#. Create the default directory on your new OSD. ::

      mkdir /var/lib/ceph/osd/ceph-$ID

#. If the OSD is for a drive other than the OS drive, prepare it
   for use with Ceph, and mount it to the directory you just created. ::

      mkfs.xfs /dev/{DEV}
      mount /dev/{DEV} /var/lib/ceph/osd/ceph-$ID

#. Write the secret to the OSD keyring file. ::

      ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring \
           --name osd.$ID --add-key $OSD_SECRET

#. Initialize the OSD data directory. ::

      ceph-osd -i $ID --mkfs --osd-uuid $UUID

#. Fix ownership. ::

      chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID

#. After you add an OSD to Ceph, the OSD is in your configuration. However,
   it is not yet running. You must start
   your new OSD before it can begin receiving data.

   For modern systemd distributions::

      systemctl enable ceph-osd@$ID
      systemctl start ceph-osd@$ID

   For example::

      systemctl enable ceph-osd@12
      systemctl start ceph-osd@12
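
As noted in the OSD-creation step above, an initial CRUSH device class can be
assigned when the OSD is created by adding a ``crush_device_class`` property
to the JSON. A minimal sketch, using a hypothetical ``nvme`` class in place of
the auto-detected one::

   ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\", \"crush_device_class\": \"nvme\"}" | \
      ceph osd new $UUID -i - \
      -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)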


Adding MDS
==========

In the instructions below, ``{id}`` is an arbitrary name, such as the hostname of the machine.

#. Create the mds data directory. ::

      mkdir -p /var/lib/ceph/mds/{cluster-name}-{id}

#. Create a keyring. ::

      ceph-authtool --create-keyring /var/lib/ceph/mds/{cluster-name}-{id}/keyring --gen-key -n mds.{id}

#. Import the keyring and set caps. ::

      ceph auth add mds.{id} osd "allow rwx" mds "allow *" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster-name}-{id}/keyring

#. Add to ceph.conf. ::

      [mds.{id}]
      host = {id}

#. Start the daemon the manual way. ::

      ceph-mds --cluster {cluster-name} -i {id} -m {mon-hostname}:{mon-port} [-f]

#. Start the daemon the right way (using ceph.conf entry). ::

      service ceph start

#. If starting the daemon fails with this error::

      mds.-1.0 ERROR: failed to authenticate: (22) Invalid argument

   Then make sure you do not have a ``keyring`` setting in the ``[global]``
   section of ceph.conf; either move it to the ``[client]`` section or add a
   keyring setting specific to this mds daemon. Also verify that the key in
   the mds data directory matches the output of ``ceph auth get mds.{id}``.

#. Now you are ready to `create a Ceph file system`_.
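
Putting the steps together, a worked sketch for a hypothetical MDS host named
``mds-node1``, the default cluster name ``ceph``, and a systemd-based
distribution might look like this::

   mkdir -p /var/lib/ceph/mds/ceph-mds-node1
   ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-mds-node1/keyring --gen-key -n mds.mds-node1
   ceph auth add mds.mds-node1 osd "allow rwx" mds "allow *" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-mds-node1/keyring
   chown -R ceph:ceph /var/lib/ceph/mds/ceph-mds-node1
   systemctl start ceph-mds@mds-node1

The corresponding ``[mds.mds-node1]`` section would still be added to
``ceph.conf`` as shown above.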


Summary
=======

Once you have your monitor and two OSDs up and running, you can watch the
placement groups peer by executing the following::

   ceph -w

To view the tree, execute the following::

   ceph osd tree

You should see output that looks something like this::

   # id    weight  type name       up/down reweight
   -1      2       root default
   -2      2               host osd-node1
   0       1                       osd.0   up      1
   -3      1               host osd-node2
   1       1                       osd.1   up      1

To add (or remove) additional monitors, see `Add/Remove Monitors`_.
To add (or remove) additional Ceph OSD Daemons, see `Add/Remove OSDs`_.


.. _Add/Remove Monitors: ../../rados/operations/add-or-rm-mons
.. _Add/Remove OSDs: ../../rados/operations/add-or-rm-osds
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Monitor Config Reference - Data: ../../rados/configuration/mon-config-ref#data
.. _create a Ceph file system: ../../cephfs/createfs