=================
 Add/Remove OSDs
=================

Adding Ceph OSD Daemons to your cluster and removing them may involve a few
more steps than adding and removing other Ceph daemons. Ceph OSD Daemons write
data to the disk and to journals, so you need to provide a disk for the OSD
and a path to the journal partition (this is the most common configuration,
but you may configure your system to your own needs).

In Ceph v0.60 and later releases, Ceph supports on-disk encryption via
``dm-crypt``. You may specify the ``--dmcrypt`` argument when preparing an
OSD to tell ``ceph-deploy`` that you want to use encryption. You may also
specify the ``--dmcrypt-key-dir`` argument to specify the location of
``dm-crypt`` encryption keys.

You should test various drive configurations to gauge their throughput before
building out a large cluster. See `Data Storage`_ for additional details.
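
One quick way to gauge a drive's sequential write throughput is ``dd`` with
direct I/O (a minimal sketch; ``/dev/sdX`` is a placeholder for a disk whose
contents you can afford to destroy)::

        sudo dd if=/dev/zero of=/dev/sdX bs=4M count=256 oflag=direct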
19 | ||
20 | ||
21 | List Disks | |
22 | ========== | |
23 | ||
To list the disks on a node, execute the following command::

        ceph-deploy disk list {node-name [node-name]...}
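
For example, for a hypothetical node named ``osdserver1``::

        ceph-deploy disk list osdserver1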
27 | ||
28 | ||
29 | Zap Disks | |
30 | ========= | |
31 | ||
32 | To zap a disk (delete its partition table) in preparation for use with Ceph, | |
33 | execute the following:: | |
34 | ||
        ceph-deploy disk zap {osd-server-name} {disk-name}
        ceph-deploy disk zap osdserver1 /dev/sdb /dev/sdc

.. important:: This will delete all data.


Create OSDs
===========

Once you create a cluster, install Ceph packages, and gather keys, you
may create the OSDs and deploy them to the OSD node(s). If you need to
identify a disk or zap it prior to preparing it for use as an OSD,
see `List Disks`_ and `Zap Disks`_. ::

        ceph-deploy osd create --data {data-disk} {node-name}

For example::

        ceph-deploy osd create --data /dev/ssd osd-server1

For BlueStore (the default), the example assumes a disk dedicated to one Ceph
OSD Daemon. FileStore is also supported, in which case the ``--journal`` flag
must be used in addition to ``--filestore`` to define the journal device on
the remote host.
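
For example, a FileStore OSD with a separate journal device (hypothetical
device and host names)::

        ceph-deploy osd create --data /dev/sdb --journal /dev/sdc --filestore osd-server1
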
.. note:: When running multiple Ceph OSD daemons on a single node, and
   sharing a partitioned journal with each OSD daemon, you should consider
   the entire node the minimum failure domain for CRUSH purposes, because
   if the SSD drive fails, all of the Ceph OSD daemons that journal to it
   will fail too.
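
   For reference, a replicated CRUSH rule that treats the host as the
   failure domain looks like the following (a sketch of the common default
   rule; adjust to your own CRUSH map)::

        rule replicated_rule {
                id 0
                type replicated
                min_size 1
                max_size 10
                step take default
                step chooseleaf firstn 0 type host
                step emit
        }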
65 | ||
66 | ||
11fdf7f2 TL |
67 | List OSDs |
68 | ========= | |
7c673cae | 69 | |
To list the OSDs deployed on a node, execute the following command::

        ceph-deploy osd list {node-name}
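
For example, for the hypothetical ``osdserver1``::

        ceph-deploy osd list osdserver1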


Destroy OSDs
============

.. note:: Coming soon. See `Remove OSDs`_ for manual procedures.

.. To destroy an OSD, execute the following command::

.. ceph-deploy osd destroy {node-name}:{path-to-disk}[:{path/to/journal}]

.. Destroying an OSD will take it ``down`` and ``out`` of the cluster.

.. _Data Storage: ../../../start/hardware-recommendations#data-storage
.. _Remove OSDs: ../../operations/add-or-rm-osds#removing-osds-manual