=================
 Add/Remove OSDs
=================

Adding and removing Ceph OSD Daemons to your cluster may involve a few more
steps than adding and removing other Ceph daemons. Ceph OSD Daemons write
data to disk and to journals, so you need to provide a disk for the OSD and
a path to the journal partition (this is the most common configuration, but
you may configure your system to your own needs).

In Ceph v0.60 and later releases, Ceph supports ``dm-crypt`` on-disk
encryption. You may specify the ``--dmcrypt`` argument when preparing an OSD
to tell ``ceph-deploy`` that you want to use encryption. You may also specify
the ``--dmcrypt-key-dir`` argument to set the location of the ``dm-crypt``
encryption keys.

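For example, these flags might be combined with the ``osd create`` command
described below. A minimal sketch, in which the device name, hostname, and
key directory are placeholders::

        ceph-deploy osd create --data /dev/sdb --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys osd-server1
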
You should test various drive configurations to gauge their throughput
before building out a large cluster. See `Data Storage`_ for additional
details.


List Disks
==========

To list the disks on a node, execute the following command::

        ceph-deploy disk list {node-name [node-name]...}

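For example, with ``osdserver1`` standing in for the hostname of an OSD node::

        ceph-deploy disk list osdserver1

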
Zap Disks
=========

To zap a disk (delete its partition table) in preparation for use with Ceph,
execute the following::

        ceph-deploy disk zap {osd-server-name} {disk-name} [{disk-name}...]
        ceph-deploy disk zap osdserver1 /dev/sdb /dev/sdc

.. important:: This will delete all data on the zapped disks.


Create OSDs
===========

Once you create a cluster, install Ceph packages, and gather keys, you
may create the OSDs and deploy them to the OSD node(s). If you need to
identify a disk or zap it prior to preparing it for use as an OSD,
see `List Disks`_ and `Zap Disks`_. ::

        ceph-deploy osd create --data {data-disk} {node-name}

For example::

        ceph-deploy osd create --data /dev/ssd osd-server1

For BlueStore (the default), the example assumes a disk dedicated to one Ceph
OSD Daemon. FileStore is also supported, in which case the ``--journal`` flag
must be used in addition to ``--filestore`` to define the journal device on
the remote host.

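For example, a FileStore OSD with its journal on a separate device might be
created as follows, where ``/dev/sdb``, ``/dev/sdc``, and ``osd-server1`` are
placeholder device names and hostname::

        ceph-deploy osd create --filestore --data /dev/sdb --journal /dev/sdc osd-server1
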
.. note:: When running multiple Ceph OSD daemons on a single node, and
   sharing a partitioned journal device among those OSD daemons, you should
   consider the entire node the minimum failure domain for CRUSH purposes,
   because if the SSD drive fails, all of the Ceph OSD daemons that journal
   to it will fail too.


List OSDs
=========

To list the OSDs deployed on a node (or nodes), execute the following
command::

        ceph-deploy osd list {node-name}

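For example, again using ``osdserver1`` as a placeholder hostname::

        ceph-deploy osd list osdserver1

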
Destroy OSDs
============

.. note:: Coming soon. See `Remove OSDs`_ for manual procedures.

.. To destroy an OSD, execute the following command::

.. ceph-deploy osd destroy {node-name}:{path-to-disk}[:{path/to/journal}]

.. Destroying an OSD will take it ``down`` and ``out`` of the cluster.

.. _Data Storage: ../../../start/hardware-recommendations#data-storage
.. _Remove OSDs: ../../operations/add-or-rm-osds#removing-osds-manual