=================
 Add/Remove OSDs
=================

Adding Ceph OSD Daemons to your cluster and removing them may involve a few
more steps than adding and removing other Ceph daemons. Ceph OSD Daemons write
data to a disk and to a journal, so you need to provide a disk for the OSD and
a path to the journal partition (this is the most common configuration, but
you may configure your system to your own needs).

In Ceph v0.60 and later releases, Ceph supports ``dm-crypt`` on-disk
encryption. You may specify the ``--dmcrypt`` argument when preparing an OSD
to tell ``ceph-deploy`` that you want to use encryption. You may also use the
``--dmcrypt-key-dir`` argument to specify the location of the ``dm-crypt``
encryption keys.

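For example, assuming the ``ceph-deploy osd create`` syntax shown in
`Create OSDs`_ below, and using ``osd-server1`` and ``/dev/sdb`` as
placeholders for your own host and data device, an encrypted OSD might be
created with something like::

    ceph-deploy osd create --data /dev/sdb --dmcrypt osd-server1
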
You should test various drive configurations to gauge their throughput before
building out a large cluster. See `Data Storage`_ for additional details.


List Disks
==========

To list the disks on a node, execute the following command::

    ceph-deploy disk list {node-name [node-name]...}

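For example, to inspect the disks on two hypothetical OSD hosts at once::

    ceph-deploy disk list osdserver1 osdserver2
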
Zap Disks
=========

To zap a disk (delete its partition table) in preparation for use with Ceph,
execute the following::

    ceph-deploy disk zap {osd-server-name}:{disk-name}
    ceph-deploy disk zap osdserver1:sdb

.. important:: This will delete all data.


Create OSDs
===========

Once you create a cluster, install Ceph packages, and gather keys, you
may create the OSDs and deploy them to the OSD node(s). If you need to
identify a disk or zap it prior to preparing it for use as an OSD,
see `List Disks`_ and `Zap Disks`_. ::

    ceph-deploy osd create --data {data-disk} {node-name}

For example::

    ceph-deploy osd create --data /dev/ssd osd-server1

For bluestore (the default), the example assumes a disk dedicated to one Ceph
OSD Daemon. Filestore is also supported, in which case a ``--journal`` flag
must be used in addition to ``--filestore`` to define the journal device on
the remote host.

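For instance, a filestore OSD with its journal on a separate device might be
created with something like the following, where ``/dev/sdb``, ``/dev/sdc``,
and ``osd-server1`` are placeholders for your own data device, journal
device, and host::

    ceph-deploy osd create --filestore --data /dev/sdb --journal /dev/sdc osd-server1
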
.. note:: When running multiple Ceph OSD daemons on a single node, and
   sharing a partitioned journal with each OSD daemon, you should consider
   the entire node the minimum failure domain for CRUSH purposes, because
   if the SSD drive fails, all of the Ceph OSD daemons that journal to it
   will fail too.


List OSDs
=========

To list the OSDs deployed on a node (or several nodes), execute the following
command::

    ceph-deploy osd list {node-name}

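For example, with a hypothetical node name ``osd-server1``::

    ceph-deploy osd list osd-server1
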
Destroy OSDs
============

.. note:: Coming soon. See `Remove OSDs`_ for manual procedures.

.. To destroy an OSD, execute the following command::

.. ceph-deploy osd destroy {node-name}:{path-to-disk}[:{path/to/journal}]

.. Destroying an OSD will take it ``down`` and ``out`` of the cluster.

.. _Data Storage: ../../../start/hardware-recommendations#data-storage
.. _Remove OSDs: ../../operations/add-or-rm-osds#removing-osds-manual