Ceph Server on Proxmox VE Cluster
---------------------------------

It is possible to install the Ceph storage server directly on the
Proxmox VE cluster nodes. The VMs and Containers can access that
storage using the
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)] storage
driver.


Precondition
~~~~~~~~~~~~

There should be at least three (preferably identical) servers, which together form a Proxmox VE cluster.


A 10Gb network, exclusively used for Ceph, is recommended. If no 10Gb switches are available, a full mesh network is
also an option, see {webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki].


Check also the recommendations from http://docs.ceph.com/docs/jewel/start/hardware-recommendations/[Ceph's Website].


Installation of Ceph Packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


On each node run the installation script as follows:

[source,bash]
----
pveceph install -version jewel
----


This sets up an 'apt' package repository in /etc/apt/sources.list.d/ceph.list and installs the required software.

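As an optional sanity check, you can for example inspect the generated repository file and the installed Ceph version:

[source,bash]
----
# show the repository added by 'pveceph install'
cat /etc/apt/sources.list.d/ceph.list
# print the installed Ceph release
ceph --version
----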
39Creating initial Ceph configuration
40~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
41
42After installation of packages, you need to create an initial Ceph configuration on just one node, based on your network (10.10.10.0/24 in the following example) dedicated for Ceph:
43
44[source,bash]
45----
46pveceph init --network 10.10.10.0/24
47----
48
2409e808 49This creates an initial config at /etc/pve/ceph.conf. That file is automatically distributed to all Proxmox VE nodes by using xref:chapter_pmxcfs[pmxcfs]. The command also creates a symbolic link from /etc/ceph/ceph.conf pointing to that file. So you can simply run Ceph commands without the need to specify a configuration file.
950229ff
FR
50
51
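For instance, you can verify that the symbolic link is in place and points to the shared configuration file:

[source,bash]
----
# should point to /etc/pve/ceph.conf
ls -l /etc/ceph/ceph.conf
----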


Creating Ceph Monitors
~~~~~~~~~~~~~~~~~~~~~~

On each node where a monitor is requested (at least 3 are recommended), create it by using the "Ceph" item in the GUI or run:

[source,bash]
----
pveceph createmon
----

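Once the monitors have been created, you can for example check the monitor map and the overall cluster status:

[source,bash]
----
# summary of the monitor map and quorum
ceph mon stat
# overall cluster status
ceph -s
----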


Creating Ceph OSDs
~~~~~~~~~~~~~~~~~~


Create OSDs via the GUI, or via the CLI as follows:

[source,bash]
----
pveceph createosd /dev/sd[X]
----

If you want to use a dedicated SSD journal disk:

NOTE: In order to use a dedicated journal disk (SSD), the disk needs to have a GPT partition table. You can create this with 'gdisk /dev/sd(x)'. If there is no GPT, you cannot select the disk as journal. Currently the journal size is fixed to 5 GB.

[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
----

Example: /dev/sdf as the data disk (4 TB) and /dev/sdb as the dedicated SSD journal disk:

[source,bash]
----
pveceph createosd /dev/sdf -journal_dev /dev/sdb
----

This partitions the disk (data and journal partition), creates filesystems and starts the OSD; afterwards it is running and fully functional. Please create at least 12 OSDs, distributed among your nodes (4 on each node).

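If you want to see which partitions were created, you can for example list the Ceph data and journal partitions on the node ('ceph-disk' is installed together with the Ceph packages):

[source,bash]
----
# list disks and the Ceph data/journal partitions on them
ceph-disk list
----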

It should be noted that this command refuses to initialize a disk when it detects existing data. So if you want to overwrite a disk, you should remove the existing data first. You can do that using:

[source,bash]
----
ceph-disk zap /dev/sd[X]
----

You can create OSDs containing both journal and data partitions, or you can place the journal on a dedicated SSD. Using an SSD journal disk is highly recommended if you expect good performance.

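To verify that the new OSDs are up and have joined the cluster, you can for example inspect the OSD tree:

[source,bash]
----
# every OSD should show up with status 'up' under its node
ceph osd tree
----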


Ceph Pools
~~~~~~~~~~


The standard installation creates the pool 'rbd' by default; additional pools can be created via the GUI.

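If you prefer the command line, you can for example list the existing pools or create an additional one; the pool name and placement group count below are purely illustrative values:

[source,bash]
----
# list all pools
ceph osd lspools
# 'mypool' and the pg_num/pgp_num of 64 are example values only
ceph osd pool create mypool 64 64
----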


Ceph Client
~~~~~~~~~~~


You can then configure Proxmox VE to use such pools to store VM images; just use the GUI ("Add Storage": RBD, see section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location.

NOTE: The file name needs to be the storage id + '.keyring'. The storage id is the expression after 'rbd:' in /etc/pve/storage.cfg, which is 'my-ceph-storage' in the following example:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----

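As a final, optional check, you can list the RBD images in the pool from one of the cluster nodes; the list will simply be empty until the first VM disk has been created there:

[source,bash]
----
# list RBD images in the default 'rbd' pool
rbd ls rbd
----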