Ceph Server in Proxmox VE Cluster
---------------------------------


It is possible to install a Ceph server for RADOS Block Devices (RBD) directly on the Proxmox VE cluster nodes, see
xref:ceph_rados_block_devices[chapter Ceph RADOS Block Devices (RBD)].


Precondition
~~~~~~~~~~~~

There should be at least three (preferably identical) servers for the setup, which together build a Proxmox VE cluster.


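You can verify that all nodes have already joined the Proxmox VE cluster before starting the Ceph setup, for example:

[source,bash]
----
# show cluster membership and quorum information
pvecm status
----
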
A 10Gb network is recommended, exclusively used for Ceph. If no 10Gb switches are available, a meshed network is
also an option, see {webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki].


Check also the recommendations from http://docs.ceph.com/docs/jewel/start/hardware-recommendations/[Ceph's website].


Installation of Ceph Packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


On each node run the installation script as follows:

[source,bash]
----
pveceph install -version jewel
----


This sets up an 'apt' package repository in /etc/apt/sources.list.d/ceph.list and installs the required software.
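
Optionally, you can check the repository entry and the installed Ceph version afterwards, for example:

[source,bash]
----
# show the APT repository entry created by 'pveceph install'
cat /etc/apt/sources.list.d/ceph.list
# print the installed Ceph version
ceph --version
----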


Creating Initial Ceph Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After installing the packages, you need to create an initial Ceph configuration on just one node, based on the network dedicated to Ceph (10.10.10.0/24 in the following example):

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial config at /etc/pve/ceph.conf. That file is automatically distributed to all Proxmox VE nodes by using xref:chapter_pmxcfs[pmxcfs]. The command also creates a symbolic link from /etc/ceph/ceph.conf pointing to that file, so you can simply run Ceph commands without the need to specify a configuration file.
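
You can check the link and the generated configuration on any node, for example:

[source,bash]
----
# /etc/ceph/ceph.conf is a symbolic link to the cluster-wide file on pmxcfs
ls -l /etc/ceph/ceph.conf
cat /etc/pve/ceph.conf
----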


Creating Ceph Monitors
~~~~~~~~~~~~~~~~~~~~~~

On each node where a monitor is requested (at least three are recommended), create it by using the "Ceph" item in the GUI or run:


[source,bash]
----
pveceph createmon
----
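
After the monitors have been created, you can check that they are up and have formed a quorum, for example:

[source,bash]
----
# overall cluster state, including the monitor quorum
ceph -s
# compact summary of the monitors
ceph mon stat
----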


Creating Ceph OSDs
~~~~~~~~~~~~~~~~~~


You can create OSDs via the GUI or via the CLI as follows:

[source,bash]
----
pveceph createosd /dev/sd[X]
----

If you want to use a dedicated SSD journal disk:

NOTE: In order to use a dedicated journal disk (SSD), the disk needs to have a GPT partition table. You can create this with 'gdisk /dev/sd(x)'. If there is no GPT, you cannot select the disk as a journal. Currently the journal size is fixed to 5 GB.
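
If you prefer a non-interactive way to prepare such a disk, a GPT label can also be written with 'parted'; this is only a sketch and, like 'gdisk', it overwrites whatever partition table exists (the device name is just an example):

[source,bash]
----
# write an empty GPT partition table - this wipes the existing partition table!
parted -s /dev/sdb mklabel gpt
----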


[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[X]
----

Example: /dev/sdf as data disk (4 TB) and /dev/sdb as the dedicated SSD journal disk:

[source,bash]
----
pveceph createosd /dev/sdf -journal_dev /dev/sdb
----


This partitions the disk (data and journal partitions), creates the filesystems and starts the OSD. Afterwards it is running and fully functional. Please create at least 12 OSDs, distributed among your nodes (4 on each node).
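
You can check that the new OSDs are up and have been added to the CRUSH map, for example:

[source,bash]
----
# list all OSDs with their host, weight and up/down status
ceph osd tree
----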

Note that this command refuses to initialize a disk when it detects existing data. So if you want to overwrite a disk, you should remove the existing data first. You can do that using:

[source,bash]
----
ceph-disk zap /dev/sd[X]
----


You can create OSDs containing both journal and data partitions, or you can place the journal on a dedicated SSD. Using an SSD journal disk is highly recommended if you expect good performance.
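
To see which partitions are used for data and which for journals, the 'ceph-disk' tool can list them, for example:

[source,bash]
----
# list disks and partitions together with their Ceph roles (data, journal)
ceph-disk list
----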



Ceph Pools
~~~~~~~~~~


The standard installation creates the pool 'rbd' by default; additional pools can be created via the GUI.
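
If you prefer the command line, a pool can also be created with the 'ceph' tool directly; the pool name and the number of placement groups below are only illustrative examples and have to be chosen to fit your number of OSDs:

[source,bash]
----
# create a replicated pool named 'mypool' with 64 placement groups
ceph osd pool create mypool 64
----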



Ceph Client
~~~~~~~~~~~


You can then configure Proxmox VE to use such pools to store VM images; just use the GUI ("Add Storage": RBD, see section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).


You also need to copy the keyring to a predefined location.

NOTE: The file name needs to be the storage id + '.keyring' - the storage id is the expression after 'rbd:' in /etc/pve/storage.cfg, which is 'my-ceph-storage' in the following example:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----
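
For reference, a matching RBD storage definition in /etc/pve/storage.cfg could look roughly like the following sketch; the monitor addresses are placeholders based on the example network above and have to be adapted to your cluster:

----
rbd: my-ceph-storage
        monhost 10.10.10.1;10.10.10.2;10.10.10.3
        pool rbd
        content images
        username admin
----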