[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
pveceph - Manage Ceph Services on Proxmox VE Nodes
==================================================
endif::manvolnum[]

It is possible to install the {ceph} storage server directly on the
Proxmox VE cluster nodes. VMs and Containers can access that storage
using the xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]
storage driver.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.
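
You can print an overview of the available subcommands with the tool's
built-in help, for example:

[source,bash]
----
pveceph help
----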


Precondition
------------

There should be at least three (preferably identical) servers for the
setup, which together form a Proxmox VE cluster.

A 10Gb network, exclusively used for Ceph, is recommended. If no 10Gb
switches are available, a meshed network is also an option, see
{webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki].


Also check the recommendations from
http://docs.ceph.com/docs/jewel/start/hardware-recommendations/[Ceph's website].


Installation of Ceph Packages
-----------------------------

On each node run the installation script as follows:

[source,bash]
----
pveceph install -version jewel
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.
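
As an optional sanity check you can, for example, look at the repository
entry that was written and query the Ceph release that is now installed:

[source,bash]
----
# show the repository entry added by 'pveceph install'
cat /etc/apt/sources.list.d/ceph.list
# confirm the installed Ceph release
ceph --version
----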


Creating initial Ceph configuration
-----------------------------------

After installing the packages, you need to create an initial Ceph
configuration on just one node, based on the network dedicated to Ceph
(`10.10.10.0/24` in the following example):

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial config at `/etc/pve/ceph.conf`. That file is
automatically distributed to all Proxmox VE nodes by using
xref:chapter_pmxcfs[pmxcfs]. The command also creates a symbolic link
from `/etc/ceph/ceph.conf` pointing to that file, so you can simply run
Ceph commands without the need to specify a configuration file.
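
As a quick, optional check you can verify on any node that the symbolic
link is in place and points at the cluster-wide file:

[source,bash]
----
# /etc/ceph/ceph.conf should be a symlink to the file on the pmxcfs
ls -l /etc/ceph/ceph.conf /etc/pve/ceph.conf
----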


Creating Ceph Monitors
----------------------

On each node where a monitor is requested (three monitors are
recommended), create it by using the "Ceph" item in the GUI or by
running:

[source,bash]
----
pveceph createmon
----
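
Once monitors exist on all chosen nodes you can, for example, verify from
any of them that a quorum has been formed:

[source,bash]
----
# summary of the monitor map and quorum status
ceph mon stat
# or the full cluster status
ceph -s
----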


Creating Ceph OSDs
------------------

Create OSDs via the GUI, or via the CLI as follows:

[source,bash]
----
pveceph createosd /dev/sd[X]
----

If you want to use a dedicated SSD journal disk:

NOTE: In order to use a dedicated journal disk (SSD), the disk needs
to have a https://en.wikipedia.org/wiki/GUID_Partition_Table[GPT]
partition table. You can create this with `gdisk /dev/sd(x)`. If there
is no GPT, you cannot select the disk as journal. Currently the
journal size is fixed to 5 GB.
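
If you prefer a non-interactive way to create the GPT, a sketch using
`sgdisk` (shipped in the same package as `gdisk`) could look like this;
be aware that it wipes the partition table of the given disk:

[source,bash]
----
# create a fresh, empty GPT on the journal SSD (destroys the existing partition table!)
sgdisk --clear /dev/sd[X]
----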

[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[X]
----

Example: Use /dev/sdf as the data disk (4TB) and /dev/sdb as the
dedicated SSD journal disk.

[source,bash]
----
pveceph createosd /dev/sdf -journal_dev /dev/sdb
----

This partitions the disk (data and journal partition), creates
filesystems and starts the OSD; afterwards it is running and fully
functional. Please create at least 12 OSDs, distributed among your
nodes (4 OSDs on each node).
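
To watch the OSDs come up and check how they are distributed across your
nodes, you can, for example, run:

[source,bash]
----
# list all OSDs with the host they run on and their current status
ceph osd tree
----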

Note that this command refuses to initialize a disk when it detects
existing data. So if you want to overwrite a disk, you should remove
the existing data first. You can do that using:

[source,bash]
----
ceph-disk zap /dev/sd[X]
----

You can create OSDs containing both journal and data partitions, or you
can place the journal on a dedicated SSD. Using an SSD journal disk is
highly recommended if you want good performance.


Ceph Pools
----------

The standard installation creates the pool 'rbd' by default; additional
pools can be created via the GUI.
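
If you prefer the command line, a pool can also be created with the
standard Ceph tooling; a minimal sketch (the pool name and placement
group count below are example values, pick a PG count that matches your
number of OSDs):

[source,bash]
----
# create an example pool with 128 placement groups
ceph osd pool create vm-pool 128
----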


Ceph Client
-----------

You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).
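
The resulting entry in `/etc/pve/storage.cfg` might look roughly like the
following sketch; the storage ID `my-ceph-storage` and the monitor
addresses (taken from the example network above) are placeholders:

----
rbd: my-ceph-storage
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool rbd
        content images
        username admin
----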

You also need to copy the keyring to a predefined location.

NOTE: The file name needs to be `<storage_id>` + `.keyring` - `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`, which is
`my-ceph-storage` in the following example:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]