[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
pveceph - Manage Ceph Services on Proxmox VE Nodes
==================================================
endif::manvolnum[]

It is possible to install the {ceph} storage server directly on the
Proxmox VE cluster nodes. The VMs and Containers can access that
storage using the xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]
storage driver.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.


Precondition
------------

There should be at least three (preferably identical) servers for the
setup, which together form a {pve} cluster.

A 10Gb network, exclusively used for Ceph, is recommended. If no 10Gb
switches are available, a meshed network is also an option, see
{webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki].


Also check the recommendations from
http://docs.ceph.com/docs/jewel/start/hardware-recommendations/[Ceph's website].


Installation of Ceph Packages
-----------------------------

On each node run the installation script as follows:

[source,bash]
----
pveceph install -version jewel
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.
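
For example, you can check afterwards which Ceph version was installed
and inspect the repository file set up by the script:

[source,bash]
----
# show the installed Ceph version
ceph --version
# show the repository configured by 'pveceph install'
cat /etc/apt/sources.list.d/ceph.list
----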


Creating initial Ceph configuration
-----------------------------------

After installation of the packages, you need to create an initial Ceph
configuration on just one node, based on the network dedicated to Ceph
(`10.10.10.0/24` in the following example):

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial config at `/etc/pve/ceph.conf`. That file is
automatically distributed to all Proxmox VE nodes by using
xref:chapter_pmxcfs[pmxcfs]. The command also creates a symbolic link
from `/etc/ceph/ceph.conf` pointing to that file, so you can simply run
Ceph commands without the need to specify a configuration file.
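
To verify this, you can, for example, inspect the symbolic link and the
generated configuration on any node (this assumes the init step above
completed without errors):

[source,bash]
----
# the link should point to the cluster-wide file on /etc/pve
ls -l /etc/ceph/ceph.conf
# show the generated configuration
cat /etc/pve/ceph.conf
----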


Creating Ceph Monitors
----------------------

On each node where a monitor is requested (three monitors are recommended),
create it by using the "Ceph" item in the GUI, or run:


[source,bash]
----
pveceph createmon
----

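Once the monitors are up, their status and quorum can be checked with the
standard Ceph tools, for example:

[source,bash]
----
# overall cluster status, including the monitor quorum
ceph -s
# short summary of the configured monitors
ceph mon stat
----
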

Creating Ceph OSDs
------------------

Create OSDs via the GUI, or via the CLI as follows:

[source,bash]
----
pveceph createosd /dev/sd[X]
----

If you want to use a dedicated SSD journal disk:

NOTE: In order to use a dedicated journal disk (SSD), the disk needs
to have a https://en.wikipedia.org/wiki/GUID_Partition_Table[GPT]
partition table. You can create this with `gdisk /dev/sd[X]`. If there
is no GPT, you cannot select the disk as journal. Currently the
journal size is fixed to 5 GB.

[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[X]
----

Example: Use /dev/sdf as the data disk (4TB) and /dev/sdb as the dedicated
SSD journal disk.

[source,bash]
----
pveceph createosd /dev/sdf -journal_dev /dev/sdb
----

This partitions the disk into a data and a journal partition, creates
the filesystems and starts the OSD. Afterwards it is running and fully
functional. Please create at least 12 OSDs, distributed among your
nodes (4 OSDs on each node).
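
After the OSDs have been created on all nodes, you can, for example, use the
standard Ceph tools to check that they are up and distributed as expected:

[source,bash]
----
# list all OSDs with their host and up/down status
ceph osd tree
# overall cluster health and raw capacity
ceph -s
----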

It should be noted that this command refuses to initialize a disk when
it detects existing data. So if you want to overwrite a disk, you
should remove the existing data first. You can do that using:

[source,bash]
----
ceph-disk zap /dev/sd[X]
----

You can create OSDs containing both journal and data partitions, or you
can place the journal on a dedicated SSD. Using an SSD journal disk is
highly recommended to achieve good performance.


Ceph Pools
----------

The standard installation creates the pool 'rbd' by default; additional
pools can be created via the GUI.
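
Pools can also be created on the command line with the standard `ceph` tool.
The following is only a sketch; the pool name `mypool` and the placement
group count of 64 are example values and should be chosen to match the size
of your setup:

[source,bash]
----
# create a pool named 'mypool' with 64 placement groups
ceph osd pool create mypool 64
----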


Ceph Client
-----------

You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location.

NOTE: The file name needs to be `<storage_id>` + `.keyring` - `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`, which is
`my-ceph-storage` in the following example:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----
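
For reference, a matching `RBD` storage definition in `/etc/pve/storage.cfg`
could look roughly like the following. The monitor addresses and the pool are
example values only (here taken from the example network and default pool used
earlier in this chapter) and need to be adjusted to your cluster:

----
rbd: my-ceph-storage
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool rbd
        content images
        username admin
----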


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]