[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Manage Ceph Services on Proxmox VE Nodes
========================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="gui-ceph-status.png"]

{pve} unifies your compute and storage systems, i.e. you can use the
same physical nodes within a cluster for both computing (processing
VMs and containers) and replicated storage. The traditional silos of
compute and storage resources can be wrapped up into a single
hyper-converged appliance. Separate storage networks (SANs) and
network-attached storage (NAS) connections are no longer needed. With
the integration of Ceph, an open source software-defined storage
platform, {pve} can run and manage Ceph storage directly on the
hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability. For smaller
deployments, it is possible to install a Ceph server for RADOS Block
Devices (RBD) directly on your {pve} cluster nodes, see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
hardware has plenty of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.


Precondition
------------

To build a Proxmox Ceph Cluster, there should be at least three (preferably
identical) servers for the setup.

A 10Gb network, exclusively used for Ceph, is recommended. A meshed
network setup is also an option if there are no 10Gb switches
available, see {webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki].

Also check the recommendations from
http://docs.ceph.com/docs/master/start/hardware-recommendations/[Ceph's website].


Installation of Ceph Packages
-----------------------------

On each node run the installation script as follows:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.

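
To verify what the script set up, you can inspect the repository file it
created and check that the Ceph packages are present (an optional sanity
check; the exact repository line depends on your {pve} and Ceph release):

[source,bash]
----
# show the repository entry written by 'pveceph install'
cat /etc/apt/sources.list.d/ceph.list
# confirm that the core Ceph packages are installed
dpkg -l ceph ceph-common
----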

Creating initial Ceph configuration
-----------------------------------

[thumbnail="gui-ceph-config.png"]

After installation of packages, you need to create an initial Ceph
configuration on just one node, based on your network (`10.10.10.0/24`
in the following example) dedicated for Ceph:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial config at `/etc/pve/ceph.conf`. That file is
automatically distributed to all {pve} nodes by using
xref:chapter_pmxcfs[pmxcfs]. The command also creates a symbolic link
from `/etc/ceph/ceph.conf` pointing to that file, so you can run Ceph
commands without needing to specify a configuration file.

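
Because the configuration lives on the clustered file system, you can check
on any node that the symlink and the shared file are in place (an optional
check; both paths are the ones mentioned above):

[source,bash]
----
# the symlink should point into the cluster file system
ls -l /etc/ceph/ceph.conf
# the shared configuration distributed by pmxcfs
cat /etc/pve/ceph.conf
----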

[[pve_ceph_monitors]]
Creating Ceph Monitors
----------------------

[thumbnail="gui-ceph-monitor.png"]

On each node where a monitor is requested (three monitors are recommended),
create it by using the "Ceph" item in the GUI or run:


[source,bash]
----
pveceph createmon
----

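
Once monitors have been created on all intended nodes, you can check that
they have formed a quorum (an optional check using standard Ceph commands):

[source,bash]
----
# overall cluster health, including the monitor quorum
ceph -s
# compact listing of the monitors and the current quorum
ceph mon stat
----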

[[pve_ceph_osds]]
Creating Ceph OSDs
------------------

[thumbnail="gui-ceph-osd-status.png"]

You can create an OSD via the GUI or via the CLI as follows:

[source,bash]
----
pveceph createosd /dev/sd[X]
----

If you want to use a dedicated SSD journal disk:

NOTE: In order to use a dedicated journal disk (SSD), the disk needs
to have a https://en.wikipedia.org/wiki/GUID_Partition_Table[GPT]
partition table. You can create this with `gdisk /dev/sd(x)`. If there
is no GPT, you cannot select the disk as journal. Currently the
journal size is fixed to 5 GB.

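
If the SSD does not yet carry a GPT, one non-interactive way to create an
empty GPT label is shown below (a sketch only; `/dev/sdb` is an example
device, and creating a new label destroys its existing partition table):

[source,bash]
----
# WARNING: wipes the existing partition table on the example device
parted --script /dev/sdb mklabel gpt
----

The OSD with its journal on the dedicated disk is then created as follows:
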
[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[X]
----

Example: Use /dev/sdf as the data disk (4TB) and /dev/sdb as the dedicated
SSD journal disk.

[source,bash]
----
pveceph createosd /dev/sdf -journal_dev /dev/sdb
----

This partitions the disk (data and journal partition), creates
filesystems and starts the OSD. Afterwards the OSD is running and fully
functional. Please create at least 12 OSDs, distributed evenly among
your nodes (4 OSDs on each node).

It should be noted that this command refuses to initialize a disk when
it detects existing data. So if you want to overwrite a disk, you should
remove the existing data first. You can do that using:

[source,bash]
----
ceph-disk zap /dev/sd[X]
----

You can create OSDs containing both journal and data partitions, or you
can place the journal on a dedicated SSD. Using an SSD journal disk is
highly recommended if you expect good performance.

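
After OSDs have been created on all nodes, you can verify how they are
distributed across the cluster and how much space is available (an optional
check using standard Ceph commands):

[source,bash]
----
# show all OSDs grouped by host, with their up/in status
ceph osd tree
# per-OSD and total utilization
ceph osd df
----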

[[pve_ceph_pools]]
Ceph Pools
----------

[thumbnail="gui-ceph-pools.png"]

The standard installation creates the pool 'rbd' by default; additional
pools can be created via the GUI.

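
If you prefer the command line, additional pools can also be created with the
usual Ceph tooling (a sketch; the pool name `mypool` and the placement group
count of 64 are example values you should adapt to your cluster size):

[source,bash]
----
# create a replicated pool with 64 placement groups (example values)
ceph osd pool create mypool 64
# keep three replicas of each object
ceph osd pool set mypool size 3
----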

Ceph Client
-----------

[thumbnail="gui-ceph-log.png"]

You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location.

NOTE: The file name needs to be `<storage_id>` + `.keyring` - `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`, which is
`my-ceph-storage` in the following example:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----

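
As an alternative to the GUI, the RBD storage itself can also be added on the
command line with `pvesm` (a sketch only; the storage ID `my-ceph-storage`,
the pool name and the monitor addresses are example values based on the
network used in this chapter, so adapt them to your setup):

[source,bash]
----
# add an RBD storage entry to /etc/pve/storage.cfg (example values)
pvesm add rbd my-ceph-storage --pool rbd \
    --monhost "10.10.10.1 10.10.10.2 10.10.10.3" --content images
# verify that the new storage is listed and active
pvesm status
----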

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]