[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Manage Ceph Services on Proxmox VE Nodes
========================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="gui-ceph-status.png"]

{pve} unifies your compute and storage systems, i.e. you can use the
same physical nodes within a cluster for both computing (processing
VMs and containers) and replicated storage. The traditional silos of
compute and storage resources can be wrapped up into a single
hyper-converged appliance. Separate storage networks (SANs) and
network attached storage (NAS) connections disappear. With the
integration of Ceph, an open source software-defined storage platform,
{pve} has the ability to run and manage Ceph storage directly on the
hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability. For smaller
deployments, it is possible to install a Ceph server for RADOS Block
Devices (RBD) directly on your {pve} cluster nodes, see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
hardware has plenty of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.


Precondition
------------

To build a Proxmox Ceph Cluster, there should be at least three,
preferably identical, servers for the setup.

A 10Gb network, exclusively used for Ceph, is recommended. A meshed
network setup is also an option if there are no 10Gb switches
available, see the {webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki].

Also check the recommendations from
http://docs.ceph.com/docs/master/start/hardware-recommendations/[Ceph's website].


Installation of Ceph Packages
-----------------------------

On each node run the installation script as follows:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.

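If you want to double-check the result, the repository entry and the
installed packages can be inspected with the usual Debian tooling (an
optional sanity check, not part of `pveceph` itself):

[source,bash]
----
# show the repository entry created by 'pveceph install'
cat /etc/apt/sources.list.d/ceph.list
# list the Ceph packages that were pulled in
dpkg -l | grep -i ceph
----
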

Creating initial Ceph configuration
-----------------------------------

[thumbnail="gui-ceph-config.png"]

After installing the packages, you need to create an initial Ceph
configuration on just one node, based on the network dedicated to Ceph
(`10.10.10.0/24` in the following example):

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial config at `/etc/pve/ceph.conf`. That file is
automatically distributed to all {pve} nodes using
xref:chapter_pmxcfs[pmxcfs]. The command also creates a symbolic link
from `/etc/ceph/ceph.conf` pointing to that file, so you can simply run
Ceph commands without the need to specify a configuration file.

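If you want to verify the result, the following optional check shows
that the per-node symlink and the cluster-wide file line up:

[source,bash]
----
# the symlink created by 'pveceph init' should point at the cluster-wide file
ls -l /etc/ceph/ceph.conf
# the generated configuration, distributed to all nodes via pmxcfs
cat /etc/pve/ceph.conf
----
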
[[pve_ceph_monitors]]
Creating Ceph Monitors
----------------------

[thumbnail="gui-ceph-monitor.png"]

On each node where a monitor is requested (three monitors are
recommended), create it by using the "Ceph" item in the GUI or by
running:

[source,bash]
----
pveceph createmon
----

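Once the monitors are up, you can check that they have formed a quorum
with the standard Ceph tools (again an optional sanity check, not part
of `pveceph`):

[source,bash]
----
# overall cluster state, including the monitor quorum
ceph -s
# just the monitor map and quorum membership
ceph mon stat
----
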
[[pve_ceph_osds]]
Creating Ceph OSDs
------------------

[thumbnail="gui-ceph-osd-status.png"]

Create OSDs via the GUI or via the CLI as follows:

[source,bash]
----
pveceph createosd /dev/sd[X]
----

If you want to use a dedicated SSD journal disk:

NOTE: In order to use a dedicated journal disk (SSD), the disk needs
to have a https://en.wikipedia.org/wiki/GUID_Partition_Table[GPT]
partition table. You can create this with `gdisk /dev/sd[X]`. If there
is no GPT, you cannot select the disk as journal. Currently the
journal size is fixed to 5 GB.

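If the journal disk does not yet have a GPT, one possible way to create
an empty one non-interactively is `sgdisk` (shipped in the same package
as `gdisk`); this is only a suggestion and it wipes any existing
partition table on that device, so double-check the device name before
running the command below:

[source,bash]
----
# create a new, empty GPT on the journal SSD (destroys existing partition data!)
sgdisk --clear /dev/sd[X]
----
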
[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[X]
----

Example: Use /dev/sdf as the data disk (4TB) and /dev/sdb as the
dedicated SSD journal disk.

[source,bash]
----
pveceph createosd /dev/sdf -journal_dev /dev/sdb
----

This partitions the disk (data and journal partition), creates the
filesystems and starts the OSD. Afterwards it is running and fully
functional. Please create at least 12 OSDs, distributed among your
nodes (4 OSDs on each node).

It should be noted that this command refuses to initialize a disk when
it detects existing data. So if you want to overwrite a disk, you
should remove the existing data first. You can do that using:

[source,bash]
----
ceph-disk zap /dev/sd[X]
----

You can create OSDs containing both journal and data partitions, or you
can place the journal on a dedicated SSD. Using an SSD journal disk is
highly recommended to achieve good performance.

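After the OSDs have been created on all nodes, it is worth confirming
that they joined the cluster and report as `up` and `in`. The commands
below are standard Ceph tooling, not part of `pveceph`:

[source,bash]
----
# one OSD entry per disk, grouped by node, with their up/in state
ceph osd tree
# overall health; all placement groups should eventually become active+clean
ceph -s
----
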

[[pve_ceph_pools]]
Ceph Pools
----------

[thumbnail="gui-ceph-pools.png"]

The standard installation creates the pool 'rbd' by default; additional
pools can be created via the GUI.

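Pools can also be created on the command line with the regular Ceph
tools; the pool name and placement group count below are only examples
and should be adapted to the size of your cluster:

[source,bash]
----
# create an additional pool with 64 placement groups (example values)
ceph osd pool create mypool 64
----
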

Ceph Client
-----------

[thumbnail="gui-ceph-log.png"]

You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location.

NOTE: The file name needs to be `<storage_id>` + `.keyring`, where
`<storage_id>` is the expression after 'rbd:' in `/etc/pve/storage.cfg`
(`my-ceph-storage` in the following example):

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----
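
For reference, the matching storage definition in `/etc/pve/storage.cfg`
could look roughly like the following; the monitor addresses and pool
name are placeholders and depend on your setup:

----
rbd: my-ceph-storage
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool rbd
        content images
        username admin
----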


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]