[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
pveceph - Manage Ceph Services on Proxmox VE Nodes
==================================================
endif::manvolnum[]

{pve} unifies your compute and storage systems, i.e. you can use the
same physical nodes within a cluster for both computing (processing
VMs and containers) and replicated storage. The traditional silos of
compute and storage resources can be wrapped up into a single
hyper-converged appliance. Separate storage networks (SANs) and
connections via network (NAS) disappear. With the integration of Ceph,
an open source software-defined storage platform, {pve} has the
ability to run and manage Ceph storage directly on the hypervisor
nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability. For smaller
deployments, it is possible to install a Ceph server for RADOS Block
Devices (RBD) directly on your {pve} cluster nodes, see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
hardware has plenty of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.


Precondition
------------

To build a Proxmox Ceph Cluster, there should be at least three servers,
preferably identical, for the setup.

A 10Gb network, exclusively used for Ceph, is recommended. A meshed
network setup is also an option if there are no 10Gb switches
available, see the {webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki].

Also check the recommendations from
http://docs.ceph.com/docs/master/start/hardware-recommendations/[Ceph's website].


Installation of Ceph Packages
-----------------------------

On each node run the installation script as follows:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.
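
To double-check the result, you can inspect the repository entry that was
added and the Ceph version that got installed (the exact release depends on
what `pveceph install` selected for your {pve} version):

[source,bash]
----
# repository entry created by pveceph install
cat /etc/apt/sources.list.d/ceph.list
# confirm that the Ceph packages are available
ceph --version
----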


Creating initial Ceph configuration
-----------------------------------

After installation of packages, you need to create an initial Ceph
configuration on just one node, based on your network (`10.10.10.0/24`
in the following example) dedicated for Ceph:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial config at `/etc/pve/ceph.conf`. That file is
automatically distributed to all {pve} nodes by using
xref:chapter_pmxcfs[pmxcfs]. The command also creates a symbolic link
from `/etc/ceph/ceph.conf` pointing to that file, so you can simply run
Ceph commands without the need to specify a configuration file.
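
You can verify this on any node; both paths should lead to the same shared
configuration (just a sanity check, not a required step):

[source,bash]
----
# symbolic link created by pveceph init
ls -l /etc/ceph/ceph.conf
# the actual file, distributed cluster-wide via pmxcfs
cat /etc/pve/ceph.conf
----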


Creating Ceph Monitors
----------------------

On each node where a monitor is requested (three monitors are recommended),
create it by using the "Ceph" item in the GUI or run:


[source,bash]
----
pveceph createmon
----
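
Once the monitors are running, you can check that they have formed a quorum,
for example with:

[source,bash]
----
# list monitors and show the current quorum
ceph mon stat
# overall cluster status overview
ceph -s
----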


Creating Ceph OSDs
------------------

Create OSDs via the GUI, or via the CLI as follows:

[source,bash]
----
pveceph createosd /dev/sd[X]
----

If you want to use a dedicated SSD journal disk:

NOTE: In order to use a dedicated journal disk (SSD), the disk needs
to have a https://en.wikipedia.org/wiki/GUID_Partition_Table[GPT]
partition table. You can create this with `gdisk /dev/sd[X]`. If there
is no GPT, you cannot select the disk as journal. Currently the
journal size is fixed to 5 GB.
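
If you prefer a non-interactive command, an empty GPT label can also be
written with `parted`, for instance (careful, this wipes any existing
partition table on that device):

[source,bash]
----
# create a new, empty GPT partition table on the future journal disk
parted -s /dev/sd[X] mklabel gpt
----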

[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[X]
----

Example: Use /dev/sdf as the data disk (4TB) and /dev/sdb as the dedicated
SSD journal disk.

[source,bash]
----
pveceph createosd /dev/sdf -journal_dev /dev/sdb
----

This partitions the disk (data and journal partition), creates
filesystems and starts the OSD. Afterwards it is running and fully
functional. Please create at least 12 OSDs, distributed among your
nodes (4 OSDs on each node).
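
You can check how the OSDs are distributed across your nodes at any time,
for example with:

[source,bash]
----
# CRUSH tree showing each host and its OSDs
ceph osd tree
# per-OSD utilization summary
ceph osd df
----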

It should be noted that this command refuses to initialize a disk when
it detects existing data. So if you want to overwrite a disk, you
should remove existing data first. You can do that using:

[source,bash]
----
ceph-disk zap /dev/sd[X]
----

You can create OSDs containing both journal and data partitions, or you
can place the journal on a dedicated SSD. Using an SSD journal disk is
highly recommended if you expect good performance.


Ceph Pools
----------

The standard installation creates the pool 'rbd' by default; additional
pools can be created via the GUI.
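
If you prefer the command line, a pool can also be created with the plain
Ceph tooling; a minimal sketch (the pool name and placement group count are
just examples, choose a PG count that matches your number of OSDs):

[source,bash]
----
# create a pool named 'vm-disks' with 64 placement groups
ceph osd pool create vm-disks 64
# list all pools known to the cluster
ceph osd lspools
----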


Ceph Client
-----------

You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location.

NOTE: The file name needs to be `<storage_id>` + `.keyring` - `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg` which is
`my-ceph-storage` in the following example:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----


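For reference, the matching entry in `/etc/pve/storage.cfg` might look roughly
like the following (monitor addresses, pool name and options are placeholders
for your own setup):

----
rbd: my-ceph-storage
	monhost 10.10.10.1 10.10.10.2 10.10.10.3
	pool rbd
	content images
	username admin
----
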
ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]