[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Manage Ceph Services on Proxmox VE Nodes
========================================
:pve-toplevel:
endif::manvolnum[]

{pve} unifies your compute and storage systems, i.e., you can use the
same physical nodes within a cluster for both computing (processing
VMs and containers) and replicated storage. The traditional silos of
compute and storage resources can be wrapped up into a single
hyper-converged appliance. Separate storage area networks (SANs) and
network-attached storage (NAS) connections are no longer needed. With
the integration of Ceph, an open source software-defined storage
platform, {pve} can run and manage Ceph storage directly on the
hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability. For smaller
deployments, it is possible to install a Ceph server for RADOS Block
Devices (RBD) directly on your {pve} cluster nodes, see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
hardware has plenty of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.
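
'pveceph' is a regular {pve} command line tool; as a quick orientation, its
available subcommands can be listed on any node (shown only as an optional
first step, assuming a standard {pve} installation):

[source,bash]
----
pveceph help
----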


Precondition
------------

To build a Proxmox Ceph cluster, there should be at least three
(preferably identical) servers for the setup.

A 10Gb network, exclusively used for Ceph, is recommended. A meshed
network setup is also an option if there are no 10Gb switches
available, see {webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki].

Also check the hardware recommendations on
http://docs.ceph.com/docs/master/start/hardware-recommendations/[Ceph's website].


Installation of Ceph Packages
-----------------------------

On each node run the installation script as follows:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.
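
To verify the result, you can, for example, inspect the generated repository
list and query the installed Ceph version (an optional sanity check, not a
required step):

[source,bash]
----
cat /etc/apt/sources.list.d/ceph.list
ceph --version
----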


Creating initial Ceph configuration
-----------------------------------

After installing the packages, you need to create an initial Ceph
configuration on just one node, based on the network dedicated to Ceph
(`10.10.10.0/24` in the following example):

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial config at `/etc/pve/ceph.conf`. That file is
automatically distributed to all {pve} nodes by using
xref:chapter_pmxcfs[pmxcfs]. The command also creates a symbolic link
from `/etc/ceph/ceph.conf` pointing to that file, so you can simply run
Ceph commands without the need to specify a configuration file.
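
This can be checked on any node, for example by confirming that the symbolic
link points to the cluster-wide configuration file (again an optional sanity
check):

[source,bash]
----
ls -l /etc/ceph/ceph.conf
cat /etc/pve/ceph.conf
----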


Creating Ceph Monitors
----------------------

On each node where you want to place a monitor (three monitors are
recommended), create it by using the "Ceph" item in the GUI or run:

[source,bash]
----
pveceph createmon
----
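
Once the monitors are created, the overall cluster state, including the
monitor quorum, can be checked from any node with a read-only status query
(optional, for verification only):

[source,bash]
----
ceph -s    # shows health, monitor quorum and OSD summary
----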


Creating Ceph OSDs
------------------

You can create OSDs via the GUI, or via the CLI as follows:

[source,bash]
----
pveceph createosd /dev/sd[X]
----
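
If you are unsure which devices are available and unused, it can help to list
the block devices on the node first; this is only an illustration using
standard tools and not a required step:

[source,bash]
----
lsblk
ceph-disk list
----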

If you want to use a dedicated SSD journal disk, pass the journal device
explicitly, as shown in the commands below:

NOTE: In order to use a dedicated journal disk (SSD), the disk needs
to have a https://en.wikipedia.org/wiki/GUID_Partition_Table[GPT]
partition table. You can create this with `gdisk /dev/sd(x)`. If there
is no GPT, you cannot select the disk as journal. Currently the
journal size is fixed to 5 GB.
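
The interactive `gdisk` session can also be replaced by a non-interactive
call. The following is only a sketch, assuming the `sgdisk` utility (shipped
in the same Debian package as `gdisk`) and using `/dev/sdb` purely as a
placeholder for an empty journal disk - adapt the device name to your system:

[source,bash]
----
# CAUTION: both commands destroy any existing partition table on the device
sgdisk --zap-all /dev/sdb   # wipe old MBR/GPT structures
sgdisk --clear /dev/sdb     # create a fresh, empty GPT
----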

[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[X]
----

Example: Use /dev/sdf as the data disk (4TB) and /dev/sdb as the dedicated
SSD journal disk.

[source,bash]
----
pveceph createosd /dev/sdf -journal_dev /dev/sdb
----

This partitions the disk (into a data and a journal partition), creates the
filesystems and starts the OSD. Afterwards it is running and fully
functional. Please create at least 12 OSDs, distributed among your nodes
(4 OSDs on each node).
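
After the OSDs have been created, their placement and status can be reviewed
from any node (again, read-only status queries for verification only):

[source,bash]
----
ceph osd tree
ceph osd stat
----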

It should be noted that this command refuses to initialize a disk when it
detects existing data. So if you want to overwrite a disk, you should remove
the existing data first. You can do that using:

[source,bash]
----
ceph-disk zap /dev/sd[X]
----

You can create OSDs containing both journal and data partitions, or you can
place the journal on a dedicated SSD. Using an SSD journal disk is highly
recommended to achieve good performance.


Ceph Pools
----------

The standard installation creates the pool 'rbd' by default. Additional
pools can be created via the GUI.
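
The GUI is the intended way to manage pools. Purely as an illustration, a
pool can also be created and sized with the Ceph CLI directly; the pool name
`vm-storage`, the placement group count of 64 and the replica size of 3 below
are example values only, not recommendations:

[source,bash]
----
ceph osd pool create vm-storage 64    # 64 placement groups (example value)
ceph osd pool set vm-storage size 3   # keep 3 replicas of each object
----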


Ceph Client
-----------

You can then configure {pve} to use such pools to store VM or container
images. Simply use the GUI to add a new `RBD` storage (see section
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location.

NOTE: The file name needs to be `<storage_id>` + `.keyring` - `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`, which is
`my-ceph-storage` in the following example:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----
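
For reference, a matching RBD storage definition in `/etc/pve/storage.cfg`
could look roughly like the following sketch; the storage ID, monitor
addresses and pool name are placeholders and depend on your setup:

----
rbd: my-ceph-storage
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool rbd
        content images
        username admin
----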


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]