[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Manage Ceph Services on Proxmox VE Nodes
========================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="screenshot/gui-ceph-status.png"]

{pve} unifies your compute and storage systems, i.e. you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storages
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, {pve} has the ability to run and manage Ceph storage directly
on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.

.Some advantages of Ceph on {pve} are:
- Easy setup and management with CLI and GUI support
- Thin provisioning
- Snapshots support
- Self healing
- Scalable to the exabyte level
- Setup pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on economical commodity hardware
- No need for hardware RAID controllers
- Open source

For small to mid sized deployments, it is possible to install a Ceph server for
RADOS Block Devices (RBD) directly on your {pve} cluster nodes, see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
hardware has plenty of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.

.Ceph consists of a couple of Daemons footnote:[Ceph intro http://docs.ceph.com/docs/luminous/start/intro/], for use as an RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)

TIP: We highly recommend that you get familiar with Ceph's architecture
footnote:[Ceph architecture http://docs.ceph.com/docs/luminous/architecture/]
and vocabulary
footnote:[Ceph glossary http://docs.ceph.com/docs/luminous/glossary].


Precondition
------------

To build a hyper-converged Proxmox + Ceph Cluster, there should be at least
three (preferably identical) servers for the setup.

Check also the recommendations from
http://docs.ceph.com/docs/luminous/start/hardware-recommendations/[Ceph's website].

.CPU
A higher CPU core frequency reduces latency and should be preferred. As a simple
rule of thumb, you should assign a CPU core (or thread) to each Ceph service to
provide enough resources for stable and durable Ceph performance.

.Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully monitored. In addition to the intended workload from virtual machines
and containers, Ceph needs enough memory available to provide good and stable
performance. As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory
will be used by an OSD. For example, a node with four 4 TiB OSDs should have
roughly 16 GiB reserved for the OSDs alone. OSD caching will use additional
memory.

.Network
We recommend a network bandwidth of at least 10 GbE, used exclusively for
Ceph. A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 GbE switches available.

The volume of traffic, especially during recovery, will interfere with other
services on the same network and may even break the {pve} cluster stack.

Further, estimate your bandwidth needs. While one HDD might not saturate a 1 Gb
link, multiple HDD OSDs per node can, and modern NVMe SSDs will even saturate
10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwidth
will ensure that it isn't your bottleneck and won't be anytime soon; 25, 40 or
even 100 Gbps are possible.

.Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, the recovery
might take long. It is recommended that you use SSDs instead of HDDs in small
setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. This fact and the
higher cost may make a xref:pve_ceph_device_classes[class based] separation of
pools appealing. Another possibility to speed up OSDs is to use a faster disk
as journal or DB/**W**rite-**A**head-**L**og device, see
xref:pve_ceph_osds[creating Ceph OSDs]. If a faster disk is used for multiple
OSDs, a proper balance between OSD and WAL / DB (or journal) disk must be
selected, otherwise the faster disk becomes the bottleneck for all linked OSDs.

Aside from the disk type, Ceph performs best with an evenly sized and
distributed amount of disks per node. For example, 4 x 500 GB disks in each
node is better than a mixed setup with a single 1 TB and three 250 GB disks.

One also needs to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.

.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn’t improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph use case and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.

WARNING: Avoid RAID controllers; use a host bus adapter (HBA) instead.

NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to adapt them to your specific needs,
test your setup and monitor health and performance continuously.

[[pve_ceph_install_wizard]]
Initial Ceph installation & configuration
-----------------------------------------

[thumbnail="screenshot/gui-node-ceph-install.png"]

With {pve} you have the benefit of an easy-to-use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will be
offered to install it now.

The wizard is divided into different sections, where each needs to be
finished successfully in order to use Ceph. After starting the installation,
the wizard will download and install all required packages from {pve}'s Ceph
repository.

After finishing the first step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
automatically to all remaining cluster members through {pve}'s clustered
xref:chapter_pmxcfs[configuration file system (pmxcfs)].

The configuration step includes the following settings:

* *Public Network:* You should set up a dedicated network for Ceph. This
setting is required. Separating your Ceph traffic is highly recommended,
because if it is not done, other latency-dependent services, e.g. cluster
communication, could interfere and decrease Ceph's performance.

[thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]

* *Cluster Network:* As an optional step you can go even further and
separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
as well. This will relieve the public network and could lead to
significant performance improvements, especially in big clusters.

You have two more options which are considered advanced and therefore
should only be changed if you are an expert.

* *Number of replicas*: Defines how often an object is replicated.
* *Minimum replicas*: Defines the minimum number of required replicas
 for I/O to be marked as complete.

Additionally, you need to choose your first monitor node; this is required.

That's it, you should see a success page as the last step, with further
instructions on how to proceed. You are now prepared to start using Ceph,
even though you will need to create additional xref:pve_ceph_monitors[monitors],
create some xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].

The rest of this chapter will guide you on how to get the most out of
your {pve} based Ceph setup. This includes the aforementioned topics and
more, such as xref:pveceph_fs[CephFS], which is a very handy addition to your
new Ceph cluster.

[[pve_ceph_install]]
Installation of Ceph Packages
-----------------------------
Use the {pve} Ceph installation wizard (recommended) or run the following
command on each node:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.


Create initial Ceph configuration
---------------------------------

[thumbnail="screenshot/gui-ceph-config.png"]

Use the {pve} Ceph installation wizard (recommended) or run the
following command on one node:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. That file is automatically distributed to
all {pve} nodes by using xref:chapter_pmxcfs[pmxcfs]. The command also
creates a symbolic link from `/etc/ceph/ceph.conf` pointing to that file.
So you can simply run Ceph commands without the need to specify a
configuration file.
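
For example, you can verify on a node that the symlink is in place and inspect
the generated file (a minimal sketch; paths are as described above):

[source,bash]
----
# /etc/ceph/ceph.conf should be a symlink to the pmxcfs-managed file
ls -l /etc/ceph/ceph.conf
# show the initial configuration, including the configured Ceph network
cat /etc/pve/ceph.conf
----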


[[pve_ceph_monitors]]
Ceph Monitor
------------
The Ceph Monitor (MON)
footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
maintains a master copy of the cluster map. For high availability, you need to
have at least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors as long
as your cluster is small to midsize; only really large clusters will
need more than that.


Create Monitors
~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-ceph-monitor.png"]

On each node where you want to place a monitor (three monitors are recommended),
create it by using the 'Ceph -> Monitor' tab in the GUI or run:

[source,bash]
----
pveceph mon create
----

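
To check that the new monitor has joined the quorum, you can, for example,
query the monitor map (a minimal sketch; any node with a working Ceph
configuration will do):

[source,bash]
----
# shows the current monitors and which of them are in quorum
ceph mon stat
----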

Destroy Monitors
~~~~~~~~~~~~~~~~

To remove a Ceph Monitor via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the MON
is running. Then execute the following command:

[source,bash]
----
pveceph mon destroy
----

NOTE: At least three Monitors are needed for quorum.


[[pve_ceph_manager]]
Ceph Manager
------------
The Manager daemon runs alongside the monitors. It provides an interface to
monitor the cluster. Since the Ceph Luminous release, at least one ceph-mgr
footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon is
required.

Create Manager
~~~~~~~~~~~~~~

Multiple Managers can be installed, but at any time only one Manager is active.

[source,bash]
----
pveceph mgr create
----

NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.
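
A quick way to check which Manager is currently active and which ones are on
standby (a minimal sketch; the exact output format varies by Ceph release):

[source,bash]
----
# the 'mgr:' line of the status output lists the active and standby managers
ceph -s
----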


Destroy Manager
~~~~~~~~~~~~~~~

To remove a Ceph Manager via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the Manager and click the
**Destroy** button.

To remove a Ceph Manager via the CLI, first connect to the node on which the
Manager is running. Then execute the following command:

[source,bash]
----
pveceph mgr destroy
----

NOTE: A Ceph cluster can function without a Manager, but certain functions like
the cluster status or usage require a running Manager.


[[pve_ceph_osds]]
Ceph OSDs
---------
Ceph **O**bject **S**torage **D**aemons store objects for Ceph over the
network. It is recommended to use one OSD per physical disk.

NOTE: By default an object is 4 MiB in size.

Create OSDs
~~~~~~~~~~~

[thumbnail="screenshot/gui-ceph-osd-status.png"]

You can create OSDs via the GUI, or via the CLI as follows:

[source,bash]
----
pveceph osd create /dev/sd[X]
----

TIP: We recommend a Ceph cluster size of at least 12 OSDs, distributed
evenly among your, at least three, nodes (4 OSDs on each node).

If the disk was in use before (e.g. ZFS/RAID/OSD), the following command should
be sufficient to remove the partition table, boot sector and any other OSD
leftover.

[source,bash]
----
ceph-volume lvm zap /dev/sd[X] --destroy
----

WARNING: The above command will destroy data on the disk!
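
Before zapping, it can help to double-check which device you are about to wipe
(a minimal sketch; `lsblk` and `ceph-volume lvm list` are standard tools, the
device to wipe is just an example):

[source,bash]
----
# list block devices with their sizes and existing partitions/LVs
lsblk
# list devices that already back a Ceph OSD on this node
ceph-volume lvm list
----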

.Ceph Bluestore

Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced, the so-called Bluestore
footnote:[Ceph Bluestore http://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs since Ceph Luminous.

[source,bash]
----
pveceph osd create /dev/sd[X]
----

.Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
not specified separately.

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
----

You can directly choose the size for those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used:

* bluestore_block_{db,wal}_size from ceph configuration...
** ... database, section 'osd'
** ... database, section 'global'
** ... file, section 'osd'
** ... file, section 'global'
* 10% (DB)/1% (WAL) of OSD size
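
For example, to give every newly created OSD a fixed DB size via the
configuration file, you could add an entry like the following to
`/etc/pve/ceph.conf` (a minimal sketch; the value is in bytes, here roughly
50 GiB, and should be adjusted to your setup):

----
[osd]
     bluestore_block_db_size = 53687091200
----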

NOTE: The DB stores BlueStore’s internal metadata and the WAL is BlueStore’s
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.


.Ceph Filestore

Before Ceph Luminous, Filestore was used as the default storage type for Ceph
OSDs. Starting with Ceph Nautilus, {pve} does not support creating such OSDs
with 'pveceph' anymore. If you still want to create filestore OSDs, use
'ceph-volume' directly.

[source,bash]
----
ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
----

Destroy OSDs
~~~~~~~~~~~~

To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
to the **Ceph -> OSD** panel. Select the OSD to destroy. Next click the **OUT**
button. Once the OSD status has changed from `in` to `out`, click the **STOP**
button. As soon as the status has changed from `up` to `down`, select **Destroy**
from the `More` drop-down menu.

To remove an OSD via the CLI, run the following commands.

[source,bash]
----
ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
----

NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Up to this point, no
data is lost.

The following command destroys the OSD. Specify the '-cleanup' option to
additionally destroy the partition table.

[source,bash]
----
pveceph osd destroy <ID>
----

WARNING: The above command will destroy data on the disk!


[[pve_ceph_pools]]
Ceph Pools
----------
A pool is a logical group for storing objects. It holds **P**lacement
**G**roups (`PG`, `pg_num`), a collection of objects.


Create Pools
~~~~~~~~~~~~

[thumbnail="screenshot/gui-ceph-pools.png"]

When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas** for serving objects in a degraded
state.

NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
'HEALTH_WARN' if you have too few or too many PGs in your cluster.

It is advised to calculate the PG number based on your setup; you can find
the formula and the PG calculator footnote:[PG calculator
http://ceph.com/pgcalc/] online. While PGs can be increased later on, they can
never be decreased.
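
If you do need to raise the PG count of an existing pool later, this can be
done with the standard Ceph commands (a minimal sketch; `<name>` and the new
count are examples, and `pgp_num` should usually follow `pg_num`):

[source,bash]
----
ceph osd pool set <name> pg_num 256
ceph osd pool set <name> pgp_num 256
----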
1d54c3b4
AA
455
456
457You can create pools through command line or on the GUI on each PVE host under
458**Ceph -> Pools**.
459
460[source,bash]
461----
d1fdb121 462pveceph pool create <name>
1d54c3b4
AA
463----
464
620d6725
FE
465If you would like to automatically also get a storage definition for your pool,
466mark the checkbox "Add storages" in the GUI or use the command line option
467'--add_storages' at pool creation.
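
For example, to create a pool for VM disks and register it as a {pve} storage
in one go (a minimal sketch; the pool name `vm-pool` is just an example):

[source,bash]
----
pveceph pool create vm-pool --add_storages
----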

Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
http://docs.ceph.com/docs/luminous/rados/operations/pools/]
manual.


Destroy Pools
~~~~~~~~~~~~~

To destroy a pool via the GUI, select a node in the tree view and go to the
**Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
button. To confirm the destruction of the pool you need to enter the pool name.

Run the following command to destroy a pool. Specify the '-remove_storages'
option to also remove the associated storage.

[source,bash]
----
pveceph pool destroy <name>
----

NOTE: Deleting the data of a pool is a background task and can take some time.
You will notice that the data usage in the cluster is decreasing.

[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------
The foundation of Ceph is its algorithm, **C**ontrolled **R**eplication
**U**nder **S**calable **H**ashing
(CRUSH footnote:[CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf]).

CRUSH calculates where to store data to and retrieve it from; this has the
advantage that no central index service is needed. CRUSH works with a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
section CRUSH map footnote:[CRUSH map http://docs.ceph.com/docs/luminous/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g. failure domains), while maintaining the desired
distribution.

A common use case is to use different classes of disks for different Ceph pools.
For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.

The device classes can be seen in the 'ceph osd tree' output. These classes
represent their own root bucket, which can be seen with the below command.

[source, bash]
----
ceph osd crush tree --show-shadow
----

Example output from the above command:

[source, bash]
----
ID CLASS WEIGHT TYPE NAME
-16 nvme 2.18307 root default~nvme
-13 nvme 0.72769 host sumi1~nvme
 12 nvme 0.72769 osd.12
-14 nvme 0.72769 host sumi2~nvme
 13 nvme 0.72769 osd.13
-15 nvme 0.72769 host sumi3~nvme
 14 nvme 0.72769 osd.14
 -1 7.70544 root default
 -3 2.56848 host sumi1
 12 nvme 0.72769 osd.12
 -5 2.56848 host sumi2
 13 nvme 0.72769 osd.13
 -7 2.56848 host sumi3
 14 nvme 0.72769 osd.14
----

To let a pool distribute its objects only on a specific device class, you need
to create a ruleset with the specific class first.

[source, bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----

[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g. nvme, ssd, hdd)
|===

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

[source, bash]
----
ceph osd pool set <pool-name> crush_rule <rule-name>
----
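
As a concrete illustration (a hypothetical sketch; the rule name `ssd-only`
and the pool name `vm-pool` are examples), a pool can be pinned to SSD-backed
OSDs like this:

[source, bash]
----
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool set vm-pool crush_rule ssd-only
----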

TIP: If the pool already contains objects, all of these have to be moved
accordingly. Depending on your setup, this may introduce a big performance hit
on your cluster. As an alternative, you can create a new pool and move disks
separately.


Ceph Client
-----------

[thumbnail="screenshot/gui-ceph-log.png"]

You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes themselves, then this will be
done automatically.

NOTE: The file name needs to be `<storage_id>` + `.keyring` - `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg` which is
`my-ceph-storage` in the following example:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----
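
The matching storage definition in `/etc/pve/storage.cfg` could then look
roughly like this (a minimal sketch; the monitor addresses and pool name are
examples and depend on your external cluster):

----
rbd: my-ceph-storage
     monhost 10.10.10.1 10.10.10.2 10.10.10.3
     pool rbd
     content images
     username admin
----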

[[pveceph_fs]]
CephFS
------

Ceph also provides a filesystem, running on top of the same object storage as
RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map
the RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant, replicated filesystem. This allows one to have a clustered,
highly available, shared filesystem in an easy way if Ceph is already used. Its
Metadata Servers guarantee that files get balanced out over the whole Ceph
cluster; this way even high load will not overload a single host, which can be
an issue with traditional shared filesystem approaches, like `NFS`, for
example.

[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]

{pve} supports both using an existing xref:storage_cephfs[CephFS as storage]
to save backups, ISO files or container templates, and creating a
hyper-converged CephFS itself.


[[pveceph_fs_mds]]
Metadata Server (MDS)
~~~~~~~~~~~~~~~~~~~~~

CephFS needs at least one Metadata Server to be configured and running to be
able to work. One can simply create one through the {pve} web GUI's `Node ->
CephFS` panel or on the command line with:

----
pveceph mds create
----

Multiple metadata servers can be created in a cluster. But with the default
settings only one can be active at any time. If an MDS, or its node, becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
One can speed up the hand-over between the active and a standby MDS by using
the 'hotstandby' parameter option on create, or if you have already created it
you may set/add:

----
mds standby replay = true
----

in the respective MDS section of ceph.conf. With this enabled, this specific MDS
will always poll the active one, so that it can take over faster as it is in a
`warm` state. But naturally, the active polling will cause some additional
performance impact on your system and active `MDS`.
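
For example, such an entry in `/etc/pve/ceph.conf` could look like this
(a minimal sketch; the MDS ID `pve-node1` is hypothetical and depends on how
the MDS was named at creation):

----
[mds.pve-node1]
     mds standby replay = true
----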

.Multiple Active MDS

Since Luminous (12.2.x) you can also have multiple active metadata servers
running, but this is normally only useful for a high count of parallel clients,
as otherwise the `MDS` is seldom the bottleneck. If you want to set this up,
please refer to the Ceph documentation. footnote:[Configuring multiple active MDS
daemons http://docs.ceph.com/docs/luminous/cephfs/multimds/]

[[pveceph_fs_create]]
Create CephFS
~~~~~~~~~~~~~

With {pve}'s CephFS integration, you can create a CephFS easily over the
Web GUI, the CLI or an external API interface. Some prerequisites are required
for this to work:

.Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages], if this was already done some
 time ago you might want to rerun it on an up-to-date system to ensure that
 also all CephFS related packages get installed.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]

After this is all checked and done, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command line tool `pveceph`,
for example with:

----
pveceph fs create --pg_num 128 --add-storage
----

This creates a CephFS named `'cephfs'' using a pool for its data named
`'cephfs_data'' with `128` placement groups and a pool for its metadata named
`'cephfs_metadata'' with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding a fitting placement group
number (`pg_num`) for your setup footnote:[Ceph Placement Groups
http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/].
Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
storage configuration after it was created successfully.

Destroy CephFS
~~~~~~~~~~~~~~

WARNING: Destroying a CephFS will render all its data unusable; this cannot be
undone!

If you really want to destroy an existing CephFS, you first need to stop, or
destroy, all metadata servers (`MDS`). You can destroy them either over the Web
GUI or the command line interface, with:

----
pveceph mds destroy NAME
----
on each {pve} node hosting an MDS daemon.

Then, you can remove (destroy) the CephFS by issuing:

----
ceph fs rm NAME --yes-i-really-mean-it
----
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools; this can be done either over the Web GUI or the CLI
with:

----
pveceph pool destroy NAME
----


Ceph monitoring and troubleshooting
-----------------------------------
A good start is to continuously monitor the Ceph health from the start of the
initial deployment. This can be done either through the Ceph tools themselves,
or by accessing the status through the {pve} link:api-viewer/index.html[API].

The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.

----
# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
----
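
If the status only tells you that something is wrong, `ceph health detail`
(a minimal sketch; a standard Ceph CLI command) usually points to the affected
component:

----
pve# ceph health detail
----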

To get a more detailed view, every Ceph service has a log file under
`/var/log/ceph/`, and if there is not enough detail, the log level can be
adjusted footnote:[Ceph log and debugging http://docs.ceph.com/docs/luminous/rados/troubleshooting/log-and-debug/].

You can find more information about troubleshooting
footnote:[Ceph troubleshooting http://docs.ceph.com/docs/luminous/rados/troubleshooting/]
a Ceph cluster on the official website.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]