[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Manage Ceph Services on Proxmox VE Nodes
========================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="screenshot/gui-ceph-status.png"]

{pve} unifies your compute and storage systems, i.e. you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storages
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, {pve} has the ability to run and manage Ceph storage directly
on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.

.Some advantages of Ceph on {pve} are:
- Easy setup and management with CLI and GUI support
- Thin provisioning
- Snapshot support
- Self healing
- Scalable to the exabyte level
- Setup pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on economical commodity hardware
- No need for hardware RAID controllers
- Open source

For small to mid sized deployments, it is possible to install a Ceph server for
RADOS Block Devices (RBD) directly on your {pve} cluster nodes, see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
hardware has plenty of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.

.Ceph consists of a couple of Daemons footnote:[Ceph intro http://docs.ceph.com/docs/luminous/start/intro/], for use as an RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)

TIP: We highly recommend getting familiar with Ceph's architecture
footnote:[Ceph architecture http://docs.ceph.com/docs/luminous/architecture/]
and vocabulary
footnote:[Ceph glossary http://docs.ceph.com/docs/luminous/glossary].


Precondition
------------

To build a hyper-converged Proxmox + Ceph Cluster, there should be at least
three (preferably) identical servers for the setup.

Check also the recommendations from
http://docs.ceph.com/docs/luminous/start/hardware-recommendations/[Ceph's website].

.CPU
A higher CPU core frequency reduces latency and should be preferred. As a simple
rule of thumb, you should assign a CPU core (or thread) to each Ceph service to
provide enough resources for stable and durable Ceph performance.

.Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully monitored. In addition to the intended workload from virtual machines
and containers, Ceph needs enough memory available to provide good and stable
performance. As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory
will be used by an OSD. OSD caching will use additional memory.
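
As a rough, illustrative calculation based on this rule of thumb (your actual
requirements may differ):

----
# Hypothetical node with 4 OSDs of 4 TiB each:
#   4 OSDs x 4 TiB x 1 GiB RAM per TiB  =  ~16 GiB RAM for the OSDs alone,
# in addition to the memory needed by VMs, containers and the OS itself.
----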

.Network
We recommend a network bandwidth of at least 10 GbE or more, which is used
exclusively for Ceph. A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 GbE switches available.

The volume of traffic, especially during recovery, will interfere with other
services on the same network and may even break the {pve} cluster stack.

Furthermore, estimate your bandwidth needs. While one HDD might not saturate a 1 Gb
link, multiple HDD OSDs per node can, and modern NVMe SSDs will even saturate
10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwidth
will ensure that it isn't your bottleneck and won't be anytime soon; 25, 40 or
even 100 Gbps are possible.

.Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
might take a long time. It is recommended that you use SSDs instead of HDDs in small
setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. This fact and the
higher cost may make a xref:pve_ceph_device_classes[class based] separation of
pools appealing. Another possibility to speed up OSDs is to use a faster disk
as journal or DB/**W**rite-**A**head-**L**og device, see
xref:pve_ceph_osds[creating Ceph OSDs]. If a faster disk is used for multiple
OSDs, a proper balance between OSD and WAL / DB (or journal) disk must be
selected, otherwise the faster disk becomes the bottleneck for all linked OSDs.

Aside from the disk type, Ceph performs best with an evenly sized and distributed
amount of disks per node. For example, 4 x 500 GB disks within each node are
better than a mixed setup with a single 1 TB and three 250 GB disks.

One also needs to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.

.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn’t improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph use case and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.

WARNING: Avoid RAID controllers; use host bus adapters (HBA) instead.

NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to adapt them to your specific needs,
to test your setup and to monitor health and performance continuously.

[[pve_ceph_install_wizard]]
Initial Ceph installation & configuration
-----------------------------------------

[thumbnail="screenshot/gui-node-ceph-install.png"]

With {pve} you have the benefit of an easy to use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will be
offered to install it now.

The wizard is divided into different sections, where each needs to be
finished successfully in order to use Ceph. After starting the installation,
the wizard will download and install all the required packages from {pve}'s
Ceph repository.

After finishing the first step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
automatically to all remaining cluster members through {pve}'s clustered
xref:chapter_pmxcfs[configuration file system (pmxcfs)].

The configuration step includes the following settings:

* *Public Network:* You should set up a dedicated network for Ceph; this
setting is required. Separating your Ceph traffic is highly recommended,
because otherwise it could cause trouble with other latency dependent services,
e.g., cluster communication, and may decrease Ceph's performance.

[thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]

* *Cluster Network:* As an optional step, you can go even further and
separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
as well. This will relieve the public network and could lead to
significant performance improvements, especially in big clusters.

You have two more options which are considered advanced and therefore
should only be changed if you are an expert (see the sketch after the list).

* *Number of replicas*: Defines how often an object is replicated.
* *Minimum replicas*: Defines the minimum number of required replicas
 for I/O to be marked as complete.

Additionally, you need to choose your first monitor node; this is required.

That's it, you should see a success page as the last step, with further
instructions on how to proceed. You are now ready to start using Ceph,
even though you will still need to create additional xref:pve_ceph_monitors[monitors],
create some xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].

The rest of this chapter will guide you on how to get the most out of
your {pve} based Ceph setup. This includes the aforementioned topics and
more, like xref:pveceph_fs[CephFS], which is a very handy addition to your
new Ceph cluster.

[[pve_ceph_install]]
Installation of Ceph Packages
-----------------------------
Use the {pve} Ceph installation wizard (recommended) or run the following
command on each node:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.


Creating initial Ceph configuration
-----------------------------------

[thumbnail="screenshot/gui-ceph-config.png"]

Use the {pve} Ceph installation wizard (recommended) or run the
following command on one node:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. That file is automatically distributed to
all {pve} nodes by using xref:chapter_pmxcfs[pmxcfs]. The command also
creates a symbolic link from `/etc/ceph/ceph.conf` pointing to that file.
So you can simply run Ceph commands without the need to specify a
configuration file.

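For orientation, the dedicated network typically ends up in the `[global]`
section of the generated file. A rough, version-dependent sketch (do not copy
this verbatim; `pveceph init` or the wizard generates the authoritative content):

----
[global]
     public network = 10.10.10.0/24
     cluster network = 10.10.10.0/24
----
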
[[pve_ceph_monitors]]
Creating Ceph Monitors
----------------------

[thumbnail="screenshot/gui-ceph-monitor.png"]

The Ceph Monitor (MON)
footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
maintains a master copy of the cluster map. For high availability, you need to
have at least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors as long
as your cluster is small to midsize; only really large clusters will
need more than that.

On each node where you want to place a monitor (three monitors are recommended),
create it by using the 'Ceph -> Monitor' tab in the GUI or run:


[source,bash]
----
pveceph mon create
----

This will also install the needed Ceph Manager ('ceph-mgr') by default. If you
do not want to install a manager, specify the '-exclude-manager' option.


Destroying Ceph Monitor
-----------------------

To remove a Ceph Monitor via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the MON and click the **Destroy**
button.

To remove a Ceph Monitor via the CLI, first connect to the node on which the MON
is running. Then execute the following command:
[source,bash]
----
pveceph mon destroy
----

NOTE: At least three Monitors are needed for quorum.


[[pve_ceph_manager]]
Creating Ceph Manager
---------------------

The Manager daemon runs alongside the monitors, providing an interface for
monitoring the cluster. Since the Ceph Luminous release, the
ceph-mgr footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon
is required. During monitor installation, the Ceph Manager will be installed as
well.

[source,bash]
----
pveceph mgr create
----

NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.


Destroying Ceph Manager
-----------------------

To remove a Ceph Manager via the GUI, first select a node in the tree view and
go to the **Ceph -> Monitor** panel. Select the Manager and click the
**Destroy** button.

To remove a Ceph Manager via the CLI, first connect to the node on which the
Manager is running. Then execute the following command:
[source,bash]
----
pveceph mgr destroy
----

NOTE: A Ceph cluster can function without a Manager, but certain functions like
the cluster status or usage require a running Manager.


[[pve_ceph_osds]]
Creating Ceph OSDs
------------------

[thumbnail="screenshot/gui-ceph-osd-status.png"]

Create OSDs via the GUI or via the CLI as follows:

[source,bash]
----
pveceph osd create /dev/sd[X]
----

TIP: We recommend a Ceph cluster size starting with 12 OSDs, distributed evenly
among your at least three nodes (4 OSDs on each node).

If the disk was in use before (e.g. for ZFS/RAID/OSD), to remove the partition
table, boot sector and any other OSD leftover, the following command should be
sufficient.

[source,bash]
----
ceph-volume lvm zap /dev/sd[X] --destroy
----

WARNING: The above command will destroy data on the disk!

Ceph Bluestore
~~~~~~~~~~~~~~

Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced, the so called Bluestore
footnote:[Ceph Bluestore http://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs since Ceph Luminous.

[source,bash]
----
pveceph osd create /dev/sd[X]
----

.Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if not
specified separately.

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
----

You can directly choose the size for those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used (see the example after the list):

* bluestore_block_{db,wal}_size from Ceph configuration...
** ... database, section 'osd'
** ... database, section 'global'
** ... file, section 'osd'
** ... file, section 'global'
* 10% (DB)/1% (WAL) of OSD size

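For example, to override the fallback sizes cluster-wide, you could set the
configuration values mentioned above yourself. A minimal sketch (the value is
illustrative and, to the best of our knowledge, interpreted as bytes):

----
[osd]
     # ~60 GiB DB partitions for newly created OSDs
     bluestore_block_db_size = 64424509440
----
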
NOTE: The DB stores BlueStore’s internal metadata and the WAL is BlueStore’s
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.


Ceph Filestore
~~~~~~~~~~~~~~

Before Ceph Luminous, Filestore was used as the default storage type for Ceph OSDs.
Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
'pveceph' anymore. If you still want to create filestore OSDs, use
'ceph-volume' directly.

[source,bash]
----
ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
----

Destroying Ceph OSDs
--------------------

To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
to the **Ceph -> OSD** panel. Select the OSD to destroy. Next, click the **OUT**
button. Once the OSD status has changed from `in` to `out`, click the **STOP**
button. As soon as the status has changed from `up` to `down`, select **Destroy**
from the `More` drop-down menu.

To remove an OSD via the CLI run the following commands.
[source,bash]
----
ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
----
NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Until this time, no
data is lost.

The following command destroys the OSD. Specify the '-cleanup' option to
additionally destroy the partition table.
[source,bash]
----
pveceph osd destroy <ID>
----
WARNING: The above command will destroy data on the disk!


[[pve_ceph_pools]]
Creating Ceph Pools
-------------------

[thumbnail="screenshot/gui-ceph-pools.png"]

A pool is a logical group for storing objects. It holds **P**lacement
**G**roups (`PG`, `pg_num`), a collection of objects.

When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas** for serving objects in a degraded
state.

NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
'HEALTH_WARN' if you have too few or too many PGs in your cluster.

It is advised to calculate the PG number depending on your setup; you can find
the formula and the PG calculator footnote:[PG calculator
http://ceph.com/pgcalc/] online. While PGs can be increased later on, they can
never be decreased.
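
The commonly cited rule of thumb behind the PG calculator is, roughly (treat
this only as a starting point, not an exact prescription):

----
# pg_num per pool is roughly (number of OSDs * 100) / replica count (size),
# rounded to the nearest power of two.
#
# Example: 12 OSDs, size 3  ->  (12 * 100) / 3 = 400  ->  pg_num = 512
----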


You can create pools through the command line or on the GUI on each PVE host under
**Ceph -> Pools**.

[source,bash]
----
pveceph pool create <name>
----

If you would like to automatically also get a storage definition for your pool,
mark the checkbox "Add storages" in the GUI or use the command line option
'--add_storages' at pool creation.

Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
http://docs.ceph.com/docs/luminous/rados/operations/pools/]
manual.


Destroying Ceph Pools
---------------------

To destroy a pool via the GUI, select a node in the tree view and go to the
**Ceph -> Pools** panel. Select the pool to destroy and click the **Destroy**
button. To confirm the destruction of the pool, you need to enter the pool name.

Run the following command to destroy a pool. Specify the '-remove_storages' option to
also remove the associated storage.
[source,bash]
----
pveceph pool destroy <name>
----

NOTE: Deleting the data of a pool is a background task and can take some time.
You will notice that the data usage in the cluster is decreasing.

[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------
The foundation of Ceph is its algorithm, **C**ontrolled **R**eplication
**U**nder **S**calable **H**ashing
(CRUSH footnote:[CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf]).

CRUSH calculates where to store and retrieve data from; this has the
advantage that no central index service is needed. CRUSH works with a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
section CRUSH map footnote:[CRUSH map http://docs.ceph.com/docs/luminous/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g. failure domains), while maintaining the desired
distribution.

A common use case is to use different classes of disks for different Ceph pools.
For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.

The device classes can be seen in the 'ceph osd tree' output. These classes
represent their own root bucket, which can be seen with the below command.

[source, bash]
----
ceph osd crush tree --show-shadow
----

Example output from the above command:

[source, bash]
----
ID CLASS WEIGHT TYPE NAME
-16 nvme 2.18307 root default~nvme
-13 nvme 0.72769 host sumi1~nvme
 12 nvme 0.72769 osd.12
-14 nvme 0.72769 host sumi2~nvme
 13 nvme 0.72769 osd.13
-15 nvme 0.72769 host sumi3~nvme
 14 nvme 0.72769 osd.14
 -1 7.70544 root default
 -3 2.56848 host sumi1
 12 nvme 0.72769 osd.12
 -5 2.56848 host sumi2
 13 nvme 0.72769 osd.13
 -7 2.56848 host sumi3
 14 nvme 0.72769 osd.14
----

To let a pool distribute its objects only on a specific device class, you need
to create a ruleset with the specific class first.

[source, bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----

[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g. nvme, ssd, hdd)
|===

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

[source, bash]
----
ceph osd pool set <pool-name> crush_rule <rule-name>
----
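
Putting both steps together, a hypothetical example that creates a rule named
`ssd-rule` (distributing replicas across hosts and restricting them to OSDs of
class `ssd`) and assigns it to an existing pool called `my-pool` could look like
this:

[source, bash]
----
ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd pool set my-pool crush_rule ssd-rule
----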

TIP: If the pool already contains objects, all of these have to be moved
accordingly. Depending on your setup, this may introduce a big performance hit on
your cluster. As an alternative, you can create a new pool and move disks
separately.


Ceph Client
-----------

[thumbnail="screenshot/gui-ceph-log.png"]

You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes itself, then this will be
done automatically.

NOTE: The file name needs to be `<storage_id>` + `.keyring` - `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`, which is
`my-ceph-storage` in the following example:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----

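For reference, such a storage entry in `/etc/pve/storage.cfg` could look roughly
like the following sketch (property names as used by the {pve} RBD storage
plugin; the pool name and monitor addresses are placeholders for an external
cluster):

----
rbd: my-ceph-storage
     pool rbd
     content images,rootdir
     monhost 10.10.10.11 10.10.10.12 10.10.10.13
     username admin
     krbd 0
----
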
[[pveceph_fs]]
CephFS
------

Ceph also provides a filesystem, running on top of the same object storage as
RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map
the RADOS backed objects to files and directories, allowing Ceph to provide a
POSIX-compliant replicated filesystem. This allows one to have a clustered,
highly available, shared filesystem in an easy way if Ceph is already used. Its
Metadata Servers guarantee that files get balanced out over the whole Ceph
cluster; this way even high load will not overload a single host, which can be
an issue with traditional shared filesystem approaches, like `NFS`, for
example.

[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]

{pve} supports both using an existing xref:storage_cephfs[CephFS as storage]
to save backups, ISO files or container templates, and creating a
hyper-converged CephFS itself.


[[pveceph_fs_mds]]
Metadata Server (MDS)
~~~~~~~~~~~~~~~~~~~~~

CephFS needs at least one Metadata Server to be configured and running to be
able to work. One can simply create one through the {pve} web GUI's `Node ->
CephFS` panel or on the command line with:

----
pveceph mds create
----

Multiple metadata servers can be created in a cluster. But with the default
settings, only one can be active at any time. If an MDS, or its node, becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
One can speed up the hand-over between the active and a standby MDS by using
the 'hotstandby' parameter option on create, or if you have already created it
you may set/add:

----
mds standby replay = true
----

in the respective MDS section of ceph.conf. With this enabled, this specific MDS
will always poll the active one, so that it can take over faster as it is in a
`warm` state. But naturally, the active polling will cause some additional
performance impact on your system and the active `MDS`.
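
As a small sketch, such a section in `ceph.conf` could look like this (the MDS
name `pve1` is just a placeholder for the name of your MDS instance):

----
[mds.pve1]
     mds standby replay = true
----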

.Multiple Active MDS

Since Luminous (12.2.x) you can also have multiple active metadata servers
running, but this is normally only useful for a high count of parallel clients,
as otherwise the `MDS` is seldom the bottleneck. If you want to set this up,
please refer to the Ceph documentation. footnote:[Configuring multiple active MDS
daemons http://docs.ceph.com/docs/luminous/cephfs/multimds/]

[[pveceph_fs_create]]
Create a CephFS
~~~~~~~~~~~~~~~

With {pve}'s CephFS integration, you can create a CephFS easily over the
Web GUI, the CLI or an external API interface. Some prerequisites are required
for this to work:

.Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages] - if this was already done some
 time ago, you might want to rerun it on an up to date system to ensure that
 all CephFS related packages get installed as well.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]

After all of this has been checked and done, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command line tool `pveceph`,
for example with:

----
pveceph fs create --pg_num 128 --add-storage
----

This creates a CephFS named `'cephfs'' using a pool for its data named
`'cephfs_data'' with `128` placement groups and a pool for its metadata named
`'cephfs_metadata'' with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding a fitting placement group
number (`pg_num`) for your setup footnote:[Ceph Placement Groups
http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/].
Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
storage configuration after it was created successfully.

Destroy CephFS
~~~~~~~~~~~~~~

WARNING: Destroying a CephFS will render all of its data unusable; this cannot be
undone!

If you really want to destroy an existing CephFS, you first need to stop, or
destroy, all metadata servers (`MDS`). You can destroy them either over the Web
GUI or the command line interface, with:

----
pveceph mds destroy NAME
----
on each {pve} node hosting an MDS daemon.

Then, you can remove (destroy) the CephFS by issuing:

----
ceph fs rm NAME --yes-i-really-mean-it
----
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools; this can be done either over the Web GUI or the CLI
with:

----
pveceph pool destroy NAME
----


Ceph monitoring and troubleshooting
-----------------------------------
A good start is to continuously monitor Ceph health from the very beginning of
the initial deployment, either through the Ceph tools themselves or by accessing
the status through the {pve} link:api-viewer/index.html[API].

The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.

----
# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
----

To get a more detailed view, every Ceph service has a log file under
`/var/log/ceph/`, and if there is not enough detail, the log level can be
adjusted footnote:[Ceph log and debugging http://docs.ceph.com/docs/luminous/rados/troubleshooting/log-and-debug/].

You can find more information about troubleshooting
footnote:[Ceph troubleshooting http://docs.ceph.com/docs/luminous/rados/troubleshooting/]
a Ceph cluster on the official website.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]