[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Manage Ceph Services on Proxmox VE Nodes
========================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="screenshot/gui-ceph-status.png"]

{pve} unifies your compute and storage systems, i.e. you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, {pve} has the ability to run and manage Ceph storage directly
on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.

.Some advantages of Ceph on {pve} are:
- Easy setup and management with CLI and GUI support
- Thin provisioning
- Snapshot support
- Self healing
- Scalable to the exabyte level
- Setup pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on economical commodity hardware
- No need for hardware RAID controllers
- Open source

For small to mid sized deployments, it is possible to install a Ceph server for
RADOS Block Devices (RBD) directly on your {pve} cluster nodes, see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
hardware has plenty of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.

.Ceph consists of multiple Daemons footnote:[Ceph intro http://docs.ceph.com/docs/luminous/start/intro/], for use as an RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)

TIP: We highly recommend getting familiar with Ceph's architecture
footnote:[Ceph architecture http://docs.ceph.com/docs/luminous/architecture/]
and vocabulary
footnote:[Ceph glossary http://docs.ceph.com/docs/luminous/glossary].


Precondition
------------

To build a hyper-converged Proxmox + Ceph Cluster, there should be at least
three (preferably identical) servers for the setup.

Check also the recommendations from
http://docs.ceph.com/docs/luminous/start/hardware-recommendations/[Ceph's website].

.CPU
A higher CPU core frequency reduces latency and should be preferred. As a simple
rule of thumb, you should assign a CPU core (or thread) to each Ceph service to
provide enough resources for stable and durable Ceph performance.

.Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully monitored. In addition to the intended workload from virtual machines
and containers, Ceph needs enough memory available to provide good and stable
performance. As a rule of thumb, for roughly 1 TiB of data, 1 GiB of memory
will be used by an OSD (for example, a node with 6 OSDs on 4 TiB disks should
reserve roughly 24 GiB for the OSD daemons alone). OSD caching will use
additional memory.

.Network
We recommend a network bandwidth of at least 10 GbE, used exclusively for Ceph.
A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 GbE switches available.

The volume of traffic, especially during recovery, will interfere with other
services on the same network and may even break the {pve} cluster stack.

Further, estimate your bandwidth needs. While one HDD might not saturate a 1 Gb
link, multiple HDD OSDs per node can, and modern NVMe SSDs will even saturate
10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwidth
will ensure that it isn't your bottleneck and won't be anytime soon; 25, 40 or
even 100 Gbps are possible.

.Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
might take long. It is recommended that you use SSDs instead of HDDs in small
setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. This fact and the
higher cost may make a xref:pve_ceph_device_classes[class based] separation of
pools appealing. Another possibility to speed up OSDs is to use a faster disk
as journal or DB/**W**rite-**A**head-**L**og device, see
xref:pve_ceph_osds[creating Ceph OSDs]. If a faster disk is used for multiple
OSDs, a proper balance between OSD and WAL / DB (or journal) disk must be
selected, otherwise the faster disk becomes the bottleneck for all linked OSDs.

Aside from the disk type, Ceph performs best with an evenly sized and
distributed number of disks per node. For example, 4 x 500 GB disks in each node
are better than a mixed setup with a single 1 TB and three 250 GB disks.

One also needs to balance OSD count and single OSD capacity. More capacity
allows you to increase storage density, but it also means that a single OSD
failure forces Ceph to recover more data at once.

.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn't improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph use case and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.

WARNING: Avoid RAID controllers; use host bus adapters (HBA) instead.

NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to adapt them to your specific needs,
test your setup and monitor health and performance continuously.

[[pve_ceph_install_wizard]]
Initial Ceph installation & configuration
-----------------------------------------

[thumbnail="screenshot/gui-node-ceph-install.png"]

With {pve} you have the benefit of an easy to use installation wizard
for Ceph. Click on one of your cluster nodes and navigate to the Ceph
section in the menu tree. If Ceph is not already installed, you will be
offered to install it now.

The wizard is divided into different sections, where each needs to be
finished successfully in order to use Ceph. After starting the installation,
the wizard will download and install all the required packages from {pve}'s
Ceph repository.

After finishing the first step, you will need to create a configuration.
This step is only needed once per cluster, as this configuration is distributed
automatically to all remaining cluster members through {pve}'s clustered
xref:chapter_pmxcfs[configuration file system (pmxcfs)].

The configuration step includes the following settings:

* *Public Network:* You should set up a dedicated network for Ceph; this
setting is required. Separating your Ceph traffic is highly recommended,
because otherwise it could interfere with other latency-dependent services,
e.g., cluster communication, and may decrease Ceph's performance.

[thumbnail="screenshot/gui-node-ceph-install-wizard-step2.png"]

* *Cluster Network:* As an optional step, you can go even further and
separate the xref:pve_ceph_osds[OSD] replication & heartbeat traffic
as well. This will relieve the public network and could lead to
significant performance improvements, especially in big clusters.

You have two more options which are considered advanced and therefore
should only be changed if you are an expert.

* *Number of replicas*: Defines how often an object is replicated.
* *Minimum replicas*: Defines the minimum number of required replicas
 for I/O to be marked as complete.

Additionally, you need to choose your first monitor node; this is required.

That's it, you should see a success page as the last step, with further
instructions on how to proceed. You are now prepared to start using Ceph,
even though you will still need to create additional xref:pve_ceph_monitors[monitors],
create some xref:pve_ceph_osds[OSDs] and at least one xref:pve_ceph_pools[pool].

The rest of this chapter will guide you on how to get the most out of
your {pve} based Ceph setup. This includes the aforementioned tasks and
more, such as xref:pveceph_fs[CephFS], which is a very handy addition to your
new Ceph cluster.

[[pve_ceph_install]]
Installation of Ceph Packages
-----------------------------
Use the {pve} Ceph installation wizard (recommended) or run the following
command on each node:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.


Creating initial Ceph configuration
-----------------------------------

[thumbnail="screenshot/gui-ceph-config.png"]

Use the {pve} Ceph installation wizard (recommended) or run the
following command on one node:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial configuration at `/etc/pve/ceph.conf` with a
dedicated network for Ceph. That file is automatically distributed to
all {pve} nodes by using xref:chapter_pmxcfs[pmxcfs]. The command also
creates a symbolic link from `/etc/ceph/ceph.conf` pointing to that file.
So you can simply run Ceph commands without the need to specify a
configuration file.
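
For orientation, the generated file contains a `[global]` section referencing
the chosen network. The following excerpt is only a sketch; the `fsid`, the
addresses and the exact set of options are placeholders and will differ in
your setup:

----
[global]
     # illustrative values only - your fsid and networks will differ
     fsid = 00000000-0000-0000-0000-000000000000
     public network = 10.10.10.0/24
     cluster network = 10.10.10.0/24
----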


[[pve_ceph_monitors]]
Creating Ceph Monitors
----------------------

[thumbnail="screenshot/gui-ceph-monitor.png"]

The Ceph Monitor (MON)
footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
maintains a master copy of the cluster map. For high availability, you need to
have at least 3 monitors. One monitor will already be installed if you
used the installation wizard. You won't need more than 3 monitors as long
as your cluster is small to midsize; only really large clusters will
need more than that.

On each node where you want to place a monitor (three monitors are recommended),
create it by using the 'Ceph -> Monitor' tab in the GUI or run:


[source,bash]
----
pveceph mon create
----

This will also install the needed Ceph Manager ('ceph-mgr') by default. If you
do not want to install a manager, specify the '-exclude-manager' option.
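
To verify that the new monitor has joined the quorum, you can query the cluster
with the standard Ceph commands shown below; this is just an optional check:

[source,bash]
----
# short monitor summary, including quorum membership
ceph mon stat
# detailed quorum information
ceph quorum_status --format json-pretty
----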


[[pve_ceph_manager]]
Creating Ceph Manager
----------------------

The Manager daemon runs alongside the monitors, providing an interface for
monitoring the cluster. Since the Ceph Luminous release, the
ceph-mgr footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon
is required. During monitor installation, the Ceph Manager will be installed as
well.

NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability, install more than one manager.

[source,bash]
----
pveceph mgr create
----


[[pve_ceph_osds]]
Creating Ceph OSDs
------------------

[thumbnail="screenshot/gui-ceph-osd-status.png"]

You can create OSDs via the GUI or via the CLI as follows:

[source,bash]
----
pveceph osd create /dev/sd[X]
----

TIP: We recommend a Ceph cluster with at least 12 OSDs, distributed evenly
among your (at least three) nodes, i.e. 4 OSDs on each node.

If the disk was used before (e.g. for ZFS, RAID or as an OSD), the following
command should be sufficient to remove the partition table, boot sector and
any other OSD leftovers.

[source,bash]
----
ceph-volume lvm zap /dev/sd[X] --destroy
----

WARNING: The above command will destroy data on the disk!

Ceph Bluestore
~~~~~~~~~~~~~~

Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced, the so-called Bluestore
footnote:[Ceph Bluestore http://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs since Ceph Luminous.

[source,bash]
----
pveceph osd create /dev/sd[X]
----

.Block.db and block.wal

If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if
not specified separately.

[source,bash]
----
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y] -wal_dev /dev/sd[Z]
----

You can directly choose the size for those with the '-db_size' and '-wal_size'
parameters respectively. If they are not given, the following values (in order)
will be used; see also the configuration sketch after this list:

* bluestore_block_{db,wal}_size from ceph configuration...
** ... database, section 'osd'
** ... database, section 'global'
** ... file, section 'osd'
** ... file, section 'global'
* 10% (DB)/1% (WAL) of OSD size
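
For illustration, if you wanted every new OSD to get a 60 GiB DB and a 2 GiB
WAL, the corresponding upstream options (`bluestore_block_db_size` and
`bluestore_block_wal_size`, given in bytes) could be set in the `[osd]` section
of the Ceph configuration; the sizes below are placeholder values only:

----
[osd]
     # example sizes only: 60 GiB DB and 2 GiB WAL, in bytes
     bluestore_block_db_size = 64424509440
     bluestore_block_wal_size = 2147483648
----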

NOTE: The DB stores BlueStore's internal metadata and the WAL is BlueStore's
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.


Ceph Filestore
~~~~~~~~~~~~~~

Before Ceph Luminous, Filestore was used as the default storage type for Ceph
OSDs. Starting with Ceph Nautilus, {pve} does not support creating such OSDs
with 'pveceph' anymore. If you still want to create filestore OSDs, use
'ceph-volume' directly.

[source,bash]
----
ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
----

Destroying Ceph OSDs
--------------------

To remove an OSD via the GUI, first select a {PVE} node in the tree view and go
to the **Ceph -> OSD** panel. Select the OSD to destroy. Next, click the **OUT**
button. Once the OSD status has changed from `in` to `out`, click the **STOP**
button. As soon as the status has changed from `up` to `down`, select **Destroy**
from the `More` drop-down menu.

To remove an OSD via the CLI, run the following commands:
[source,bash]
----
ceph osd out <ID>
systemctl stop ceph-osd@<ID>.service
----
NOTE: The first command instructs Ceph not to include the OSD in the data
distribution. The second command stops the OSD service. Until this time, no
data is lost.
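
If you want to check beforehand whether the OSD can be removed without reducing
data redundancy, you can use the upstream `ceph osd safe-to-destroy` command
(available since Ceph Luminous) as an optional safeguard:

[source,bash]
----
# succeeds only once the OSD can be removed without data loss
ceph osd safe-to-destroy osd.<ID>
----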

The following command destroys the OSD. Specify the '-cleanup' option to
additionally destroy the partition table.
[source,bash]
----
pveceph osd destroy <ID>
----
WARNING: The above command will destroy data on the disk!


[[pve_ceph_pools]]
Creating Ceph Pools
-------------------

[thumbnail="screenshot/gui-ceph-pools.png"]

A pool is a logical group for storing objects. It holds **P**lacement
**G**roups (`PG`, `pg_num`), a collection of objects.

When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas** for serving objects in a degraded
state.

NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
'HEALTH_WARNING' if you have too few or too many PGs in your cluster.

It is advised to calculate the PG number depending on your setup; you can find
the formula and the PG calculator footnote:[PG calculator
http://ceph.com/pgcalc/] online. While PGs can be increased later on, they can
never be decreased.
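
As a rough rule of thumb (the PG calculator linked above is more precise), the
PG count for a pool can be estimated from the number of OSDs, a target of
around 100 PGs per OSD, and the pool's replica size, rounded up to the next
power of two. A small worked example, assuming 12 OSDs and 3 replicas:

----
Total PGs = (OSDs x 100) / size
          = (12 x 100) / 3
          = 400  -> next power of two: 512
----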


You can create pools through the command line or the GUI on each PVE host under
**Ceph -> Pools**.

[source,bash]
----
pveceph pool create <name>
----

If you would also like to automatically get a storage definition for your pool,
mark the checkbox "Add storages" in the GUI or use the command line option
'--add_storages' at pool creation.

Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
http://docs.ceph.com/docs/luminous/rados/operations/pools/]
manual.

[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------
The foundation of Ceph is its algorithm, **C**ontrolled **R**eplication
**U**nder **S**calable **H**ashing
(CRUSH footnote:[CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf]).

CRUSH calculates where to store and retrieve data from; this has the
advantage that no central index service is needed. CRUSH works with a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
section CRUSH map footnote:[CRUSH map http://docs.ceph.com/docs/luminous/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g. across failure domains), while maintaining the
desired distribution.

A common use case is to use different classes of disks for different Ceph pools.
For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.

The device classes can be seen in the 'ceph osd tree' output. These classes
represent their own root bucket, which can be seen with the below command.

[source, bash]
----
ceph osd crush tree --show-shadow
----

Example output from the above command:

[source, bash]
----
ID CLASS WEIGHT TYPE NAME
-16 nvme 2.18307 root default~nvme
-13 nvme 0.72769 host sumi1~nvme
 12 nvme 0.72769 osd.12
-14 nvme 0.72769 host sumi2~nvme
 13 nvme 0.72769 osd.13
-15 nvme 0.72769 host sumi3~nvme
 14 nvme 0.72769 osd.14
 -1 7.70544 root default
 -3 2.56848 host sumi1
 12 nvme 0.72769 osd.12
 -5 2.56848 host sumi2
 13 nvme 0.72769 osd.13
 -7 2.56848 host sumi3
 14 nvme 0.72769 osd.14
----

To let a pool distribute its objects only on a specific device class, you need
to create a ruleset for the specific class first.

[source, bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----

[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default Ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g. nvme, ssd, hdd)
|===

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

[source, bash]
----
ceph osd pool set <pool-name> crush_rule <rule-name>
----
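
For example, to restrict a pool to NVMe backed OSDs, you could first create a
rule for the `nvme` class and then assign it to the pool; the rule and pool
names below (`nvme-only`, `vm-pool`) are just placeholders:

[source, bash]
----
# create a replicated rule limited to OSDs of device class "nvme"
ceph osd crush rule create-replicated nvme-only default host nvme
# let the (hypothetical) pool "vm-pool" use that rule
ceph osd pool set vm-pool crush_rule nvme-only
----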

TIP: If the pool already contains objects, all of these have to be moved
accordingly. Depending on your setup, this may introduce a big performance hit
on your cluster. As an alternative, you can create a new pool and move disks
separately.


Ceph Client
-----------

[thumbnail="screenshot/gui-ceph-log.png"]

You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes themselves, then this will be
done automatically.

NOTE: The file name needs to be `<storage_id>` + `.keyring`, where `<storage_id>`
is the expression after 'rbd:' in `/etc/pve/storage.cfg`, which is
`my-ceph-storage` in the following example:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----
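
For reference, the matching storage entry in `/etc/pve/storage.cfg` could look
roughly like the sketch below; the pool name, monitor addresses and content
types are placeholders for an external cluster and will differ in your setup:

----
rbd: my-ceph-storage
     pool my-pool
     monhost 10.10.10.11 10.10.10.12 10.10.10.13
     content images,rootdir
     username admin
----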
0840a663 530
58f95dd7
TL
531[[pveceph_fs]]
532CephFS
533------
534
535Ceph provides also a filesystem running on top of the same object storage as
536RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map
537the RADOS backed objects to files and directories, allowing to provide a
538POSIX-compliant replicated filesystem. This allows one to have a clustered
539highly available shared filesystem in an easy way if ceph is already used. Its
540Metadata Servers guarantee that files get balanced out over the whole Ceph
541cluster, this way even high load will not overload a single host, which can be
d180eb39 542an issue with traditional shared filesystem approaches, like `NFS`, for
58f95dd7
TL
543example.
544
1e834cb2
TL
545[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]
546
2394c306 547{pve} supports both, using an existing xref:storage_cephfs[CephFS as storage]
58f95dd7
TL
548to save backups, ISO files or container templates and creating a
549hyper-converged CephFS itself.
550
551
552[[pveceph_fs_mds]]
553Metadata Server (MDS)
554~~~~~~~~~~~~~~~~~~~~~
555
556CephFS needs at least one Metadata Server to be configured and running to be
557able to work. One can simply create one through the {pve} web GUI's `Node ->
558CephFS` panel or on the command line with:
559
560----
561pveceph mds create
562----
563
564Multiple metadata servers can be created in a cluster. But with the default
565settings only one can be active at any time. If an MDS, or its node, becomes
566unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
567One can speed up the hand-over between the active and a standby MDS up by using
568the 'hotstandby' parameter option on create, or if you have already created it
569you may set/add:
570
571----
572mds standby replay = true
573----
574
575in the ceph.conf respective MDS section. With this enabled, this specific MDS
576will always poll the active one, so that it can take over faster as it is in a
3580eb13 577`warm` state. But naturally, the active polling will cause some additional
58f95dd7
TL
578performance impact on your system and active `MDS`.
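
As an illustration, such a setting lives in a per-daemon section of
`ceph.conf`; the MDS name `pve-node1` below is just a placeholder for the name
of your MDS daemon:

----
[mds.pve-node1]
     mds standby replay = true
----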

.Multiple Active MDS

Since Luminous (12.2.x) you can also have multiple active metadata servers
running, but this is normally only useful for a high number of parallel clients,
as otherwise the `MDS` is seldom the bottleneck. If you want to set this up,
please refer to the Ceph documentation. footnote:[Configuring multiple active MDS
daemons http://docs.ceph.com/docs/luminous/cephfs/multimds/]

[[pveceph_fs_create]]
Create a CephFS
~~~~~~~~~~~~~~~

With {pve}'s CephFS integration, you can create a CephFS easily over the
Web GUI, the CLI or an external API interface. Some prerequisites are required
for this to work:

.Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages], if this was already done some
 time ago you might want to rerun it on an up to date system to ensure that
 all CephFS related packages also get installed.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]

After all this is checked and done, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command line tool `pveceph`,
for example with:

----
pveceph fs create --pg_num 128 --add-storage
----

This creates a CephFS named `'cephfs'' using a pool for its data named
`'cephfs_data'' with `128` placement groups and a pool for its metadata named
`'cephfs_metadata'' with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding a fitting placement group
number (`pg_num`) for your setup footnote:[Ceph Placement Groups
http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/].
Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
storage configuration after it was created successfully.
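
The resulting entry in `/etc/pve/storage.cfg` could then look roughly like the
following sketch; the mount path and content types are examples and may differ
in your setup:

----
cephfs: cephfs
     path /mnt/pve/cephfs
     content backup,iso,vztmpl
----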

Destroy CephFS
~~~~~~~~~~~~~~

WARNING: Destroying a CephFS will render all its data unusable; this cannot be
undone!

If you really want to destroy an existing CephFS, you first need to stop, or
destroy, all metadata servers (`MDS`). You can destroy them either over the Web
GUI or the command line interface, with:

----
pveceph mds destroy NAME
----
on each {pve} node hosting an MDS daemon.

Then, you can remove (destroy) the CephFS by issuing:

----
ceph fs rm NAME --yes-i-really-mean-it
----
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools; this can be done either over the Web GUI or the CLI
with:

----
pveceph pool destroy NAME
----


Ceph monitoring and troubleshooting
-----------------------------------
A good start is to continuously monitor the Ceph health from the start of the
initial deployment, either through the Ceph tools themselves or by accessing
the status through the {pve} link:api-viewer/index.html[API].

The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.

----
# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
----

To get a more detailed view, every Ceph service has a log file under
`/var/log/ceph/`, and if there is not enough detail, the log level can be
adjusted footnote:[Ceph log and debugging http://docs.ceph.com/docs/luminous/rados/troubleshooting/log-and-debug/].
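
For example, the log level of a single OSD can be raised at runtime via the
upstream `ceph tell ... injectargs` mechanism; the daemon ID and debug value
below are placeholders, and such a change is not persistent across daemon
restarts unless it is also set in `ceph.conf`:

[source,bash]
----
# temporarily raise the debug level of osd.0 (placeholder ID)
ceph tell osd.0 injectargs --debug-osd 5/5
----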

You can find more information about troubleshooting
footnote:[Ceph troubleshooting http://docs.ceph.com/docs/luminous/rados/troubleshooting/]
a Ceph cluster on the official website.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]