[[chapter_pveceph]]
ifdef::manvolnum[]
pveceph(1)
==========
:pve-toplevel:

NAME
----

pveceph - Manage Ceph Services on Proxmox VE Nodes

SYNOPSIS
--------

include::pveceph.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Manage Ceph Services on Proxmox VE Nodes
========================================
:pve-toplevel:
endif::manvolnum[]

[thumbnail="screenshot/gui-ceph-status.png"]

{pve} unifies your compute and storage systems, that is, you can use the same
physical nodes within a cluster for both computing (processing VMs and
containers) and replicated storage. The traditional silos of compute and
storage resources can be wrapped up into a single hyper-converged appliance.
Separate storage networks (SANs) and connections via network attached storage
(NAS) disappear. With the integration of Ceph, an open source software-defined
storage platform, {pve} has the ability to run and manage Ceph storage directly
on the hypervisor nodes.

Ceph is a distributed object store and file system designed to provide
excellent performance, reliability and scalability.

.Some advantages of Ceph on {pve} are:
- Easy setup and management with CLI and GUI support
- Thin provisioning
- Snapshot support
- Self healing
- Scalable to the exabyte level
- Setup pools with different performance and redundancy characteristics
- Data is replicated, making it fault tolerant
- Runs on economical commodity hardware
- No need for hardware RAID controllers
- Open source

For small to medium-sized deployments, it is possible to install a Ceph server
for RADOS Block Devices (RBD) directly on your {pve} cluster nodes, see
xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]. Recent
hardware has plenty of CPU power and RAM, so running storage services
and VMs on the same node is possible.

To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.

.Ceph consists of several daemons footnote:[Ceph intro http://docs.ceph.com/docs/luminous/start/intro/], for use as an RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)

TIP: We highly recommend getting familiar with Ceph's architecture
footnote:[Ceph architecture http://docs.ceph.com/docs/luminous/architecture/]
and vocabulary
footnote:[Ceph glossary http://docs.ceph.com/docs/luminous/glossary].


Precondition
------------

To build a hyper-converged Proxmox + Ceph Cluster there should be at least
three (preferably identical) servers for the setup.

Also check the recommendations from
http://docs.ceph.com/docs/luminous/start/hardware-recommendations/[Ceph's website].

.CPU
The higher the core frequency, the better, as this reduces latency. Among other
things, this benefits the services of Ceph, as they can process data faster.
To simplify planning, you should assign a CPU core (or thread) to each Ceph
service to provide enough resources for stable and durable Ceph performance.

.Memory
Especially in a hyper-converged setup, the memory consumption needs to be
carefully monitored. In addition to the intended workload (VMs / containers),
Ceph needs enough memory to provide good and stable performance. As a rule of
thumb, an OSD will use roughly 1 GiB of memory for each 1 TiB of data it holds,
plus additional memory for OSD caching.
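
As a rough, illustrative sizing sketch based on this rule of thumb (the numbers
are examples only, not recommendations):

----
4 OSDs with 4 TiB each:  4 x 4 GiB = ~16 GiB for the OSDs
OS, MON, MGR:            a few additional GiB
VMs / Containers:        their configured memory plus overhead
----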

.Network
We recommend a network bandwidth of at least 10 GbE, used exclusively for Ceph.
A meshed network setup
footnote:[Full Mesh Network for Ceph {webwiki-url}Full_Mesh_Network_for_Ceph_Server]
is also an option if there are no 10 GbE switches available.

To be explicit about the network: since Ceph is a distributed network storage,
its traffic must be put on its own physical network. The volume of traffic,
especially during recovery, will otherwise interfere with other services on
the same network.

Further, estimate your bandwidth needs. While one HDD might not saturate a 1 Gb
link, an SSD or an NVMe SSD certainly can. Modern NVMe SSDs will even saturate
10 Gb of bandwidth. You should also consider higher bandwidths, as these tend
to come with lower latency.
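
A quick back-of-the-envelope calculation (throughput figures are illustrative
assumptions) shows why:

----
4 SATA SSDs x ~500 MB/s = ~2 GB/s  = ~16 Gbit/s  (more than one 10 GbE link)
1 NVMe SSD  x ~3 GB/s             = ~24 Gbit/s  (more than two 10 GbE links)
----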

.Disks
When planning the size of your Ceph cluster, it is important to take the
recovery time into consideration. Especially with small clusters, recovery
might take a long time. It is recommended that you use SSDs instead of HDDs in
small setups to reduce recovery time, minimizing the likelihood of a subsequent
failure event during recovery.

In general, SSDs will provide more IOPS than spinning disks. This fact and the
higher cost may make a xref:pve_ceph_device_classes[class based] separation of
pools appealing. Another possibility to speed up OSDs is to use a faster disk
as journal or DB/WAL device, see xref:pve_ceph_osds[creating Ceph OSDs]. If a
faster disk is used for multiple OSDs, a proper balance between OSD and WAL /
DB (or journal) disk must be selected, otherwise the faster disk becomes the
bottleneck for all linked OSDs.
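
As a simple illustration of that balance (the throughput numbers are rough
assumptions, not measurements):

----
4 HDD OSDs x ~150 MB/s sustained writes = ~600 MB/s combined
A shared DB/WAL SSD sustaining less than that becomes the bottleneck
for all four OSDs.
----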

Aside from the disk type, Ceph performs best with an evenly sized and evenly
distributed number of disks per node. For example, 4x 500 GB disks in each node.

.Avoid RAID
As Ceph handles data object redundancy and multiple parallel writes to disks
(OSDs) on its own, using a RAID controller normally doesn't improve
performance or availability. On the contrary, Ceph is designed to handle whole
disks on its own, without any abstraction in between. RAID controllers are not
designed for the Ceph use case and may complicate things and sometimes even
reduce performance, as their write and caching algorithms may interfere with
the ones from Ceph.

WARNING: Avoid RAID controllers; use a host bus adapter (HBA) instead.

NOTE: The above recommendations should be seen as rough guidance for choosing
hardware. Therefore, it is still essential to test your setup and monitor
health & performance.


[[pve_ceph_install]]
Installation of Ceph Packages
-----------------------------

On each node run the installation script as follows:

[source,bash]
----
pveceph install
----

This sets up an `apt` package repository in
`/etc/apt/sources.list.d/ceph.list` and installs the required software.


Creating initial Ceph configuration
-----------------------------------

[thumbnail="screenshot/gui-ceph-config.png"]

After installation of the packages, you need to create an initial Ceph
configuration on just one node, based on your network (`10.10.10.0/24`
in the following example) dedicated to Ceph:

[source,bash]
----
pveceph init --network 10.10.10.0/24
----

This creates an initial configuration at `/etc/pve/ceph.conf`. That file is
automatically distributed to all {pve} nodes by using
xref:chapter_pmxcfs[pmxcfs]. The command also creates a symbolic link
from `/etc/ceph/ceph.conf` pointing to that file, so you can simply run
Ceph commands without the need to specify a configuration file.


[[pve_ceph_monitors]]
Creating Ceph Monitors
----------------------

[thumbnail="screenshot/gui-ceph-monitor.png"]

The Ceph Monitor (MON)
footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
maintains a master copy of the cluster map. For high availability you need at
least 3 monitors.

On each node where you want to place a monitor (three monitors are recommended),
create it by using the 'Ceph -> Monitor' tab in the GUI or run:

[source,bash]
----
pveceph createmon
----

This will also install the needed Ceph Manager ('ceph-mgr') by default. If you
do not want to install a manager, specify the '-exclude-manager' option.


[[pve_ceph_manager]]
Creating Ceph Manager
---------------------

The Manager daemon runs alongside the monitors, providing an interface for
monitoring the cluster. Since the Ceph Luminous release, the
ceph-mgr footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon
is required. During monitor installation, the Ceph Manager will be installed as
well.

NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
high availability install more than one manager.

[source,bash]
----
pveceph createmgr
----


[[pve_ceph_osds]]
Creating Ceph OSDs
------------------

[thumbnail="screenshot/gui-ceph-osd-status.png"]

Ceph OSDs can be created via the GUI or via the CLI as follows:

[source,bash]
----
pveceph createosd /dev/sd[X]
----

TIP: We recommend a Ceph cluster starting with 12 OSDs, distributed evenly
among your (at least three) nodes, i.e. 4 OSDs on each node.

If the disk was in use before (e.g. for ZFS, RAID or as an OSD), the following
commands should be sufficient to remove the partition table, boot sector and
any OSD leftovers.

[source,bash]
----
dd if=/dev/zero of=/dev/sd[X] bs=1M count=200
ceph-disk zap /dev/sd[X]
----

WARNING: The above commands will destroy data on the disk!

Ceph Bluestore
~~~~~~~~~~~~~~

Starting with the Ceph Kraken release, a new Ceph OSD storage type was
introduced, the so-called Bluestore
footnote:[Ceph Bluestore http://ceph.com/community/new-luminous-bluestore/].
This is the default when creating OSDs in Ceph Luminous.

[source,bash]
----
pveceph createosd /dev/sd[X]
----

NOTE: In order to select a disk in the GUI, to be more fail-safe, the disk needs
to have a GPT footnoteref:[GPT, GPT partition table
https://en.wikipedia.org/wiki/GUID_Partition_Table] partition table. You can
create this with `gdisk /dev/sd(x)`. If there is no GPT, you cannot select the
disk as DB/WAL.

If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-journal_dev' option. The WAL is placed with the DB, if not
specified separately.

[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
----

NOTE: The DB stores BlueStore's internal metadata and the WAL is BlueStore's
internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.


Ceph Filestore
~~~~~~~~~~~~~~
Until Ceph Luminous, Filestore was used as the storage type for Ceph OSDs. It
can still be used and might give better performance in small setups, when
backed by an NVMe SSD or similar.

[source,bash]
----
pveceph createosd /dev/sd[X] -bluestore 0
----

NOTE: In order to select a disk in the GUI, the disk needs to have a
GPT footnoteref:[GPT] partition table. You can
create this with `gdisk /dev/sd(x)`. If there is no GPT, you cannot select the
disk as journal. Currently the journal size is fixed to 5 GB.

If you want to use a dedicated SSD journal disk:

[source,bash]
----
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y] -bluestore 0
----

Example: Use /dev/sdf as the data disk (4 TB) and /dev/sdb as the dedicated SSD
journal disk.

[source,bash]
----
pveceph createosd /dev/sdf -journal_dev /dev/sdb -bluestore 0
----

This partitions the disk (data and journal partition), creates
filesystems and starts the OSD; afterwards it is running and fully
functional.

NOTE: This command refuses to initialize the disk when it detects existing
data. So if you want to overwrite a disk, you should remove existing data
first. You can do that using: 'ceph-disk zap /dev/sd[X]'

You can create OSDs containing both journal and data partitions or you
can place the journal on a dedicated SSD. Using an SSD journal disk is
highly recommended to achieve good performance.


[[pve_ceph_pools]]
Creating Ceph Pools
-------------------

[thumbnail="screenshot/gui-ceph-pools.png"]

A pool is a logical group for storing objects. It holds **P**lacement
**G**roups (`PG`, `pg_num`), a collection of objects.

When no options are given, we set a default of **128 PGs**, a **size of 3
replicas** and a **min_size of 2 replicas** for serving objects in a degraded
state.

NOTE: The default number of PGs works for 2-5 disks. Ceph throws a
'HEALTH_WARNING' if you have too few or too many PGs in your cluster.

It is advised to calculate the PG number based on your setup; you can find
the formula and the PG calculator footnote:[PG calculator
http://ceph.com/pgcalc/] online. While PGs can be increased later on, they can
never be decreased.
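
The commonly referenced rule of thumb behind that calculator targets roughly
100 PGs per OSD (treat the result only as a starting point and verify it with
the calculator):

----
pg_num ~= (number of OSDs * 100) / pool size
Example: 12 OSDs, size 3  ->  12 * 100 / 3 = 400  ->  next power of two: 512
----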

You can create pools through the command line or in the GUI on each PVE host
under **Ceph -> Pools**.

[source,bash]
----
pveceph createpool <name>
----

If you would also like to automatically get a storage definition for your pool,
activate the checkbox "Add storages" in the GUI or use the command line option
'--add_storages' on pool creation.
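
For example, a pool with an explicit replica count, PG number and an
automatically added storage definition could be created like this (check
`pveceph help createpool` for the exact option names on your version):

[source,bash]
----
pveceph createpool mypool --size 3 --min_size 2 --pg_num 128 --add_storages
----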

Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
http://docs.ceph.com/docs/luminous/rados/operations/pools/]
manual.

[[pve_ceph_device_classes]]
Ceph CRUSH & device classes
---------------------------
The foundation of Ceph is its algorithm, **C**ontrolled **R**eplication
**U**nder **S**calable **H**ashing
(CRUSH footnote:[CRUSH https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf]).

CRUSH calculates where to store and retrieve data from; this has the
advantage that no central index service is needed. CRUSH works with a map of
OSDs, buckets (device locations) and rulesets (data replication) for pools.

NOTE: Further information can be found in the Ceph documentation, under the
section CRUSH map footnote:[CRUSH map http://docs.ceph.com/docs/luminous/rados/operations/crush-map/].

This map can be altered to reflect different replication hierarchies. The object
replicas can be separated (e.g. failure domains), while maintaining the desired
distribution.

A common use case is to use different classes of disks for different Ceph pools.
For this reason, Ceph introduced device classes with Luminous, to
accommodate the need for easy ruleset generation.

The device classes can be seen in the 'ceph osd tree' output. These classes
represent their own root bucket, which can be seen with the command below.

[source, bash]
----
ceph osd crush tree --show-shadow
----

Example output from the above command:

[source, bash]
----
ID CLASS WEIGHT TYPE NAME
-16 nvme 2.18307 root default~nvme
-13 nvme 0.72769 host sumi1~nvme
 12 nvme 0.72769 osd.12
-14 nvme 0.72769 host sumi2~nvme
 13 nvme 0.72769 osd.13
-15 nvme 0.72769 host sumi3~nvme
 14 nvme 0.72769 osd.14
 -1 7.70544 root default
 -3 2.56848 host sumi1
 12 nvme 0.72769 osd.12
 -5 2.56848 host sumi2
 13 nvme 0.72769 osd.13
 -7 2.56848 host sumi3
 14 nvme 0.72769 osd.14
----

To let a pool distribute its objects only on a specific device class, you need
to create a ruleset for that specific class first.

[source, bash]
----
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
----

[frame="none",grid="none", align="left", cols="30%,70%"]
|===
|<rule-name>|name of the rule, to connect with a pool (seen in GUI & CLI)
|<root>|which crush root it should belong to (default ceph root "default")
|<failure-domain>|at which failure-domain the objects should be distributed (usually host)
|<class>|what type of OSD backing store to use (e.g. nvme, ssd, hdd)
|===

Once the rule is in the CRUSH map, you can tell a pool to use the ruleset.

[source, bash]
----
ceph osd pool set <pool-name> crush_rule <rule-name>
----
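
As a concrete illustration, assuming a pool named `mypool` that should only use
SSD-backed OSDs (the rule and pool names here are examples):

[source, bash]
----
ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool set mypool crush_rule ssd-only
----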

TIP: If the pool already contains objects, all of these have to be moved
accordingly. Depending on your setup, this may introduce a big performance hit
on your cluster. As an alternative, you can create a new pool and move disks
separately.


Ceph Client
-----------

[thumbnail="screenshot/gui-ceph-log.png"]

You can then configure {pve} to use such pools to store VM or
Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).

You also need to copy the keyring to a predefined location for an external Ceph
cluster. If Ceph is installed on the Proxmox nodes themselves, then this will
be done automatically.

NOTE: The file name needs to be `<storage_id>` + `.keyring` - `<storage_id>` is
the expression after 'rbd:' in `/etc/pve/storage.cfg`, which is
`my-ceph-storage` in the following example:

[source,bash]
----
mkdir /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
----
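
The matching entry in `/etc/pve/storage.cfg` could then look roughly like the
following sketch (pool name and options are illustrative; an external cluster
additionally needs the `monhost` list):

----
rbd: my-ceph-storage
        pool rbd
        content images,rootdir
        krbd 0
----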

[[pveceph_fs]]
CephFS
------

Ceph also provides a filesystem, running on top of the same object storage as
RADOS block devices do. A **M**eta**d**ata **S**erver (`MDS`) is used to map
the RADOS backed objects to files and directories, allowing it to provide a
POSIX-compliant replicated filesystem. This allows one to easily have a
clustered, highly available, shared filesystem if Ceph is already in use. Its
Metadata Servers guarantee that files get balanced out over the whole Ceph
cluster; this way, even high load will not overload a single host, which can be
an issue with traditional shared filesystem approaches, like `NFS`, for
example.

{pve} supports both using an existing xref:storage_cephfs[CephFS as storage]
to save backups, ISO files or container templates, and creating a
hyper-converged CephFS itself.


[[pveceph_fs_mds]]
Metadata Server (MDS)
~~~~~~~~~~~~~~~~~~~~~

CephFS needs at least one Metadata Server to be configured and running in order
to work. One can simply create one through the {pve} web GUI's `Node ->
CephFS` panel or on the command line with:

----
pveceph mds create
----

Multiple metadata servers can be created in a cluster, but with the default
settings only one can be active at a time. If an MDS, or its node, becomes
unresponsive (or crashes), another `standby` MDS will get promoted to `active`.
One can speed up the hand-over between the active and a standby MDS by using
the 'hotstandby' parameter option on create, or, if you have already created it,
you may set/add:

----
mds standby replay = true
----

in the respective MDS section of ceph.conf. With this enabled, this specific
MDS will always poll the active one, so that it can take over faster, as it is
in a `warm` state. But naturally, the active polling will cause some additional
performance impact on your system and the active `MDS`.
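
For example, creating the MDS with hot standby enabled directly could look like
this (check `pveceph help mds create` for the exact option name on your
version):

----
pveceph mds create --hotstandby
----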

Multiple Active MDS
^^^^^^^^^^^^^^^^^^^

Since Luminous (12.2.x) you can also have multiple active metadata servers
running, but this is normally only useful for a high number of parallel clients,
as otherwise the `MDS` is seldom the bottleneck. If you want to set this up,
please refer to the Ceph documentation. footnote:[Configuring multiple active
MDS daemons http://docs.ceph.com/docs/luminous/cephfs/multimds/]

[[pveceph_fs_create]]
Create a CephFS
~~~~~~~~~~~~~~~

With {pve}'s CephFS integration, you can create a CephFS easily over the
Web GUI, the CLI or an external API interface. Some prerequisites are required
for this to work:

.Prerequisites for a successful CephFS setup:
- xref:pve_ceph_install[Install Ceph packages] - if this was already done some
  time ago, you might want to rerun it on an up-to-date system to ensure that
  all CephFS related packages get installed.
- xref:pve_ceph_monitors[Setup Monitors]
- xref:pve_ceph_osds[Setup your OSDs]
- xref:pveceph_fs_mds[Setup at least one MDS]

After this is all checked and done, you can simply create a CephFS through
either the Web GUI's `Node -> CephFS` panel or the command line tool `pveceph`,
for example with:

----
pveceph fs create --pg_num 128 --add-storage
----

This creates a CephFS named `'cephfs'' using a pool for its data named
`'cephfs_data'' with `128` placement groups and a pool for its metadata named
`'cephfs_metadata'' with one quarter of the data pool's placement groups (`32`).
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding a fitting placement group
number (`pg_num`) for your setup footnote:[Ceph Placement Groups
http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/].
Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
storage configuration after it was created successfully.

Destroy CephFS
~~~~~~~~~~~~~~

WARNING: Destroying a CephFS will render all its data unusable; this cannot be
undone!

If you really want to destroy an existing CephFS, you first need to stop, or
destroy, all metadata servers (`MDS`). You can destroy them either over the Web
GUI or the command line interface, with:

----
pveceph mds destroy NAME
----
on each {pve} node hosting an MDS daemon.

Then, you can remove (destroy) the CephFS by issuing:

----
ceph fs rm NAME --yes-i-really-mean-it
----
on a single node hosting Ceph. After this you may want to remove the created
data and metadata pools; this can be done either over the Web GUI or the CLI
with:

----
pveceph pool destroy NAME
----


Ceph monitoring and troubleshooting
-----------------------------------
A good start is to continuously monitor the Ceph health from the start of the
initial deployment, either through the Ceph tools themselves or by accessing
the status through the {pve} link:api-viewer/index.html[API].
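
For example, the cluster status can be queried from the shell through the API
with `pvesh` (the endpoint path below reflects recent versions and should be
verified against the API viewer; replace the node name):

----
pvesh get /nodes/<nodename>/ceph/status
----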

The following Ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state, the status commands
below will also give you an overview of the current events and actions to take.

----
# single time output
pve# ceph -s
# continuously output status changes (press CTRL+C to stop)
pve# ceph -w
----

To get a more detailed view, every Ceph service has a log file under
`/var/log/ceph/`, and if there is not enough detail, the log level can be
adjusted footnote:[Ceph log and debugging http://docs.ceph.com/docs/luminous/rados/troubleshooting/log-and-debug/].

You can find more information about troubleshooting
footnote:[Ceph troubleshooting http://docs.ceph.com/docs/luminous/rados/troubleshooting/]
a Ceph cluster on its website.


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]