footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
maintains a master copy of the cluster map. For high availability you need to
have at least 3 monitors. One monitor will already be installed if you
-used the installation wizard. You wont need more than 3 monitors as long
+used the installation wizard. You won't need more than 3 monitors as long
-as your cluster is small to midsize, only really large clusters will
+as your cluster is small to midsize; only really large clusters will
need more than that.
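+
+A minimal sketch of adding another monitor from the command line, assuming the
+`pveceph createmon` subcommand that matches the `createosd`/`createpool` style
+used in this chapter; run it on each node that should host a monitor:
+
+----
+# creates a monitor on the node the command is run on
+pveceph createmon
+----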
pveceph createosd /dev/sd[X]
----
-Block.db and block.wal
-^^^^^^^^^^^^^^^^^^^^^^
+.Block.db and block.wal
If you want to use a separate DB/WAL device for your OSDs, you can specify it
through the '-db_dev' and '-wal_dev' options. The WAL is placed with the DB, if not
pveceph createpool <name>
----
-If you would like to automatically get also a storage definition for your pool,
-active the checkbox "Add storages" on the GUI or use the command line option
-'--add_storages' on pool creation.
+If you would like to automatically also get a storage definition for your pool,
+mark the checkbox "Add storages" in the GUI or use the command line option
+'--add_storages' at pool creation.
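+
+For example, to create a pool together with its storage definition in one step
+(the pool name `mypool` is only a placeholder):
+
+----
+# create the pool and register it as a storage in one go
+pveceph createpool mypool --add_storages
+----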
Further information on Ceph pool handling can be found in the Ceph pool
operation footnote:[Ceph pool operation
-Container images. Simply use the GUI too add a new `RBD` storage (see
+Container images. Simply use the GUI to add a new `RBD` storage (see
section xref:ceph_rados_block_devices[Ceph RADOS Block Devices (RBD)]).
-You also need to copy the keyring to a predefined location for a external Ceph
+You also need to copy the keyring to a predefined location for an external Ceph
-cluster. If Ceph is installed on the Proxmox nodes itself, then this will be
+cluster. If Ceph is installed on the Proxmox nodes themselves, then this will be
done automatically.
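+
+A minimal sketch for an external cluster, assuming the predefined location is
+`/etc/pve/priv/ceph/<storage_id>.keyring` and that the storage was added as
+`ceph-external` (the storage name and source host are placeholders):
+
+----
+# make sure the target directory exists, then fetch the admin keyring
+mkdir -p /etc/pve/priv/ceph
+scp <external-node>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph-external.keyring
+----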
an issue with traditional shared filesystem approaches, like `NFS`, for
example.
+[thumbnail="screenshot/gui-node-ceph-cephfs-panel.png"]
+
-{pve} supports both, using an existing xref:storage_cephfs[CephFS as storage]
-to save backups, ISO files or container templates and creating a
+{pve} supports both using an existing xref:storage_cephfs[CephFS as storage]
+to save backups, ISO files or container templates, and creating a
hyper-converged CephFS itself.
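+
+A minimal sketch of the hyper-converged route from the command line, assuming
+the `pveceph mds create` and `pveceph fs create` subcommands; the placement
+group count and storage option are example values and may differ per version:
+
+----
+# at least one metadata server is required before a CephFS can be created
+pveceph mds create
+pveceph fs create --pg_num 128 --add-storage
+----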
`warm` state. But naturally, the active polling will cause some additional
performance impact on your system and active `MDS`.
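+
+A minimal sketch of keeping a standby `MDS` in that `warm` state, assuming the
+Luminous-style `mds standby replay` setting in the respective MDS section of
+`ceph.conf` (the section name is a placeholder):
+
+----
+# keeps this standby daemon's cache warm by following the active MDS
+[mds.<name>]
+mds standby replay = true
+----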
-Multiple Active MDS
-^^^^^^^^^^^^^^^^^^^
+.Multiple Active MDS
Since Luminous (12.2.x) you can also have multiple active metadata servers
-running, but this is normally only useful for a high count on parallel clients,
+running, but this is normally only useful for a high number of parallel clients,
undone!
If you really want to destroy an existing CephFS you first need to stop, or
-destroy, all metadata server (`M̀DS`). You can destroy them either over the Web
+destroy, all metadata servers (`MDS`). You can destroy them either over the Web
GUI or the command line interface, with:
----
-The following ceph commands below can be used to see if the cluster is healthy
+The following ceph commands can be used to see if the cluster is healthy
('HEALTH_OK'), if there are warnings ('HEALTH_WARN'), or even errors
('HEALTH_ERR'). If the cluster is in an unhealthy state the status commands
-below will also give you an overview on the current events and actions take.
+below will also give you an overview of the current events and actions to take.
----
# single time output
You can find more information about troubleshooting
footnote:[Ceph troubleshooting http://docs.ceph.com/docs/luminous/rados/troubleshooting/]
-a Ceph cluster on its website.
+a Ceph cluster on the official website.
ifdef::manvolnum[]