To simplify management, we provide 'pveceph', a tool to install and
manage {ceph} services on {pve} nodes.
-.Ceph consists of a couple of Daemons footnote:[Ceph intro http://docs.ceph.com/docs/master/start/intro/], for use as a RBD storage:
+.Ceph consists of a couple of Daemons footnote:[Ceph intro http://docs.ceph.com/docs/luminous/start/intro/], for use as an RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)
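On a hyper-converged {pve} node, these services are typically bootstrapped with
`pveceph` before any OSDs are created. A minimal sketch, where the cluster
network is only an example and the exact subcommands should be checked with
`pveceph help` on your release:

[source,bash]
----
# install the Ceph packages on this node
pveceph install

# create the initial Ceph configuration, using a dedicated cluster network
pveceph init --network 10.10.10.0/24

# create a monitor on this node
pveceph createmon
----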
pveceph createosd /dev/sd[X]
----
-NOTE: In order to select a disk in the GUI, to be more failsafe, the disk needs
+NOTE: In order to select a disk in the GUI, to be more fail-safe, the disk needs
to have a GPT footnoteref:[GPT, GPT partition table
https://en.wikipedia.org/wiki/GUID_Partition_Table] partition table. You can
create this with `gdisk /dev/sd(x)`. If there is no GPT, you cannot select the
disk as DB/WAL.
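If you prefer to prepare the disk non-interactively, the `sgdisk` tool (shipped
in the same package as `gdisk`) can write an empty GPT in one step; a sketch,
with the device name as a placeholder:

[source,bash]
----
# wipe any existing partition definitions and write a new, empty GPT
sgdisk -o /dev/sd[X]
----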
----
NOTE: The DB stores BlueStore’s internal metadata and the WAL is BlueStore’s
-internal journal or write-ahead log. It is recommended to use a fast SSDs or
+internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.
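If a separate DB/WAL device should be used, it can be passed when creating the
OSD; a sketch assuming the `-journal_dev` option of `pveceph createosd` (device
names are placeholders, verify the option name with `pveceph help createosd`):

[source,bash]
----
# create an OSD on /dev/sd[X], placing the BlueStore DB (and WAL) on /dev/sd[Y]
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
----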
~~~~~~~~~~~~~
Until Ceph Luminous, Filestore was used as the storage type for Ceph OSDs. It can
still be used and might give better performance in small setups, when backed by
-a NVMe SSD or similar.
+an NVMe SSD or similar.
[source,bash]
----
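# assumed invocation for creating a Filestore (non-BlueStore) OSD; verify the
# option name with 'pveceph help createosd' on your release
pveceph createosd /dev/sd[X] -bluestore 0
----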
highly available shared filesystem in an easy way if Ceph is already used. Its
Metadata Servers guarantee that files get balanced out over the whole Ceph
cluster; this way, even high load will not overload a single host, which can be
-be an issue with traditional shared filesystem approaches, like `NFS`, for
+an issue with traditional shared filesystem approaches, like `NFS`, for
example.
{pve} supports both, using an existing xref:storage_cephfs[CephFS as storage]
in the respective MDS section of `ceph.conf`. With this enabled, this specific MDS
will always poll the active one, so that it can take over faster as it is in a
-`warm' state. But naturally, the active polling will cause some additional
+`warm` state. But naturally, the active polling will cause some additional
performance impact on your system and active `MDS`.
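As a sketch of what such an entry could look like, where the MDS id and the
exact placement of the option are assumptions (consult the Ceph documentation
for your release):

----
[mds.mds-node1]
mds standby replay = true
----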
Multiple Active MDS
running, but this is normally only useful for a high number of parallel clients,
as otherwise the `MDS` is seldom the bottleneck. If you want to set this up, please
refer to the ceph documentation. footnote:[Configuring multiple active MDS
-daemons http://docs.ceph.com/docs/mimic/cephfs/multimds/]
+daemons http://docs.ceph.com/docs/luminous/cephfs/multimds/]
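The core step boils down to a single command; a sketch, with an example
filesystem name and rank count:

[source,bash]
----
# allow two active MDS daemons for the filesystem named 'cephfs'
ceph fs set cephfs max_mds 2
----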
[[pveceph_fs_create]]
Create a CephFS
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding a fitting placement group
number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-http://docs.ceph.com/docs/mimic/rados/operations/placement-groups/].
+http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/].
Additionally, the `--add-storage` parameter will add the CephFS to the {pve}
storage configuration after it was created successfully.
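Putting this together, creating the CephFS from the command line could look
like the following sketch (the `pg_num` value is only an example; check the
subcommand and option names against `pveceph help` on your release):

[source,bash]
----
# create the default CephFS and add it to the {pve} storage configuration
pveceph fs create --pg_num 128 --add-storage
----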
Destroy CephFS
~~~~~~~~~~~~~~
-WARN: Destroying a CephFS will render all its data unusable, this cannot be
+WARNING: Destroying a CephFS will render all its data unusable. This cannot be
undone!
If you really want to destroy an existing CephFS, you first need to stop or
destroy all Metadata Servers (`MDS`).
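One possible way to stop a metadata server on a node is via its systemd unit;
a sketch, assuming the MDS id matches the node name:

[source,bash]
----
# stop the local metadata server (the unit name depends on the MDS id)
systemctl stop ceph-mds@$(hostname -s).service
----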
Then, you can remove (destroy) the CephFS by issuing:
----
-ceph rm fs NAME --yes-i-really-mean-it
+ceph fs rm NAME --yes-i-really-mean-it
----
on a single node hosting Ceph. After this, you may want to remove the created
data and metadata pools; this can be done either over the Web GUI or the CLI.
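On the CLI this could be done with the plain Ceph tooling; a sketch, assuming
example pool names and that pool deletion is allowed by the monitors
(`mon_allow_pool_delete`):

[source,bash]
----
# remove the CephFS data and metadata pools (pool names are examples)
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
----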