To simplify management, we provide 'pveceph' - a tool to install and
manage {ceph} services on {pve} nodes.
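For example, the Ceph packages can be installed and an initial configuration written directly from the shell. A minimal sketch, assuming a dedicated cluster network; the subnet below is a placeholder, not a value taken from this text:

[source,bash]
----
# install the Ceph packages on this node
pveceph install
# write the initial Ceph configuration (placeholder subnet for the cluster network)
pveceph init --network 10.10.10.0/24
----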
-.Ceph consists of a couple of Daemons footnote:[Ceph intro http://docs.ceph.com/docs/master/start/intro/], for use as a RBD storage:
+.Ceph consists of a couple of Daemons footnote:[Ceph intro http://docs.ceph.com/docs/luminous/start/intro/], for use as RBD storage:
- Ceph Monitor (ceph-mon)
- Ceph Manager (ceph-mgr)
- Ceph OSD (ceph-osd; Object Storage Daemon)
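Each of these daemons runs as its own systemd service on the node. As a quick sanity check, assuming the monitor id matches the node name, their state can be queried directly:

[source,bash]
----
# overall cluster state, including monitors, managers and OSDs
ceph -s
# status of a single daemon, here the monitor (assuming its id is the node name)
systemctl status ceph-mon@$(hostname).service
----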
[source,bash]
----
pveceph createosd /dev/sd[X]
----
-NOTE: In order to select a disk in the GUI, to be more failsafe, the disk needs
+NOTE: To make disk selection in the GUI more fail-safe, the disk needs
to have a GPT footnoteref:[GPT, GPT partition table
https://en.wikipedia.org/wiki/GUID_Partition_Table] partition table. You can
create this with `gdisk /dev/sd(x)`. If there is no GPT, you cannot select the
disk as DB/WAL.
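As a hedged alternative to the interactive `gdisk` run, a fresh GPT can also be written non-interactively; the device name is a placeholder and the command discards any existing partition table:

[source,bash]
----
# destroys existing partition definitions and writes a fresh, empty GPT (placeholder device)
sgdisk --clear /dev/sd[X]
----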
----
NOTE: The DB stores BlueStore’s internal metadata and the WAL is BlueStore’s
-internal journal or write-ahead log. It is recommended to use a fast SSDs or
+internal journal or write-ahead log. It is recommended to use a fast SSD or
NVRAM for better performance.
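Building on that, a sketch of placing the DB (and thus the WAL) on a faster device when creating the OSD; the '-journal_dev' option and both device names are assumptions here, so verify them against the pveceph man page (`man pveceph`) first:

[source,bash]
----
# OSD data on /dev/sd[X], DB/WAL on the assumed faster /dev/sd[Y]
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
----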
~~~~~~~~~~~~~
Until Ceph Luminous, Filestore was used as the storage type for Ceph OSDs. It
can still be used and might give better performance in small setups, when backed by
-a NVMe SSD or similar.
+an NVMe SSD or similar.
[source,bash]
----
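# Hedged sketch, not taken from this text: the '-bluestore 0' switch is assumed
# to request a Filestore OSD instead of the BlueStore default; verify against
# `man pveceph` before use.
pveceph createosd /dev/sd[X] -bluestore 0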
running, but this is normally only useful for a high number of parallel clients,
as otherwise the `MDS` is seldom the bottleneck. If you want to set this up,
please refer to the Ceph documentation. footnote:[Configuring multiple active MDS
-daemons http://docs.ceph.com/docs/mimic/cephfs/multimds/]
+daemons http://docs.ceph.com/docs/luminous/cephfs/multimds/]
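As a hedged pointer in that direction, upstream exposes the number of active MDS daemons through the `max_mds` setting on the file system; the file system name below is an assumption:

[source,bash]
----
# allow up to two active MDS daemons for the (assumed) file system name 'cephfs'
ceph fs set cephfs max_mds 2
----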
[[pveceph_fs_create]]
Create a CephFS
~~~~~~~~~~~~~~~
Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
Ceph documentation for more information regarding a fitting placement group
number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-http://docs.ceph.com/docs/mimic/rados/operations/placement-groups/].
+http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/].
Additionally, the `--add-storage` parameter will add the CephFS to the {pve}
storage configuration after it has been created successfully.
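Putting this together, a CephFS could be created from the command line roughly as sketched below; the chosen `pg_num` and the exact invocation are assumptions to be checked against `man pveceph` and the placement group guidance linked above:

[source,bash]
----
# rough upstream rule of thumb: pg_num is about (OSDs * 100) / replicas, rounded up to a power of two
# create the CephFS with 128 placement groups and register it as {pve} storage
pveceph fs create --pg_num 128 --add-storage
----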