X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=pveceph.adoc;h=0ad89d4f0b5419142466097a74f17bdacdd9c743;hp=601520616efd59c8d3b9071f2eb10f19c5909cf0;hb=7d6078845fa6a3bd308c7dc843273e56be33f315;hpb=90682f35982513fdecf9109cd15235fa982413fc

diff --git a/pveceph.adoc b/pveceph.adoc
index 6015206..0ad89d4 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -58,7 +58,7 @@ and VMs on the same node is possible.
 To simplify management, we provide 'pveceph' - a tool to install and
 manage {ceph} services on {pve} nodes.
 
-.Ceph consists of a couple of Daemons footnote:[Ceph intro http://docs.ceph.com/docs/master/start/intro/], for use as a RBD storage:
+.Ceph consists of a couple of Daemons footnote:[Ceph intro http://docs.ceph.com/docs/luminous/start/intro/], for use as a RBD storage:
 - Ceph Monitor (ceph-mon)
 - Ceph Manager (ceph-mgr)
 - Ceph OSD (ceph-osd; Object Storage Daemon)
@@ -211,7 +211,7 @@ This is the default when creating OSDs in Ceph luminous.
 pveceph createosd /dev/sd[X]
 ----
 
-NOTE: In order to select a disk in the GUI, to be more failsafe, the disk needs
+NOTE: In order to select a disk in the GUI, to be more fail-safe, the disk needs
 to have a GPT footnoteref:[GPT, GPT partition table
 https://en.wikipedia.org/wiki/GUID_Partition_Table] partition table. You can
 create this with `gdisk /dev/sd(x)`. If there is no GPT, you cannot select the
@@ -227,7 +227,7 @@ pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
 ----
 
 NOTE: The DB stores BlueStore’s internal metadata and the WAL is BlueStore’s
-internal journal or write-ahead log. It is recommended to use a fast SSDs or
+internal journal or write-ahead log. It is recommended to use a fast SSD or
 NVRAM for better performance.
 
 
@@ -235,7 +235,7 @@ Ceph Filestore
 ~~~~~~~~~~~~~
 Till Ceph luminous, Filestore was used as storage type for Ceph OSDs. It can
 still be used and might give better performance in small setups, when backed by
-a NVMe SSD or similar.
+an NVMe SSD or similar.
 
 [source,bash]
 ----
@@ -427,7 +427,7 @@ POSIX-compliant replicated filesystem. This allows one to have a clustered
 highly available shared filesystem in an easy way if ceph is already used. Its
 Metadata Servers guarantee that files get balanced out over the whole Ceph
 cluster, this way even high load will not overload a single host, which can be
-be an issue with traditional shared filesystem approaches, like `NFS`, for
+an issue with traditional shared filesystem approaches, like `NFS`, for
 example.
 
 {pve} supports both, using an existing xref:storage_cephfs[CephFS as storage])
@@ -460,7 +460,7 @@ mds standby replay = true
 
 in the ceph.conf respective MDS section. With this enabled, this specific MDS
 will always poll the active one, so that it can take over faster as it is in a
-`warm' state. But naturally, the active polling will cause some additional
+`warm` state. But naturally, the active polling will cause some additional
 performance impact on your system and active `MDS`.
 
 Multiple Active MDS
@@ -470,7 +470,7 @@ Since Luminous (12.2.x) you can also have multiple active metadata servers
 running, but this is normally only useful for a high count on parallel clients,
 as else the `MDS` seldom is the bottleneck. If you want to set this up please
 refer to the ceph documentation. footnote:[Configuring multiple active MDS
-daemons http://docs.ceph.com/docs/mimic/cephfs/multimds/]
+daemons http://docs.ceph.com/docs/luminous/cephfs/multimds/]
 
 [[pveceph_fs_create]]
 Create a CephFS
@@ -502,14 +502,14 @@ This creates a CephFS named `'cephfs'' using a pool for its data named
 Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
 Ceph documentation for more information regarding a fitting placement group
 number (`pg_num`) for your setup footnote:[Ceph Placement Groups
-http://docs.ceph.com/docs/mimic/rados/operations/placement-groups/].
+http://docs.ceph.com/docs/luminous/rados/operations/placement-groups/].
 
 Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
 storage configuration after it was created successfully.
 
 Destroy CephFS
 ~~~~~~~~~~~~~~
 
-WARN: Destroying a CephFS will render all its data unusable, this cannot be
+WARNING: Destroying a CephFS will render all its data unusable, this cannot be
 undone!
 If you really want to destroy an existing CephFS you first need to stop, or
@@ -524,7 +524,7 @@ on each {pve} node hosting a MDS daemon.
 Then, you can remove (destroy) CephFS by issuing a:
 
 ----
-ceph rm fs NAME --yes-i-really-mean-it
+ceph fs rm NAME --yes-i-really-mean-it
 ----
 on a single node hosting Ceph. After this you may want to remove the created
 data and metadata pools, this can be done either over the Web GUI or the CLI
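
The placement group number (`pg_num`) that the patched footnote points to follows a common rule of thumb from the Ceph placement-groups documentation: target roughly 100 PGs per OSD, divide by the pool's replica count, and round up to the next power of two. A minimal sketch of that calculation — the helper name and defaults are illustrative, not part of `pveceph` or Ceph itself:

```python
def suggested_pg_num(osd_count, pool_size, target_pgs_per_osd=100):
    """Rule-of-thumb pg_num: (OSDs * target PGs per OSD) / replica count,
    rounded up to the next power of two. Illustrative helper only."""
    raw = osd_count * target_pgs_per_osd / pool_size
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# e.g. a small cluster with 12 OSDs and 3-way replication:
# 12 * 100 / 3 = 400, next power of two is 512
print(suggested_pg_num(osd_count=12, pool_size=3))  # -> 512
```

Note that `pg_num` can be increased later but (on Luminous) not decreased, so erring low and growing the pool is the safer direction.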