values into human-readable text, for example:
- Unix epoch is displayed as an ISO 8601 date string.
-- Durations are displayed as week/day/hour/miniute/secound count, i.e `1d 5h`.
+- Durations are displayed as week/day/hour/minute/second count, e.g. `1d 5h`.
- Byte sizes value include units (`B`, `KiB`, `MiB`, `GiB`, `TiB`, `PiB`).
- Fractions are displayed as a percentage, e.g. 1.0 is displayed as 100%.
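These display conventions can be illustrated with a minimal Python sketch; the function names are illustrative only and are not part of Proxmox VE:

```python
def format_bytes(n):
    """Render a byte count with binary units (B, KiB, MiB, GiB, TiB, PiB)."""
    units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB"]
    value = float(n)
    i = 0
    while value >= 1024 and i < len(units) - 1:
        value /= 1024
        i += 1
    return f"{value:g} {units[i]}"

def format_duration(seconds):
    """Render a second count as week/day/hour/minute/second parts, e.g. '1d 5h'."""
    parts = []
    for label, size in (("w", 604800), ("d", 86400), ("h", 3600), ("m", 60), ("s", 1)):
        amount, seconds = divmod(seconds, size)
        if amount:
            parts.append(f"{amount}{label}")
    return " ".join(parts) or "0s"

def format_fraction(f):
    """Render a fraction as a percentage, so 1.0 becomes '100%'."""
    return f"{f * 100:g}%"
```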
|===========================================================
[horizontal]
-'Ceph':: Ceph Storage Cluster traffic (Ceph Monitors, OSD & MDS Deamons)
+'Ceph':: Ceph Storage Cluster traffic (Ceph Monitors, OSD & MDS Daemons)
[width="100%",options="header"]
|===========================================================
This drops or rejects all the traffic to the VMs, with some exceptions for
DHCP, NDP, Router Advertisement, MAC and IP filtering, depending on the set
configuration. The same rules for dropping/rejecting packets are inherited
-from the datacenter, while the exceptions for accepted incomming/outgoing
+from the datacenter, while the exceptions for accepted incoming/outgoing
traffic of the host do not apply.
Again, you can use xref:pve_firewall_iptables_inspect[iptables-save (see above)]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The installer creates the ZFS pool `rpool`. No swap space is created but you can
reserve some unpartitioned space on the install disks for swap. You can also
-create a swap zvol after the installation, altough this can lead to problems.
+create a swap zvol after the installation, although this can lead to problems.
(see <<zfs_swap,ZFS swap notes>>).
`ashift`::
`hdsize`::
Defines the total hard disk size to be used. This is useful to save free space
-on the hard disk(s) for further partitioning (for exmaple to create a
+on the hard disk(s) for further partitioning (for example to create a
swap-partition). `hdsize` is only honored for bootable disks, that is, only the
first disk or mirror for RAID0, RAID1 or RAID10, and all disks in RAID-Z[123].
Further, estimate your bandwidth needs. While one HDD might not saturate a 1 Gb
link, multiple HDD OSDs per node can, and modern NVMe SSDs will even saturate
-10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwith
+10 Gbps of bandwidth quickly. Deploying a network capable of even more bandwidth
will ensure that the network is not your bottleneck, and will not be anytime
soon; 25, 40 or even 100 Gbps are possible.
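The sizing above can be checked with a quick back-of-the-envelope calculation. The per-OSD throughput figure below is a rough assumption, not a measurement:

```python
def osds_to_saturate(link_gbps, osd_mb_per_s):
    """How many OSDs at a given sequential throughput fill a link."""
    link_mb_per_s = link_gbps * 1000 / 8  # gigabits/s -> megabytes/s
    return link_mb_per_s / osd_mb_per_s

# Assuming roughly 200 MB/s sequential throughput per HDD OSD (an assumption),
# about six such OSDs already saturate a 10 Gbps link:
print(osds_to_saturate(10, 200))  # 6.25
```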
----
You can directly choose the size for those with the '-db_size' and '-wal_size'
-paremeters respectively. If they are not given the following values (in order)
+parameters, respectively. If they are not given, the following values (in order)
will be used:
* bluestore_block_{db,wal}_size from ceph configuration...
which may lead to a situation where an address is changed without thinking
about implications for corosync.
-A seperate, static hostname specifically for corosync is recommended, if
+A separate, static hostname specifically for corosync is recommended, if
hostnames are preferred. Also, make sure that every node in the cluster can
resolve all hostnames correctly.
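One way to provide such a separate, static corosync hostname is a dedicated entry in `/etc/hosts` on every node. The names and addresses below are placeholders:

```
# /etc/hosts (identical on every node); names and addresses are examples only
10.10.10.1 corosync1.example.invalid corosync1
10.10.10.2 corosync2.example.invalid corosync2
10.10.10.3 corosync3.example.invalid corosync3
```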
Nodes that joined the cluster on earlier versions likely still use their
unresolved hostname in `corosync.conf`. It might be a good idea to replace
-them with IPs or a seperate hostname, as mentioned above.
+them with IPs or a separate hostname, as mentioned above.
[[pvecm_redundancy]]
Links are used according to a priority setting. You can configure this priority
by setting 'knet_link_priority' in the corresponding interface section in
-`corosync.conf`, or, preferrably, using the 'priority' parameter when creating
+`corosync.conf`, or, preferably, using the 'priority' parameter when creating
your cluster with `pvecm`:
----
QDevice Technical Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~
-The Corosync Quroum Device (QDevice) is a daemon which runs on each cluster
+The Corosync Quorum Device (QDevice) is a daemon which runs on each cluster
node. It provides a configured number of votes to the cluster's quorum
subsystem, based on the decision of an externally running third-party arbitrator.
Its primary use is to allow a cluster to sustain more node failures than
----
You can also configure all the Cloud-Init options using a single command
-only. We have simply splitted the above example to separate the
+only. We have simply split the above example to separate the
commands for reducing the line length. Also make sure to adapt the IP
setup for your specific environment.
default https://en.wikipedia.org/wiki/Intel_440FX[Intel 440FX] or the
https://ark.intel.com/content/www/us/en/ark/products/31918/intel-82q35-graphics-and-memory-controller.html[Q35]
chipset, which also provides a virtual PCIe bus, and thus may be desired if
-one want's to pass through PCIe hardware.
+one wants to pass through PCIe hardware.
[[qm_hard_disk]]
Hard Disk