From: Stefan Reiter Date: Wed, 12 Jun 2019 13:06:32 +0000 (+0200) Subject: Fixed some wording and typos X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=commitdiff_plain;h=a22d7c24ebb8675393874f22c987b6895d174bbf Fixed some wording and typos Signed-off-by: Stefan Reiter --- diff --git a/pve-installation.adoc b/pve-installation.adoc index f15258e..691d236 100644 --- a/pve-installation.adoc +++ b/pve-installation.adoc @@ -104,7 +104,7 @@ you can choose disks there. Additionally you can set additional options (see [thumbnail="screenshot/pve-select-location.png", float="left"] -The next page just ask for basic configuration options like your +The next page just asks for basic configuration options like your location, the time zone and keyboard layout. The location is used to select a download server near you to speed up updates. The installer is usually able to auto detect those settings, so you only need to change diff --git a/pve-intro.adoc b/pve-intro.adoc index e4a8d99..7b236be 100644 --- a/pve-intro.adoc +++ b/pve-intro.adoc @@ -39,7 +39,7 @@ enables you to store the configuration of thousands of virtual machines. By using corosync, these files are replicated in real time on all cluster nodes. The file system stores all data inside a persistent database on disk, nonetheless, a copy of the data resides -in RAM which provides a maximum storage size is 30MB - more than +in RAM which provides a maximum storage size of 30MB - more than enough for thousands of VMs. + Proxmox VE is the only virtualization platform using this unique @@ -145,7 +145,7 @@ Flexible Networking Proxmox VE uses a bridged networking model. All VMs can share one bridge as if virtual network cables from each guest were all plugged into the same switch. For connecting VMs to the outside world, bridges -are attached to physical network cards assigned a TCP/IP +are attached to physical network cards and assigned a TCP/IP configuration. For further flexibility, VLANs (IEEE 802.1q) and network @@ -183,7 +183,7 @@ Open source software also helps to keep your costs low and makes your core infrastructure independent from a single vendor. -Your benefit with {pve} +Your benefits with {pve} ----------------------- * Open source software diff --git a/pve-network.adoc b/pve-network.adoc index c7ffa6c..b2dae97 100644 --- a/pve-network.adoc +++ b/pve-network.adoc @@ -5,14 +5,14 @@ ifdef::wiki[] :pve-toplevel: endif::wiki[] -Network configuration can be done either via the GUI, or by manually +Network configuration can be done either via the GUI, or by manually editing the file `/etc/network/interfaces`, which contains the whole network configuration. The `interfaces(5)` manual page contains the complete format description. All {pve} tools try hard to keep direct user modifications, but using the GUI is still preferable, because it protects you from errors. -Once the network is configured, you can use the Debian traditional tools `ifup` +Once the network is configured, you can use the Debian traditional tools `ifup` and `ifdown` commands to bring interfaces up and down. NOTE: {pve} does not write changes directly to @@ -68,16 +68,16 @@ For more information see https://www.freedesktop.org/wiki/Software/systemd/Predi Choosing a network configuration ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Depending on your current network organization and your resources you can +Depending on your current network organization and your resources you can choose either a bridged, routed, or masquerading networking setup. 
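To illustrate the `/etc/network/interfaces` format referenced in the pve-network.adoc hunks above, a minimal sketch of the default bridged setup could look like the following; this is only an illustration, and the interface name `eno1` as well as all addresses are placeholders rather than values taken from the patch:

----
# loopback interface
auto lo
iface lo inet loopback

# physical NIC, no address of its own; it is attached to the bridge below
iface eno1 inet manual

# default bridge created by the installer; guest virtual NICs attach here
auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
----

The traditional Debian tools mentioned above would then apply a change to that interface, for example with `ifdown vmbr0` followed by `ifup vmbr0`.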
{pve} server in a private LAN, using an external gateway to reach the internet ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -The *Bridged* model makes the most sense in this case, and this is also +The *Bridged* model makes the most sense in this case, and this is also the default mode on new {pve} installations. -Each of your Guest system will have a virtual interface attached to the -{pve} bridge. This is similar in effect to having the Guest network card +Each of your Guest system will have a virtual interface attached to the +{pve} bridge. This is similar in effect to having the Guest network card directly connected to a new switch on your LAN, the {pve} host playing the role of the switch. @@ -91,7 +91,7 @@ what your provider allows. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ In that case the only way to get outgoing network accesses for your guest -systems is to use *Masquerading*. For incoming network access to your guests, +systems is to use *Masquerading*. For incoming network access to your guests, you will need to configure *Port Forwarding*. For further flexibility, you can configure @@ -104,7 +104,7 @@ Default Configuration using a Bridge [thumbnail="default-network-setup-bridge.svg"] Bridges are like physical network switches implemented in software. -All VMs can share a single bridge, or you can create multiple bridges to +All VMs can share a single bridge, or you can create multiple bridges to separate network domains. Each host can have up to 4094 bridges. The installation program creates a single bridge named `vmbr0`, which @@ -275,7 +275,7 @@ network-peers use different MAC addresses for their network packet traffic. If your switch support the LACP (IEEE 802.3ad) protocol then we recommend using -the corresponding bonding mode (802.3ad). Otherwise you should generally use the +the corresponding bonding mode (802.3ad). Otherwise you should generally use the active-backup mode. + // http://lists.linux-ha.org/pipermail/linux-ha/2013-January/046295.html If you intend to run your cluster network on the bonding interfaces, then you @@ -366,25 +366,25 @@ VLAN for Guest Networks {pve} supports this setup out of the box. You can specify the VLAN tag when you create a VM. The VLAN tag is part of the guest network -configuration. The networking layer supports differnet modes to +configuration. The networking layer supports different modes to implement VLANs, depending on the bridge configuration: * *VLAN awareness on the Linux bridge:* In this case, each guest's virtual network card is assigned to a VLAN tag, which is transparently supported by the Linux bridge. -Trunk mode is also possible, but that makes the configuration +Trunk mode is also possible, but that makes configuration in the guest necessary. * *"traditional" VLAN on the Linux bridge:* In contrast to the VLAN awareness method, this method is not transparent and creates a VLAN device with associated bridge for each VLAN. -That is, if e.g. in our default network, a guest VLAN 5 is used -to create eno1.5 and vmbr0v5, which remains until rebooting. +That is, creating a guest on VLAN 5 for example, would create two +interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs. * *Open vSwitch VLAN:* This mode uses the OVS VLAN feature. -* *Guest configured VLAN:* +* *Guest configured VLAN:* VLANs are assigned inside the guest. In this case, the setup is completely done inside the guest and can not be influenced from the outside. 
The benefit is that you can use more than one VLAN on a diff --git a/pve-package-repos.adoc b/pve-package-repos.adoc index 556470f..06d1b2f 100644 --- a/pve-package-repos.adoc +++ b/pve-package-repos.adoc @@ -47,7 +47,7 @@ email about the available new packages. On the GUI, the change-log of each package can be viewed (if available), showing all details of the update. So you will never miss important security fixes. -Please note that and you need a valid subscription key to access this +Please note that you need a valid subscription key to access this repository. We offer different support levels, and you can find further details at https://www.proxmox.com/en/proxmox-ve/pricing. diff --git a/qm-pci-passthrough.adoc b/qm-pci-passthrough.adoc index a347e31..3895df4 100644 --- a/qm-pci-passthrough.adoc +++ b/qm-pci-passthrough.adoc @@ -33,7 +33,7 @@ Further, server grade hardware has often better support than consumer grade hardware, but even then, many modern system can support this. Please refer to your hardware vendor to check if they support this feature -under Linux for your specific setup +under Linux for your specific setup. Configuration @@ -295,7 +295,7 @@ vendor. Mediated Devices (vGPU, GVT-g) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Mediated devices are another method to use reuse features and performance from +Mediated devices are another method to reuse features and performance from physical hardware for virtualized hardware. These are found most common in virtualized GPU setups such as Intels GVT-g and Nvidias vGPUs used in their GRID technology. @@ -309,12 +309,12 @@ Host Configuration ^^^^^^^^^^^^^^^^^^ In general your card's driver must support that feature, otherwise it will -not work. So please refer to your vendor for compatbile drivers and how to +not work. So please refer to your vendor for compatible drivers and how to configure them. -Intels drivers for GVT-g are integraded in the Kernel and should work -with the 5th, 6th and 7th generation Intel Core Processors, further E3 v4, E3 -v5 and E3 v6 Xeon Processors are supported. +Intels drivers for GVT-g are integrated in the Kernel and should work +with 5th, 6th and 7th generation Intel Core Processors, as well as E3 v4, E3 +v5 and E3 v6 Xeon Processors. To enable it for Intel Graphcs, you have to make sure to load the module 'kvmgt' (for example via `/etc/modules`) and to enable it on the Kernel
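To illustrate the *Masquerading* and *Port Forwarding* setup referenced in the pve-network.adoc hunks above, a rough sketch of such a configuration could look as follows; interface names, addresses and the private subnet are placeholders, and the iptables rules are only one possible way to do it:

----
# public interface of the host
auto eno1
iface eno1 inet static
        address 198.51.100.5
        netmask 255.255.255.0
        gateway 198.51.100.1

# private bridge for the guests, no physical port attached
auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        # masquerade guest traffic behind the host's public address
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
----

Incoming access to a guest would then need an additional DNAT rule (port forwarding), for example `iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22`.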
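The bonding recommendation above (802.3ad if the switch supports LACP, otherwise active-backup) could translate into a configuration like the following sketch, again with placeholder interface names and addresses:

----
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        # use 802.3ad only if the switch is configured for LACP,
        # otherwise fall back to active-backup
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

# the bond is then used as the bridge port instead of a single NIC
auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----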
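For the "VLAN awareness on the Linux bridge" mode described in the VLAN hunk, a sketch of the bridge configuration and of assigning the tag to a guest could look as follows; the VM ID 100 and VLAN tag 5 are examples only:

----
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet manual
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
        # let the bridge itself handle the VLAN tagging
        bridge_vlan_aware yes
----

The tag is then simply part of the guest network configuration, for example `qm set 100 -net0 virtio,bridge=vmbr0,tag=5`, so no per-VLAN devices such as `eno1.5` and `vmbr0v5` have to be created as in the "traditional" mode.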
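Finally, for the mediated device host configuration at the end of the qm-pci-passthrough.adoc hunk, loading 'kvmgt' at boot might look like the sketch below; the `i915.enable_gvt=1` kernel parameter is an assumption based on common Intel GVT-g setups and is not part of the patch text itself:

----
# /etc/modules - load the mediated device module at boot
kvmgt

# /etc/default/grub - enable GVT-g in the i915 driver (assumed parameter)
GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.enable_gvt=1"
----

After changing the kernel command line, `update-grub` and a reboot are needed before the mediated devices become available.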