After configuring the desired domain(s) for a node and ensuring that the
desired ACME account is selected, you can order your new certificate over the
-web-interface. On success the interface will reload after 10 seconds.
+web interface. On success the interface will reload after 10 seconds.
Renewal will happen xref:sysadmin_certs_acme_automatic_renewal[automatically].
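The same can be done on the command line; a minimal sketch, assuming the node's ACME account and domains are already configured:
----
pvenode acme cert order
----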
Adding a BTRFS file system to {pve}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-You can add an existing BTRFS file system to {pve} via the web-interface, or
+You can add an existing BTRFS file system to {pve} via the web interface, or
using the CLI, for example:
----
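# a sketch, assuming an existing BTRFS file system mounted at /mnt/data/btrfs;
# the storage ID 'mybtrfs' is only an example
pvesm add btrfs mybtrfs --path /mnt/data/btrfs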
it to structurally valid XHTML (or HTML).
____
-The {pve} web-interface has support for using Markdown to rendering rich text
+The {pve} web interface has support for using Markdown to render rich text
formatting in node and virtual guest notes.
{pve} supports CommonMark with most extensions of GFM (GitHub Flavoured Markdown),
defined for the standard rules in *Firewall* -> *Options*.
While the `loglevel` for each individual rule can be defined or changed easily
-in the WebUI during creation or modification of the rule, it is possible to set
+in the web UI during creation or modification of the rule, it is possible to set
this also via the corresponding `pvesh` API calls.
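For example, a sketch of changing the log level of an existing rule through the API (the node name `pve1`, VM ID `100`, and rule position `0` below are placeholders):
----
pvesh set nodes/pve1/qemu/100/firewall/rules/0 --log info
----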
Further, the log-level can also be set via the firewall configuration file by
The CIFS backend extends the directory backend, so that no manual
setup of a CIFS mount is needed. Such a storage can be added directly
-through the {pve} API or the WebUI, with all our backend advantages,
+through the {pve} API or the web UI, with all our backend advantages,
such as the server heartbeat check or convenient selection of exported
shares.
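As a sketch, adding such a storage on the command line could look like this (server address, share name, user name and storage ID are placeholders):
----
pvesm scan cifs 192.0.2.10
pvesm add cifs mycifs --server 192.0.2.10 --share backup --username backupuser
----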
share::
CIFS share to use (get available ones with `pvesm scan cifs <address>` or the
-WebUI). Required.
+web UI). Required.
username::
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Alternatively to the recommended {pve} Ceph installation wizard available
-in the web-interface, you can use the following CLI command on each node:
+in the web interface, you can use the following CLI command on each node:
[source,bash]
----
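# a sketch; installs the Ceph packages on the node this is run on
pveceph install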
Create OSDs
~~~~~~~~~~~
-You can create an OSD either via the {pve} web-interface or via the CLI using
+You can create an OSD either via the {pve} web interface or via the CLI using
`pveceph`. For example:
[source,bash]
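----
# a sketch; replace /dev/sdX with an unused disk on this node
pveceph osd create /dev/sdX
----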
Create and Edit Pools
~~~~~~~~~~~~~~~~~~~~~
-You can create and edit pools from the command line or the web-interface of any
+You can create and edit pools from the command line or the web interface of any
{pve} host under **Ceph -> Pools**.
When no options are given, we set a default of **128 PGs**, a **size of 3
----
TIP: If you would also like to automatically define a storage for your
-pool, keep the `Add as Storage' checkbox checked in the web-interface, or use the
+pool, keep the `Add as Storage' checkbox checked in the web interface, or use the
command-line option '--add_storages' at pool creation.
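A command-line sketch of the same (the pool name `mypool` is only an example):
[source,bash]
----
pveceph pool create mypool --add_storages 1
----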
Pool Options
Tag: 100
----
-Apply the configuration on the main SDN web-interface panel to create VNets
+Apply the configuration on the main SDN web interface panel to create VNets
locally on each node.
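If you prefer the command line, applying the pending SDN configuration should also be possible through the API; a sketch, assuming the cluster-wide apply endpoint:
----
pvesh set /cluster/sdn
----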
Create four Debian-based virtual machines (vm1, vm2, vm3, vm4) and add network
Tag: 100000
----
-Apply the configuration on the main SDN web-interface panel to create VNets
+Apply the configuration on the main SDN web interface panel to create VNets
locally on each node.
Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on `vxvnet1`.
Gateway: 10.0.2.1
----
-Apply the configuration from the main SDN web-interface panel to create VNets
+Apply the configuration from the main SDN web interface panel to create VNets
locally on each node and generate the FRR configuration.
Create a Debian-based virtual machine ('vm1') on node1, with a vNIC on `myvnet1`.
Node A failed and cannot get back online. Now you have to migrate the guest
to Node B manually.
-- connect to node B over ssh or open its shell via the WebUI
+- connect to node B over ssh or open its shell via the web UI
- check that the cluster is quorate (for example, as shown below)
+
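----
# a minimal sketch of the quorum check; 'Quorate: Yes' in the output
# indicates the cluster is quorate
pvecm status
----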
There is no server setup required. Simply install a TOTP app on your
smartphone (for example, https://freeotp.github.io/[FreeOTP]) and use
-the Proxmox Backup Server web-interface to add a TOTP factor.
+the Proxmox Backup Server web interface to add a TOTP factor.
[[user_tfa_setup_webauthn]]
=== WebAuthn
u2f: appid=https://mypve.example.com:8006
----
-For a single node, the 'AppId' can simply be the address of the web-interface,
+For a single node, the 'AppId' can simply be the address of the web interface,
exactly as it is used in the browser, including the 'https://' and the port, as
shown above. Please note that some browsers may be more strict than others when
matching 'AppIds'.
If your device has multiple functions (e.g., ``00:02.0`' and ``00:02.1`'),
you can pass them all through together with the shortened syntax ``00:02`'.
This is equivalent to checking the ``All Functions`' checkbox in the
-web-interface.
+web interface.
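For example, a sketch of passing through all functions of such a device on the command line (VM ID `100` is a placeholder):
----
qm set 100 --hostpci0 00:02
----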
There are some options which may be necessary, depending on the device
and guest OS:
attacks and is able to utilize the CPU feature
Otherwise you need to set the desired CPU flag of the virtual CPU, either by
-editing the CPU options in the WebUI, or by setting the 'flags' property of the
+editing the CPU options in the web UI, or by setting the 'flags' property of the
'cpu' option in the VM configuration file.
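A sketch of the CLI variant (VM ID, CPU type and the `+pcid` flag are only examples; pick the flag your mitigation requires):
----
qm set 100 --cpu Westmere,flags=+pcid
----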
For Spectre v1, v2, v4 fixes, your CPU or system vendor also needs to provide a
provide network access. This built-in DHCP will serve addresses in the private
10.0.2.0/24 range. The NAT mode is much slower than the bridged mode, and
should only be used for testing. This mode is only available via CLI or the API,
-but not via the WebUI.
+but not via the web UI.
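A sketch of creating such a NAT-mode interface from the command line (VM ID and NIC model are placeholders; omitting the `bridge` key is what selects NAT mode):
----
qm set 100 --net0 e1000
----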
You can also skip adding a network device when creating a VM by selecting *No
network device*.
If your VM is running and no locally bound resources are configured (such as
devices that are passed through), you can initiate a live migration with the `--online`
-flag in the `qm migration` command evocation. The web-interface defaults to
+flag in the `qm migrate` command invocation. The web interface defaults to
live migration when the VM is running.
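For example, a sketch of starting such a migration from the command line (VM ID `100` and target node `pve2` are placeholders):
----
qm migrate 100 pve2 --online
----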
How it works