Technology
----------
-We use the http://www.corosync.org[Corosync Cluster Engine] for
-cluster communication, and http://www.sqlite.org[SQlite] for the
+We use the https://www.corosync.org[Corosync Cluster Engine] for
+cluster communication, and https://www.sqlite.org[SQLite] for the
database file. The file system is implemented in user space using
-http://fuse.sourceforge.net[FUSE].
+https://github.com/libfuse/libfuse[FUSE].
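
On a running node both pieces are visible; a quick check, assuming the
default paths described in the file system layout section below:

----
# the SQLite database file backing pmxcfs
ls -l /var/lib/pve-cluster/config.db
# the FUSE mount point exposing the cluster configuration
mount | grep /etc/pve
----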
File System Layout
------------------
You should have received a copy of the GNU Affero General Public
License along with this program. If not, see
-http://www.gnu.org/licenses/
+https://www.gnu.org/licenses/
Currently supported are:
- * Graphite (see http://graphiteapp.org )
+ * Graphite (see https://graphiteapp.org )
* InfluxDB (see https://www.influxdata.com/time-series-platform/influxdb/ )
The external metric server definitions are saved in '/etc/pve/status.cfg', and
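
As a rough sketch, definitions for the two supported servers might look
like this (the exact syntax is documented with the external metric
server plugins; addresses, ports and paths are placeholders):

----
graphite:
        server 192.168.1.5
        port 2003
        path proxmox

influxdb:
        server 192.168.1.6
        port 8089
----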
What distribution is {pve} based on?::
-{pve} is based on http://www.debian.org[Debian GNU/Linux]
+{pve} is based on https://www.debian.org[Debian GNU/Linux]
What license does the {pve} project use?::
Supported Intel CPUs::
64-bit processors with
-http://en.wikipedia.org/wiki/Virtualization_Technology#Intel_virtualization_.28VT-x.29[Intel
-Virtualization Technology (Intel VT-x)] support. (http://ark.intel.com/search/advanced/?s=t&VTX=true&InstructionSet=64-bit[List of processors with Intel VT and 64-bit])
+https://en.wikipedia.org/wiki/Virtualization_Technology#Intel_virtualization_.28VT-x.29[Intel
+Virtualization Technology (Intel VT-x)] support.
+(https://ark.intel.com/content/www/us/en/ark/search/featurefilter.html?productType=873&2_VTX=True&2_InstructionSet=64-bit[List of processors with Intel VT and 64-bit])
Supported AMD CPUs::
64-bit processors with
-http://en.wikipedia.org/wiki/Virtualization_Technology#AMD_virtualization_.28AMD-V.29[AMD
+https://en.wikipedia.org/wiki/Virtualization_Technology#AMD_virtualization_.28AMD-V.29[AMD
Virtualization Technology (AMD-V)] support.
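
Whether a processor advertises these extensions can be checked via the
corresponding CPU flags, 'vmx' for Intel VT-x and 'svm' for AMD-V:

----
# count the lines in /proc/cpuinfo advertising Intel VT-x (vmx) or
# AMD-V (svm); 0 means the extensions are missing or disabled in firmware
grep -Ec '(vmx|svm)' /proc/cpuinfo
----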
What is a container/virtual environment (VE)/virtual private server (VPS)?::
Suricata IPS integration
~~~~~~~~~~~~~~~~~~~~~~~~
-If you want to use the http://suricata-ids.org/[Suricata IPS]
+If you want to use the https://suricata-ids.org/[Suricata IPS]
(Intrusion Prevention System), you can do so.
Packets will be forwarded to the IPS only after the firewall has
ACCEPTed them.
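
A rough sketch of the setup, assuming the Debian package name and the
per-VM 'ips'/'ips_queues' options from the firewall documentation:

----
# install the IPS on the host and load the required kernel module
apt-get install suricata
modprobe nfnetlink_queue
----

Afterwards, IPS processing can be enabled for a specific VM in its
firewall configuration ('/etc/pve/firewall/<VMID>.fw'):

----
[OPTIONS]
ips: 1
ips_queues: 0
----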
* List of all official tutorials on our
- http://www.youtube.com/proxmoxve[{pve} YouTube Channel]
+ https://www.youtube.com/proxmoxve[{pve} YouTube Channel]
* Tutorials in Spanish language on
- http://www.youtube.com/playlist?list=PLUULBIhA5QDBdNf1pcTZ5UXhek63Fij8z[ITexperts.es
+ https://www.youtube.com/playlist?list=PLUULBIhA5QDBdNf1pcTZ5UXhek63Fij8z[ITexperts.es
YouTube Play List]
{pve} uses a Linux kernel and is based on the Debian GNU/Linux
Distribution. The source code of {pve} is released under the
-http://www.gnu.org/licenses/agpl-3.0.html[GNU Affero General Public
+https://www.gnu.org/licenses/agpl-3.0.html[GNU Affero General Public
License, version 3]. This means that you are free to inspect the
source code at any time or contribute to the project yourself.
was simple (a server-generated web page).
But we quickly developed new features using the
-http://corosync.github.io/corosync/[Corosync] cluster stack, and the
+https://corosync.github.io/corosync/[Corosync] cluster stack, and the
introduction of the new Proxmox cluster file system (pmxcfs) was a big
step forward, because it completely hides the cluster complexity from
the user. Managing a cluster of 16 nodes is as simple as managing a
single node.
The support for various storage types is another big task. Notably,
{pve} was the first distribution to ship ZFS on Linux by default in
2014. Another milestone was the ability to run and manage
-http://ceph.com/[Ceph] storage on the hypervisor nodes. Such setups
+https://ceph.com/[Ceph] storage on the hypervisor nodes. Such setups
are extremely cost effective.
When we started, we were among the first companies providing
Storage pool type: `cephfs`
-CephFS implements a POSIX-compliant filesystem, using a http://ceph.com[Ceph]
+CephFS implements a POSIX-compliant filesystem, using a https://ceph.com[Ceph]
storage cluster to store its data. As CephFS builds upon Ceph, it shares most of
its properties. This includes redundancy, scalability, self-healing, and high
availability.
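
A hedged example of a matching entry in '/etc/pve/storage.cfg' (field
names as in the storage documentation; the storage ID, monitor
addresses and username are placeholders):

----
cephfs: cephfs-external
        monhost 10.1.1.20 10.1.1.21 10.1.1.22
        path /mnt/pve/cephfs-external
        content backup
        username admin
----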
iSCSI is a widely employed technology used to connect to storage
servers. Almost all storage vendors support iSCSI. There are also open
source iSCSI target solutions available,
-e.g. http://www.openmediavault.org/[OpenMediaVault], which is based on
+e.g. https://www.openmediavault.org/[OpenMediaVault], which is based on
Debian.
To use this backend, you need to install the
-http://www.open-iscsi.org/[Open-iSCSI] (`open-iscsi`) package. This is a
+https://www.open-iscsi.com/[Open-iSCSI] (`open-iscsi`) package. This is a
standard Debian package, but it is not installed by default to save
resources.
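
Once the package is installed, an iSCSI definition in
'/etc/pve/storage.cfg' can look roughly like this (portal address and
target name are placeholders):

----
iscsi: mynas
        portal 10.10.10.1
        target iqn.2006-01.openfiler.com:tsn.dcb5aaaddd
        content none
----

Setting 'content none' is common here, since the LUNs are usually not
used directly but serve as the base for another storage, e.g. LVM, on
top.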
Storage pool type: `rbd`
-http://ceph.com[Ceph] is a distributed object store and file system
+https://ceph.com[Ceph] is a distributed object store and file system
designed to provide excellent performance, reliability and
scalability. RADOS block devices implement a feature rich block level
storage, and you get the following advantages:
~~~~~~~~~~~~~~~~~~~~~~~~~~
The required API permissions are documented for each individual
-method, and can be found at http://pve.proxmox.com/pve-docs/api-viewer/
+method, and can be found at https://pve.proxmox.com/pve-docs/api-viewer/
The permissions are specified as a list which can be interpreted as a
tree of logic and access-check functions:
:pve-toplevel:
endif::wiki[]
-http://cloudinit.readthedocs.io[Cloud-Init] is the de facto
+https://cloudinit.readthedocs.io[Cloud-Init] is the de facto
multi-distribution package that handles early initialization of a
virtual machine instance. Using Cloud-Init, configuration of network
devices and ssh keys on the hypervisor side is possible. When the VM
starts for the first time, the Cloud-Init software inside the VM will
apply those settings.
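
A hedged sequence of 'qm' commands showing the hypervisor-side part
(the VM ID, storage name, addresses and key file are placeholders):

----
# attach a Cloud-Init drive to VM 9000
qm set 9000 --ide2 local-lvm:cloudinit
# set the network configuration and an ssh public key from the host side
qm set 9000 --ipconfig0 ip=10.0.10.123/24,gw=10.0.10.1
qm set 9000 --sshkeys ~/.ssh/id_rsa.pub
----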
as measured with `bonnie++(8)`. Using the virtio network interface can deliver
up to three times the throughput of an emulated Intel E1000 network card, as
measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
-http://www.linux-kvm.org/page/Using_VirtIO_NIC]
+https://www.linux-kvm.org/page/Using_VirtIO_NIC]
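
Selecting the paravirtualized NIC is a single setting; for example (VM
ID and bridge are placeholders):

----
# use the virtio model instead of an emulated Intel E1000 NIC
qm set 9000 --net0 virtio,bridge=vmbr0
----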
[[qm_virtual_machines_settings]]
There are, however, some scenarios in which a BIOS is not a good firmware
to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
-http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
-In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]
+https://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
+In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project https://github.com/tianocore/tianocore.github.io/wiki/OVMF]
If you want to use OVMF, there are several things to consider:
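
One of these considerations is persistent storage for the EFI
variables; a sketch, assuming the 'bios' and 'efidisk0' VM options (VM
ID and storage are placeholders):

----
# switch VM 9000 from the default SeaBIOS to OVMF
qm set 9000 --bios ovmf
# add a small EFI disk so EFI variables, e.g. the boot order, persist
qm set 9000 --efidisk0 local-lvm:1
----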