Introduction
============

{pve} is a platform to run virtual machines and containers. It is
based on Debian Linux, and completely open source. For maximum
flexibility, we implemented two virtualization technologies -
Kernel-based Virtual Machine (KVM) and container-based virtualization
(LXC).

One main design goal was to make administration as easy as
possible. You can use {pve} on a single node, or assemble a cluster of
many nodes. All management tasks can be done using our web-based
management interface, and even a novice user can set up and install
{pve} within minutes.

image::images/pve-software-stack.svg["Proxmox Software Stack",align="center"]


Central Management
------------------

While many people start with a single node, {pve} can scale out to a
large set of clustered nodes. The cluster stack is fully integrated
and ships with the default installation.

Unique Multi-Master Design::

The integrated web-based management interface gives you a clean
overview of all your KVM guests and Linux containers, and even of your
whole cluster. You can easily manage your VMs and containers, storage
or cluster from the GUI. There is no need to install a separate,
complex, and pricey management server.

Proxmox Cluster File System (pmxcfs)::

{pve} uses the unique Proxmox Cluster File System (pmxcfs), a
database-driven file system for storing configuration files. This
enables you to store the configuration of thousands of virtual
machines. By using Corosync, these files are replicated in real time
to all cluster nodes. The file system stores all data inside a
persistent database on disk; nonetheless, a copy of the data resides
in RAM, which limits the maximum storage size to 30 MB - more than
enough for thousands of VMs.
+
{pve} is the only virtualization platform using this unique
cluster file system.
Web-based Management Interface::

{pve} is simple to use. Management tasks can be done via the
included web-based management interface - there is no need to install a
separate management tool or any additional management node with huge
databases. The multi-master tool allows you to manage your whole
cluster from any node of your cluster. The central web-based
management - based on the JavaScript framework ExtJS - empowers
you to control all functionalities from the GUI and to view the history
and syslogs of each single node. This includes running backup or
restore jobs, live migration, or HA-triggered activities.

Command Line::

For advanced users who are used to the comfort of the Unix shell or
Windows PowerShell, {pve} provides a command line interface to
manage all the components of your virtual environment. This command
line interface has intelligent tab completion and full documentation
in the form of UNIX man pages.

REST API::

{pve} uses a RESTful API. We chose JSON as the primary data format,
and the whole API is formally defined using JSON Schema. This enables
fast and easy integration for third-party management tools like custom
hosting environments.
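
The usual workflow is to obtain an authentication ticket from the
`/access/ticket` endpoint, then send it as a cookie with every later
request. The following Python sketch only assembles those requests
(it does not contact a server); the host name and credentials are
hypothetical placeholders:

```python
# Sketch of the Proxmox VE REST API authentication workflow: obtain a
# ticket, then send it as a cookie with each request. The host name
# and credentials below are hypothetical placeholders.
from urllib.parse import urlencode

API_BASE = "https://pve.example.com:8006/api2/json"  # hypothetical host


def ticket_request(username, password):
    """Build the POST request used to obtain an authentication ticket."""
    url = f"{API_BASE}/access/ticket"
    body = urlencode({"username": username, "password": password}).encode()
    return url, body


def auth_headers(ticket, csrf_token):
    """Headers for later requests: the ticket cookie, plus the CSRF
    prevention token required for write (POST/PUT/DELETE) calls."""
    return {
        "Cookie": f"PVEAuthCookie={ticket}",
        "CSRFPreventionToken": csrf_token,
    }


# A read-only call would then be, for example:
#   GET https://pve.example.com:8006/api2/json/nodes
url, body = ticket_request("root@pam", "secret")
print(url)
```

With a real host these pieces plug into any HTTP client; the same
paths are also exposed by the `pvesh` command line tool.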

Role-based Administration::

You can define granular access for all objects (like VMs, storages,
nodes, etc.) by using the role-based user and permission
management. This allows you to define privileges and helps you to
control access to objects. This concept is also known as access
control lists: each permission specifies a subject (a user or group)
and a role (set of privileges) on a specific path.

Authentication Realms::

{pve} supports multiple authentication sources like Microsoft
Active Directory, LDAP, Linux PAM standard authentication, or the
built-in {pve} authentication server.


Flexible Storage
----------------

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or SAN. There are no limits; you may configure as
many storage definitions as you like. You can use all storage
technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images.

We currently support the following network storage types:

* LVM Group (network backing with iSCSI targets)
* iSCSI target
* NFS Share
* CIFS Share
* Ceph RBD
* Directly use iSCSI LUNs
* GlusterFS

Local storage types supported are:

* LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
* Directory (storage on existing filesystem)
* ZFS
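
Storage definitions live in a single cluster-wide file,
`/etc/pve/storage.cfg`. As an illustrative sketch (the storage name
`vm-store`, the server address, and the export path are made up), a
local directory and an NFS share could be defined like this:

```
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

nfs: vm-store
        server 192.0.2.50
        export /export/proxmox
        path /mnt/pve/vm-store
        content images,rootdir
```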


Integrated Backup and Restore
-----------------------------

The integrated backup tool (`vzdump`) creates consistent snapshots of
running containers and KVM guests. It basically creates an archive of
the VM or CT data, which includes the VM/CT configuration files.

KVM live backup works for all storage types, including VM images on
NFS, CIFS, iSCSI LUN, and Ceph RBD. The backup format is optimized for storing
VM backups quickly and efficiently (sparse files, out-of-order data, minimized I/O).
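
Node-wide defaults for `vzdump` can be set in `/etc/vzdump.conf`, so a
backup becomes a single short command. A rough sketch, assuming a
made-up backup storage named `backup-nfs` and an example mail address:

```
# /etc/vzdump.conf - illustrative defaults (storage name is made up)
mode: snapshot
compress: zstd
storage: backup-nfs
mailto: admin@example.com
```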


High Availability Cluster
-------------------------

A multi-node {pve} HA Cluster enables the definition of highly
available virtual servers. The {pve} HA Cluster is based on
proven Linux HA technologies, providing stable and reliable HA
services.


Flexible Networking
-------------------

{pve} uses a bridged networking model. All VMs can share one
bridge, as if virtual network cables from each guest were all plugged
into the same switch. For connecting VMs to the outside world, bridges
are attached to physical network cards and assigned a TCP/IP
configuration.

For further flexibility, VLANs (IEEE 802.1q) and network
bonding/aggregation are possible. In this way it is possible to build
complex, flexible virtual networks for the {pve} hosts,
leveraging the full power of the Linux network stack.
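
On the host this maps to standard Debian network configuration. A
minimal sketch of a bridge in `/etc/network/interfaces` (the physical
interface name and the addresses are examples):

```
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```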


Integrated Firewall
-------------------

The integrated firewall allows you to filter network packets on
any VM or Container interface. Common sets of firewall rules can
be grouped into ``security groups''.
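
Per-guest rules are kept in `/etc/pve/firewall/<vmid>.fw`. As an
illustrative sketch (the source network and the security group name
`webserver` are made up), such a file could look like:

```
[OPTIONS]
enable: 1

[RULES]
GROUP webserver
IN SSH(ACCEPT) -source 192.0.2.0/24
IN DROP
```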

include::hyper-converged-infrastructure.adoc[]


Why Open Source
---------------

{pve} uses a Linux kernel and is based on the Debian GNU/Linux
distribution. The source code of {pve} is released under the
https://www.gnu.org/licenses/agpl-3.0.html[GNU Affero General Public
License, version 3]. This means that you are free to inspect the
source code at any time or contribute to the project yourself.

At Proxmox we are committed to using open source software whenever
possible. Using open source software guarantees full access to all
functionality - as well as high security and reliability. We think
that everybody should have the right to access the source code of
software to run it, build on it, or submit changes back to the
project. Everybody is encouraged to contribute, while Proxmox ensures
the product always meets professional quality criteria.

Open source software also helps to keep your costs low and makes your
core infrastructure independent from a single vendor.


Your benefits with {pve}
------------------------

* Open source software
* No vendor lock-in
* Linux kernel
* Fast installation and easy to use
* Web-based management interface
* REST API
* Huge active community
* Low administration costs and simple deployment

include::getting-help.adoc[]


Project History
---------------

The project started in 2007, followed by a first stable version in
2008. At the time we used OpenVZ for containers, and KVM for virtual
machines. The clustering features were limited, and the user interface
was simple (a server-generated web page).

But we quickly developed new features using the
https://corosync.github.io/corosync/[Corosync] cluster stack, and the
introduction of the new Proxmox Cluster File System (pmxcfs) was a big
step forward, because it completely hides the cluster complexity from
the user. Managing a cluster of 16 nodes is as simple as managing a
single node.

We also introduced a new REST API, with a complete declarative
specification written in JSON Schema. This enabled other people to
integrate {pve} into their infrastructure, and made it easy to provide
additional services.

Also, the new REST API made it possible to replace the original user
interface with a modern HTML5 application using JavaScript. We also
replaced the old Java-based VNC console code with
https://kanaka.github.io/noVNC/[noVNC]. So you only need a web browser
to manage your VMs.

Support for various storage types was another big task. Notably,
{pve} was the first distribution to ship ZFS on Linux by default, in
2014. Another milestone was the ability to run and manage
https://ceph.com/[Ceph] storage directly on the hypervisor nodes. Such
setups are extremely cost effective.

When we started, we were among the first companies providing
commercial support for KVM. The KVM project itself has continuously
evolved, and is now a widely used hypervisor. New features arrive
with each release. We developed the KVM live backup feature, which
makes it possible to create snapshot backups on any storage type.

The most notable change with version 4.0 was the move from OpenVZ to
https://linuxcontainers.org/[LXC]. Containers are now deeply
integrated, and they can use the same storage and network features
as virtual machines.

include::howto-improve-pve-docs.adoc[]
include::translation.adoc[]