Introduction
============

{pve} is a platform to run virtual machines and containers. It is
based on Debian Linux, and completely open source. For maximum
flexibility, we implemented two virtualization technologies -
Kernel-based Virtual Machine (KVM) and container-based virtualization
(LXC).

One main design goal was to make administration as easy as
possible. You can use {pve} on a single node, or assemble a cluster of
many nodes. All management tasks can be done using our web-based
management interface, and even a novice user can set up and install
{pve} within minutes.

image::images/pve-software-stack.svg["Proxmox Software Stack",align="center"]


[[intro_central_management]]
Central Management
------------------

While many people start with a single node, {pve} can scale out to a
large set of clustered nodes. The cluster stack is fully integrated
and ships with the default installation.

Unique Multi-Master Design::

The integrated web-based management interface gives you a clean
overview of all your KVM guests and Linux containers and even of your
whole cluster. You can easily manage your VMs and containers, storage
or cluster from the GUI. There is no need to install a separate,
complex, and pricey management server.

Proxmox Cluster File System (pmxcfs)::

{pve} uses the unique Proxmox Cluster file system (pmxcfs), a
database-driven file system for storing configuration files. This
enables you to store the configuration of thousands of virtual
machines. By using corosync, these files are replicated in real time
on all cluster nodes. The file system stores all data inside a
persistent database on disk; nonetheless, a copy of the data resides
in RAM, which limits the maximum storage size to 30MB - more than
enough for thousands of VMs.
+
{pve} is the only virtualization platform using this unique
cluster file system.
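+
In practice, pmxcfs is mounted at `/etc/pve` on every node, so guest
configurations can be inspected like ordinary files. A short sketch
(the VMID `100` is just an example):
+
[source,bash]
----
# the cluster file system is mounted at /etc/pve on every node
ls /etc/pve/qemu-server/            # KVM guest configurations, one file per VMID
cat /etc/pve/qemu-server/100.conf   # identical content is visible on all nodes
----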

Web-based Management Interface::

{pve} is simple to use. Management tasks can be done via the
included web-based management interface - there is no need to install a
separate management tool or any additional management node with huge
databases. The multi-master tool allows you to manage your whole
cluster from any node of your cluster. The central web-based
management - based on the JavaScript framework ExtJS - empowers
you to control all functionalities from the GUI and to view the history
and syslogs of each single node. This includes running backup or
restore jobs, live migration, or HA-triggered activities.

Command Line::

For advanced users who are used to the comfort of the Unix shell or
Windows PowerShell, {pve} provides a command line interface to
manage all the components of your virtual environment. This command
line interface has intelligent tab completion and full documentation
in the form of UNIX man pages.
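+
As a short session sketch using the management tools `qm` (QEMU/KVM
guests) and `pct` (containers); the VMID `100` is just an example:
+
[source,bash]
----
qm list        # list all KVM virtual machines on this node
qm start 100   # start VM 100
pct list       # list all containers on this node
man qm         # full reference as a UNIX man page
----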

REST API::

{pve} uses a RESTful API. We chose JSON as the primary data format,
and the whole API is formally defined using JSON Schema. This enables
fast and easy integration for third-party management tools like custom
hosting environments.
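+
As a sketch, the API can be queried with any HTTP client; the paths
below follow the `/api2/json` scheme, while the host name and
credentials are placeholders:
+
[source,bash]
----
# authenticate and obtain a ticket (response is JSON)
curl -k -d 'username=root@pam' -d 'password=yourpassword' \
    https://pve.example.com:8006/api2/json/access/ticket

# list cluster nodes, passing the ticket as a cookie
curl -k -b "PVEAuthCookie=<ticket>" \
    https://pve.example.com:8006/api2/json/nodes
----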

Role-based Administration::

You can define granular access for all objects (like VMs, storages,
nodes, etc.) by using the role-based user and permission
management. This allows you to define privileges and helps you to
control access to objects. This concept is also known as access
control lists: each permission specifies a subject (a user or group)
and a role (set of privileges) on a specific path.
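+
As a sketch, such a permission can be granted on the command line with
`pveum` (the user `jane@pve` and VMID `100` are placeholders):
+
[source,bash]
----
# create a user in the built-in pve realm
pveum user add jane@pve

# grant the predefined PVEVMUser role on the path of VM 100
pveum acl modify /vms/100 --users jane@pve --roles PVEVMUser
----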

Authentication Realms::

{pve} supports multiple authentication sources like Microsoft
Active Directory, LDAP, Linux PAM standard authentication or the
built-in {pve} authentication server.


Flexible Storage
----------------

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or SAN. There are no limits; you may configure as
many storage definitions as you like. You can use all storage
technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images.

We currently support the following network storage types:

* LVM Group (network backing with iSCSI targets)
* iSCSI target
* NFS Share
* CIFS Share
* Ceph RBD
* Directly use iSCSI LUNs
* GlusterFS

Local storage types supported are:

* LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
* Directory (storage on existing filesystem)
* ZFS
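
Storage definitions live in `/etc/pve/storage.cfg`. As an illustrative
sketch, a local directory storage and an NFS share could be configured
like this (the storage name, server address, and export path are
placeholders):

----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

nfs: shared-vm-store
        server 192.0.2.10
        export /tank/vmdata
        content images
----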


Integrated Backup and Restore
-----------------------------

The integrated backup tool (`vzdump`) creates consistent snapshots of
running Containers and KVM guests. It basically creates an archive of
the VM or CT data which includes the VM/CT configuration files.

KVM live backup works for all storage types, including VM images on
NFS, CIFS, iSCSI LUN, and Ceph RBD. The new backup format is optimized for storing
VM backups quickly and effectively (sparse files, out-of-order data, minimized I/O).
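
As a sketch, a backup of a single guest can be triggered manually with
`vzdump` (the VMID `100` and the storage name `backup-store` are
placeholders):

[source,bash]
----
# consistent snapshot-mode backup of guest 100, compressed with zstd
vzdump 100 --mode snapshot --compress zstd --storage backup-store
----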


High Availability Cluster
-------------------------

A multi-node {pve} HA Cluster enables the definition of highly
available virtual servers. The {pve} HA Cluster is based on
proven Linux HA technologies, providing stable and reliable HA
services.


Flexible Networking
-------------------

{pve} uses a bridged networking model. All VMs can share one
bridge as if virtual network cables from each guest were all plugged
into the same switch. For connecting VMs to the outside world, bridges
are attached to physical network cards and assigned a TCP/IP
configuration.

For further flexibility, VLANs (IEEE 802.1q) and network
bonding/aggregation are possible. In this way it is possible to build
complex, flexible virtual networks for the {pve} hosts,
leveraging the full power of the Linux network stack.
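
As an illustrative sketch, a typical bridge definition in
`/etc/network/interfaces` attaches a physical NIC to the default bridge
`vmbr0` (the addresses and the interface name `eno1` are placeholders):

----
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----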


Integrated Firewall
-------------------

The integrated firewall allows you to filter network packets on
any VM or Container interface. Common sets of firewall rules can
be grouped into ``security groups''.
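
As a sketch, a security group is defined cluster-wide in
`/etc/pve/firewall/cluster.fw` and can then be referenced from any
guest's firewall configuration (the group name `webserver` is a
placeholder):

----
[group webserver]

IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443
----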

include::hyper-converged-infrastructure.adoc[]


Why Open Source
---------------

{pve} uses a Linux kernel and is based on the Debian GNU/Linux
Distribution. The source code of {pve} is released under the
https://www.gnu.org/licenses/agpl-3.0.html[GNU Affero General Public
License, version 3]. This means that you are free to inspect the
source code at any time or contribute to the project yourself.

At Proxmox we are committed to using open source software whenever
possible. Using open source software guarantees full access to all
functionalities - as well as high security and reliability. We think
that everybody should have the right to access the source code of the
software they use, to run it, build on it, or submit changes back to the
project. Everybody is encouraged to contribute while Proxmox ensures
the product always meets professional quality criteria.

Open source software also helps to keep your costs low and makes your
core infrastructure independent from a single vendor.


Your benefits with {pve}
------------------------

* Open source software
* No vendor lock-in
* Linux kernel
* Fast installation and easy to use
* Web-based management interface
* REST API
* Huge active community
* Low administration costs and simple deployment

include::getting-help.adoc[]


[[intro_project_history]]
Project History
---------------

The project started in 2007, followed by a first stable version in
2008. At the time we used OpenVZ for containers, and KVM for virtual
machines. The clustering features were limited, and the user interface
was simple (server-generated web pages).

But we quickly developed new features using the
https://corosync.github.io/corosync/[Corosync] cluster stack, and the
introduction of the new Proxmox cluster file system (pmxcfs) was a big
step forward, because it completely hides the cluster complexity from
the user. Managing a cluster of 16 nodes is as simple as managing a
single node.

We also introduced a new REST API, with a complete declarative
specification written in JSON-Schema. This enabled other people to
integrate {pve} into their infrastructure, and made it easy to provide
additional services.

Also, the new REST API made it possible to replace the original user
interface with a modern HTML5 application using JavaScript. We also
replaced the old Java-based VNC console code with
https://kanaka.github.io/noVNC/[noVNC]. So you only need a web browser
to manage your VMs.

The support for various storage types was another big task. Notably,
{pve} was the first distribution to ship ZFS on Linux by default in
2014. Another milestone was the ability to run and manage
https://ceph.com/[Ceph] storage on the hypervisor nodes. Such setups
are extremely cost effective.

When we started, we were among the first companies providing
commercial support for KVM. The KVM project itself continuously
evolved, and is now a widely used hypervisor. New features arrive
with each release. We developed the KVM live backup feature, which
makes it possible to create snapshot backups on any storage type.

The most notable change with version 4.0 was the move from OpenVZ to
https://linuxcontainers.org/[LXC]. Containers are now deeply
integrated, and they can use the same storage and network features
as virtual machines.

include::howto-improve-pve-docs.adoc[]
include::translation.adoc[]