Introduction
============

{pve} is a platform to run virtual machines and containers. It is
based on Debian Linux, and completely open source. For maximum
flexibility, we implemented two virtualization technologies -
Kernel-based Virtual Machine (KVM) and container-based virtualization
(LXC).

One main design goal was to make administration as easy as
possible. You can use {pve} on a single node, or assemble a cluster of
many nodes. All management tasks can be done using our web-based
management interface, and even a novice user can set up and install
{pve} within minutes.

image::images/pve-software-stack.svg["Proxmox Software Stack",align="center"]


Central Management
------------------

While many people start with a single node, {pve} can scale out to a
large set of clustered nodes. The cluster stack is fully integrated
and ships with the default installation.

Unique Multi-Master Design::

The integrated web-based management interface gives you a clean
overview of all your KVM guests and Linux containers and even of your
whole cluster. You can easily manage your VMs and containers, storage,
or cluster from the GUI. There is no need to install a separate,
complex, and pricey management server.

Proxmox Cluster File System (pmxcfs)::

Proxmox VE uses the unique Proxmox Cluster File System (pmxcfs), a
database-driven file system for storing configuration files. This
enables you to store the configuration of thousands of virtual
machines. By using Corosync, these files are replicated in real time
on all cluster nodes. The file system stores all data inside a
persistent database on disk; nonetheless, a copy of the data resides
in RAM, which imposes a maximum storage size of 30 MB - more than
enough for thousands of VMs.
+
Proxmox VE is the only virtualization platform using this unique
cluster file system.

Web-based Management Interface::

Proxmox VE is simple to use. Management tasks can be done via the
included web-based management interface - there is no need to install a
separate management tool or any additional management node with huge
databases. The multi-master tool allows you to manage your whole
cluster from any node of your cluster. The central web-based
management - based on the JavaScript framework ExtJS - empowers
you to control all functionalities from the GUI and to view the
history and syslogs of each single node. This includes running backup
or restore jobs, live migration, or HA-triggered activities.

Command Line::

For advanced users who are used to the comfort of the Unix shell or
Windows PowerShell, Proxmox VE provides a command line interface to
manage all the components of your virtual environment. This command
line interface has intelligent tab completion and full documentation
in the form of UNIX man pages.

REST API::

Proxmox VE uses a RESTful API. We chose JSON as the primary data format,
and the whole API is formally defined using JSON Schema. This enables
fast and easy integration for third-party management tools like custom
hosting environments.
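
A formal JSON Schema definition is useful because a third-party tool can
validate parameters before ever contacting the server. The sketch below is
illustrative only: the schema fragment (property names like `vmid` and
`memory`) and the minimal validator are assumptions written for this
example, not the actual {pve} API schema or any real client code.

```python
# Illustrative only: a hand-written schema fragment in the spirit of the
# JSON Schema definitions describing an API; property names are made up.
schema = {
    "type": "object",
    "properties": {
        "vmid":   {"type": "integer", "minimum": 100},
        "name":   {"type": "string"},
        "memory": {"type": "integer", "minimum": 16},
    },
    "required": ["vmid"],
}

def validate(params, schema):
    """Minimal validator covering only the keywords used above."""
    errors = []
    for key in schema.get("required", []):
        if key not in params:
            errors.append(f"missing required property: {key}")
    types = {"integer": int, "string": str}
    for key, sub in schema.get("properties", {}).items():
        if key not in params:
            continue
        value = params[key]
        # bool is a subclass of int in Python, so reject it explicitly
        if not isinstance(value, types[sub["type"]]) or isinstance(value, bool):
            errors.append(f"{key}: expected {sub['type']}")
        elif "minimum" in sub and value < sub["minimum"]:
            errors.append(f"{key}: below minimum {sub['minimum']}")
    return errors

print(validate({"vmid": 100, "memory": 2048}, schema))  # []
print(validate({"memory": 8}, schema))  # missing vmid, memory too small
```

A real integration would fetch the published schema instead of hard-coding
one, but the principle - machine-checkable parameter definitions - is the
same.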

Role-based Administration::

You can define granular access for all objects (like VMs, storages,
nodes, etc.) by using the role-based user and permission
management. This allows you to define privileges and helps you to
control access to objects. This concept is also known as access
control lists: each permission specifies a subject (a user or group)
and a role (set of privileges) on a specific path.
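
The subject/role/path model described above can be sketched in a few
lines. This is a toy illustration, not {pve}'s implementation: the role
names, privilege strings, and propagation rule below are assumptions made
for the example.

```python
# Illustrative sketch of path-based ACL checks; role names and
# privilege strings are invented for this example.
ROLES = {
    "Auditor": {"VM.Audit"},
    "Operator": {"VM.Audit", "VM.PowerMgmt"},
}

# Each permission entry: (path, subject, role).
ACL = [
    ("/vms", "group:ops", "Operator"),
    ("/vms/100", "user:alice", "Auditor"),
]

def has_privilege(subjects, path, privilege):
    """True if any of the caller's subjects (its user or one of its
    groups) is granted the privilege on the path or on a parent path
    (in this sketch, privileges propagate down the tree)."""
    for acl_path, subject, role in ACL:
        if subject not in subjects:
            continue
        if path == acl_path or path.startswith(acl_path + "/"):
            if privilege in ROLES[role]:
                return True
    return False

print(has_privilege({"user:alice"}, "/vms/100", "VM.Audit"))      # True
print(has_privilege({"user:alice"}, "/vms/100", "VM.PowerMgmt"))  # False
```

The key idea is that a permission is always the triple (path, subject,
role); the role bundles privileges so they never have to be assigned one
by one.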

Authentication Realms::

Proxmox VE supports multiple authentication sources, such as Microsoft
Active Directory, LDAP, Linux PAM standard authentication, or the
built-in Proxmox VE authentication server.


Flexible Storage
----------------

The Proxmox VE storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or SAN. There are no limits; you may configure as
many storage definitions as you like. You can use all storage
technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images.

We currently support the following network storage types:

* LVM Group (network backing with iSCSI targets)
* iSCSI target
* NFS Share
* CIFS Share
* Ceph RBD
* Directly use iSCSI LUNs
* GlusterFS

Local storage types supported are:

* LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
* Directory (storage on existing filesystem)
* ZFS


Integrated Backup and Restore
-----------------------------

The integrated backup tool (`vzdump`) creates consistent snapshots of
running containers and KVM guests. It basically creates an archive of
the VM or CT data which includes the VM/CT configuration files.

KVM live backup works for all storage types, including VM images on
NFS, CIFS, iSCSI LUN, and Ceph RBD. The backup format is optimized for
storing VM backups quickly and efficiently (sparse files, out of order
data, minimized I/O).
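
Node-wide defaults for `vzdump` can be set in `/etc/vzdump.conf`. The
fragment below is an illustrative sketch only: the storage name
`backup-nfs` is an assumption, and the values are examples rather than
recommendations.

----
# /etc/vzdump.conf - node-wide defaults for vzdump
# (illustrative values; the storage name "backup-nfs" is an assumption)
mode: snapshot
compress: lzo
storage: backup-nfs
bwlimit: 100000
----

Per-job options given on the command line or in the GUI override these
defaults.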


High Availability Cluster
-------------------------

A multi-node Proxmox VE HA Cluster enables the definition of highly
available virtual servers. The Proxmox VE HA Cluster is based on
proven Linux HA technologies, providing stable and reliable HA
services.


Flexible Networking
-------------------

Proxmox VE uses a bridged networking model. All VMs can share one
bridge, as if virtual network cables from each guest were all plugged
into the same switch. For connecting VMs to the outside world, bridges
are attached to physical network cards and assigned a TCP/IP
configuration.

For further flexibility, VLANs (IEEE 802.1q) and network
bonding/aggregation are possible. In this way it is possible to build
complex, flexible virtual networks for the Proxmox VE hosts,
leveraging the full power of the Linux network stack.
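
A minimal bridge definition in `/etc/network/interfaces` might look like
the following sketch; the bridge name `vmbr0` follows common convention,
while the physical NIC name (`eno1`) and the addresses are assumptions
made for the example.

----
# /etc/network/interfaces - illustrative bridge setup
# (NIC name eno1 and addresses are assumptions for this example)
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----

Guests attached to `vmbr0` then behave as if they were plugged into the
same physical switch as `eno1`.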


Integrated Firewall
-------------------

The integrated firewall allows you to filter network packets on
any VM or Container interface. Common sets of firewall rules can
be grouped into ``security groups''.


Why Open Source
---------------

Proxmox VE uses a Linux kernel and is based on the Debian GNU/Linux
distribution. The source code of Proxmox VE is released under the
http://www.gnu.org/licenses/agpl-3.0.html[GNU Affero General Public
License, version 3]. This means that you are free to inspect the
source code at any time or contribute to the project yourself.

At Proxmox we are committed to using open source software whenever
possible. Using open source software guarantees full access to all
functionalities - as well as high security and reliability. We think
that everybody should have the right to access the source code of
software to run it, build on it, or submit changes back to the
project. Everybody is encouraged to contribute, while Proxmox ensures
the product always meets professional quality criteria.

Open source software also helps to keep your costs low and makes your
core infrastructure independent from a single vendor.


Your benefits with {pve}
------------------------

* Open source software
* No vendor lock-in
* Linux kernel
* Fast installation and easy to use
* Web-based management interface
* REST API
* Huge active community
* Low administration costs and simple deployment

include::getting-help.adoc[]


Project History
---------------

The project started in 2007, followed by a first stable version in
2008. At the time we used OpenVZ for containers, and KVM for virtual
machines. The clustering features were limited, and the user interface
was simple (server-generated web pages).

But we quickly developed new features using the
http://corosync.github.io/corosync/[Corosync] cluster stack, and the
introduction of the new Proxmox Cluster File System (pmxcfs) was a big
step forward, because it completely hides the cluster complexity from
the user. Managing a cluster of 16 nodes is as simple as managing a
single node.

We also introduced a new REST API, with a complete declarative
specification written in JSON Schema. This enabled other people to
integrate {pve} into their infrastructure, and made it easy to provide
additional services.

Also, the new REST API made it possible to replace the original user
interface with a modern HTML5 application using JavaScript. We also
replaced the old Java-based VNC console code with
https://kanaka.github.io/noVNC/[noVNC]. So you only need a web browser
to manage your VMs.

The support for various storage types was another big task. Notably,
{pve} was the first distribution to ship ZFS on Linux by default in
2014. Another milestone was the ability to run and manage
http://ceph.com/[Ceph] storage on the hypervisor nodes. Such setups
are extremely cost-effective.

When we started we were among the first companies providing
commercial support for KVM. The KVM project itself continuously
evolved, and is now a widely used hypervisor. New features arrive
with each release. We developed the KVM live backup feature, which
makes it possible to create snapshot backups on any storage type.

The most notable change with version 4.0 was the move from OpenVZ to
https://linuxcontainers.org/[LXC]. Containers are now deeply
integrated, and they can use the same storage and network features
as virtual machines.

include::howto-improve-pve-docs.adoc[]
include::translation.adoc[]