Introduction
============

{pve} is a platform to run virtual machines and containers. It is
based on Debian Linux, and completely open source. For maximum
flexibility, we implemented two virtualization technologies -
Kernel-based Virtual Machine (KVM) and container-based virtualization
(LXC).

One main design goal was to make administration as easy as
possible. You can use {pve} on a single node, or assemble a cluster of
many nodes. All management tasks can be done using our web-based
management interface, and even a novice user can set up and install
{pve} within minutes.

image::images/pve-software-stack.svg["Proxmox Software Stack",align="center"]


[[intro_central_management]]
Central Management
------------------

While many people start with a single node, {pve} can scale out to a
large set of clustered nodes. The cluster stack is fully integrated
and ships with the default installation.

Unique Multi-Master Design::

The integrated web-based management interface gives you a clean
overview of all your KVM guests and Linux containers, and even of your
whole cluster. You can easily manage your VMs and containers, storage,
or cluster from the GUI. There is no need to install a separate,
complex, and pricey management server.

Proxmox Cluster File System (pmxcfs)::

{pve} uses the unique Proxmox Cluster file system (pmxcfs), a
database-driven file system for storing configuration files. This
enables you to store the configuration of thousands of virtual
machines. By using corosync, these files are replicated in real time
to all cluster nodes. The file system stores all data inside a
persistent database on disk; nonetheless, a copy of the data resides
in RAM, which limits the maximum storage size to 30 MB - more than
enough for thousands of VMs.
+
{pve} is the only virtualization platform using this unique
cluster file system.

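+
As a quick illustration (paths follow the standard pmxcfs layout; the
specific files shown are examples):
+
[source,bash]
----
# pmxcfs is mounted at /etc/pve on every node; anything written here
# is replicated in real time to all cluster members via corosync
ls /etc/pve/qemu-server   # VM configuration files, e.g. 100.conf
ls /etc/pve/lxc           # container configuration files
cat /etc/pve/storage.cfg  # cluster-wide storage definitions
----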
Web-based Management Interface::

{pve} is simple to use. Management tasks can be done via the
included web-based management interface - there is no need to install a
separate management tool or any additional management node with huge
databases. The multi-master tool allows you to manage your whole
cluster from any node of your cluster. The central web-based
management - based on the ExtJS JavaScript framework - empowers
you to control all functionalities from the GUI and to view the history
and syslogs of each single node. This includes running backup or
restore jobs, live migration, or HA-triggered activities.
Command Line::

For advanced users who are used to the comfort of the Unix shell or
Windows PowerShell, {pve} provides a command-line interface to
manage all the components of your virtual environment. This command-line
interface has intelligent tab completion and full documentation
in the form of UNIX man pages.

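+
For example (a sketch; the exact output depends on your guests):
+
[source,bash]
----
qm list          # list the KVM virtual machines on this node
pct list         # list the Linux containers on this node
pvesh get /nodes # query the REST API directly from the shell
----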
REST API::

{pve} uses a RESTful API. We chose JSON as the primary data format,
and the whole API is formally defined using JSON Schema. This enables
fast and easy integration for third-party management tools like custom
hosting environments.

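+
As a sketch of what integration code looks like: API responses wrap
their payload in a top-level `data` field (the sample body below is
illustrative, not real output):
+
[source,python]
----
import json

# Sample body, shaped like a response from GET /api2/json/nodes
sample = ('{"data": [{"node": "pve1", "status": "online"},'
          ' {"node": "pve2", "status": "offline"}]}')

# The payload always sits under the "data" key
nodes = json.loads(sample)["data"]
online = [n["node"] for n in nodes if n["status"] == "online"]
print(online)  # -> ['pve1']
----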
Role-based Administration::

You can define granular access for all objects (like VMs, storages,
nodes, etc.) by using the role-based user and permission
management. This allows you to define privileges and helps you to
control access to objects. This concept is also known as access
control lists: each permission specifies a subject (a user or group)
and a role (set of privileges) on a specific path.

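+
For instance, granting a user the built-in `PVEVMAdmin` role on a
single VM might look like this (user name and VM ID are examples;
check `pveum` for the syntax of your release):
+
[source,bash]
----
pveum user add joe@pve
pveum acl modify /vms/100 --users joe@pve --roles PVEVMAdmin
----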
Authentication Realms::

{pve} supports multiple authentication sources, such as Microsoft
Active Directory, LDAP, Linux PAM standard authentication, or the
built-in {pve} authentication server.


Flexible Storage
----------------

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or SAN. There are no limits; you may configure as
many storage definitions as you like. You can use all storage
technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images.

We currently support the following network storage types:

* LVM Group (network backing with iSCSI targets)
* iSCSI target
* NFS Share
* CIFS Share
* Ceph RBD
* Direct use of iSCSI LUNs
* GlusterFS

The supported local storage types are:

* LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
* Directory (storage on an existing filesystem)
* ZFS

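All storage definitions live in a single, cluster-wide configuration
file. A minimal sketch (storage names, server address, and export path
are examples):

----
# /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

nfs: vm-store
        server 192.168.1.10
        export /export/vms
        content images,rootdir
----
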
Integrated Backup and Restore
-----------------------------

The integrated backup tool (`vzdump`) creates consistent snapshots of
running containers and KVM guests. It basically creates an archive of
the VM or CT data, which includes the VM/CT configuration files.

KVM live backup works for all storage types, including VM images on
NFS, CIFS, iSCSI LUN, and Ceph RBD. The backup format is optimized for
storing VM backups quickly and effectively (sparse files, out-of-order
data, minimized I/O).

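A typical invocation (guest ID and storage name are examples):

[source,bash]
----
# Consistent snapshot-mode backup of guest 100, zstd-compressed
vzdump 100 --mode snapshot --storage backup-store --compress zstd
----
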
High Availability Cluster
-------------------------

A multi-node {pve} HA Cluster enables the definition of highly
available virtual servers. The {pve} HA Cluster is based on
proven Linux HA technologies, providing stable and reliable HA
services.


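Placing a guest under HA management is a single command (the VM ID is
an example):

[source,bash]
----
ha-manager add vm:100 --state started  # keep VM 100 running
ha-manager status                      # show the HA manager state
----
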
Flexible Networking
-------------------

{pve} uses a bridged networking model. All VMs can share one
bridge, as if virtual network cables from each guest were all plugged
into the same switch. For connecting VMs to the outside world, bridges
are attached to physical network cards and assigned a TCP/IP
configuration.

For further flexibility, VLANs (IEEE 802.1q) and network
bonding/aggregation are possible. In this way it is possible to build
complex, flexible virtual networks for the {pve} hosts,
leveraging the full power of the Linux network stack.


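A bridge is configured with standard Debian network syntax. A minimal
sketch (interface name and addresses are examples):

----
# /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
----
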
Integrated Firewall
-------------------

The integrated firewall allows you to filter network packets on
any VM or container interface. Common sets of firewall rules can
be grouped into ``security groups''.

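A security group is defined once and can then be assigned to many
guests. A sketch (the group name and ports are examples):

----
# /etc/pve/firewall/cluster.fw
[group webserver]
IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443
----
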
include::hyper-converged-infrastructure.adoc[]


Why Open Source
---------------

{pve} uses a Linux kernel and is based on the Debian GNU/Linux
distribution. The source code of {pve} is released under the
https://www.gnu.org/licenses/agpl-3.0.html[GNU Affero General Public
License, version 3]. This means that you are free to inspect the
source code at any time and to contribute to the project yourself.

At Proxmox we are committed to using open source software whenever
possible. Using open source software guarantees full access to all
functionality, as well as high security and reliability. We think
that everybody should have the right to access the source code of
software to run it, build on it, or submit changes back to the
project. Everybody is encouraged to contribute, while Proxmox ensures
that the product always meets professional quality criteria.

Open source software also helps to keep your costs low and makes your
core infrastructure independent of a single vendor.


Your benefits with {pve}
------------------------

* Open source software
* No vendor lock-in
* Linux kernel
* Fast installation and ease of use
* Web-based management interface
* REST API
* Huge active community
* Low administration costs and simple deployment

include::getting-help.adoc[]


[[intro_project_history]]
Project History
---------------

The project started in 2007, followed by a first stable version in
2008. At the time we used OpenVZ for containers and KVM for virtual
machines. The clustering features were limited, and the user interface
was simple (a server-generated web page).

But we quickly developed new features using the
https://corosync.github.io/corosync/[Corosync] cluster stack, and the
introduction of the new Proxmox cluster file system (pmxcfs) was a big
step forward, because it completely hides the cluster complexity from
the user. Managing a cluster of 16 nodes is as simple as managing a
single node.

We also introduced a new REST API, with a complete declarative
specification written in JSON Schema. This enabled other people to
integrate {pve} into their infrastructure, and made it easy to provide
additional services.

The new REST API also made it possible to replace the original user
interface with a modern HTML5 application using JavaScript. We also
replaced the old Java-based VNC console code with
https://kanaka.github.io/noVNC/[noVNC]. So you only need a web browser
to manage your VMs.

The support for various storage types was another big task. Notably,
{pve} was the first distribution to ship ZFS on Linux by default, in
2014. Another milestone was the ability to run and manage
https://ceph.com/[Ceph] storage on the hypervisor nodes. Such setups
are extremely cost effective.

When we started, we were among the first companies providing
commercial support for KVM. The KVM project itself has continuously
evolved, and is now a widely used hypervisor. New features arrive
with each release. We developed the KVM live backup feature, which
makes it possible to create snapshot backups on any storage type.

The most notable change with version 4.0 was the move from OpenVZ to
https://linuxcontainers.org/[LXC]. Containers are now deeply
integrated, and they can use the same storage and network features
as virtual machines.

include::howto-improve-pve-docs.adoc[]
include::translation.adoc[]