Introduction
============

{pve} is a platform to run virtual machines and containers. It is
based on Debian Linux, and completely open source. For maximum
flexibility, we implemented two virtualization technologies -
Kernel-based Virtual Machine (KVM) and container-based virtualization
(LXC).

One main design goal was to make administration as easy as
possible. You can use {pve} on a single node, or assemble a cluster of
many nodes. All management tasks can be done using our web-based
management interface, and even a novice user can set up and install
{pve} within minutes.

image::images/pve-software-stack.svg["Proxmox Software Stack",align="center"]


Central Management
------------------

While many people start with a single node, {pve} can scale out to a
large set of clustered nodes. The cluster stack is fully integrated
and ships with the default installation.

Unique Multi-master Design::

The integrated web-based management interface gives you a clean
overview of all your KVM guests and Linux containers and even of your
whole cluster. You can easily manage your VMs and containers, storage
or cluster from the GUI. There is no need to install a separate,
complex, and pricey management server.

Proxmox Cluster File System (pmxcfs)::

Proxmox VE uses the unique Proxmox Cluster file system (pmxcfs), a
database-driven file system for storing configuration files. This
enables you to store the configuration of thousands of virtual
machines. By using corosync, these files are replicated in real time
on all cluster nodes. The file system stores all data inside a
persistent database on disk; nonetheless, a copy of the data resides
in RAM, which limits the maximum storage size to 30MB - more than
enough for thousands of VMs.
+
Proxmox VE is the only virtualization platform using this unique
cluster file system.

Web-based Management Interface::

Proxmox VE is simple to use. Management tasks can be done via the
included web-based management interface - there is no need to install a
separate management tool or any additional management node with huge
databases. The multi-master tool allows you to manage your whole
cluster from any node of your cluster. The central web-based
management - based on the ExtJS JavaScript framework - empowers
you to control all functionalities from the GUI and overview the history
and syslogs of each single node. This includes running backup or
restore jobs, live migration or HA triggered activities.

Command Line::

For advanced users who are used to the comfort of the Unix shell or
Windows PowerShell, Proxmox VE provides a command line interface to
manage all the components of your virtual environment. This command
line interface has intelligent tab completion and full documentation
in the form of UNIX man pages.

REST API::

Proxmox VE uses a RESTful API. We chose JSON as the primary data format,
and the whole API is formally defined using JSON Schema. This enables
fast and easy integration for third-party management tools like custom
hosting environments.
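+
Because the API is plain HTTPS with JSON responses, any language's
standard library suffices for integration. The sketch below assembles,
without sending, the login request that yields an authentication
ticket; the host name and credentials are placeholders, only the
`/api2/json/access/ticket` path follows the documented API layout:

```python
import urllib.parse
import urllib.request

# Placeholder host - substitute a real Proxmox VE node address.
API_BASE = "https://pve.example.com:8006/api2/json"

def build_ticket_request(username: str, password: str) -> urllib.request.Request:
    """Assemble (but do not send) the POST that requests an
    authentication ticket from the access/ticket endpoint."""
    body = urllib.parse.urlencode({"username": username, "password": password})
    return urllib.request.Request(
        f"{API_BASE}/access/ticket",
        data=body.encode("utf-8"),
        method="POST",
    )

req = build_ticket_request("root@pam", "secret")
print(req.full_url)      # https://pve.example.com:8006/api2/json/access/ticket
print(req.get_method())  # POST
```

Sending this request with `urllib.request.urlopen` would be expected to
return a JSON object carrying the ticket and a CSRF prevention token
for use in subsequent API calls.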

Role-based Administration::

You can define granular access for all objects (like VMs, storages,
nodes, etc.) by using the role-based user and permission
management. This allows you to define privileges and helps you to
control access to objects. This concept is also known as access
control lists: each permission specifies a subject (a user or group)
and a role (set of privileges) on a specific path.
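+
To make the subject/role/path triple concrete, here is a small
self-contained model of how such a check could work. This illustrates
the concept only and is not the actual {pve} permission engine; the
role and privilege names mirror built-in ones, but the propagation
logic is deliberately simplified:

```python
# role -> set of privileges (simplified; real roles carry more entries)
ROLES = {
    "PVEVMUser": {"VM.PowerMgmt", "VM.Console"},
    "PVEAdmin": {"VM.PowerMgmt", "VM.Console", "VM.Allocate", "Datastore.Allocate"},
}

# (subject, path) -> role; a grant on a path applies to everything below it
ACL = {
    ("alice", "/vms/100"): "PVEVMUser",
    ("admins", "/"): "PVEAdmin",
}

def has_privilege(subject: str, path: str, privilege: str) -> bool:
    """True if an ACL entry on the path or any ancestor grants
    `subject` a role containing `privilege`."""
    parts = path.strip("/").split("/") if path != "/" else []
    ancestors = ["/"] + ["/" + "/".join(parts[: i + 1]) for i in range(len(parts))]
    return any(
        privilege in ROLES[ACL[(subject, p)]]
        for p in ancestors
        if (subject, p) in ACL
    )

print(has_privilege("alice", "/vms/100", "VM.Console"))  # True
print(has_privilege("alice", "/vms/101", "VM.Console"))  # False
```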

Authentication Realms::

Proxmox VE supports multiple authentication sources like Microsoft
Active Directory, LDAP, Linux PAM standard authentication or the
built-in Proxmox VE authentication server.


Flexible Storage
----------------

The Proxmox VE storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or SAN. There are no limits; you may configure as
many storage definitions as you like. You can use all storage
technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images.

We currently support the following network storage types:

* LVM Group (network backing with iSCSI targets)
* iSCSI target
* NFS Share
* Ceph RBD
* Directly use iSCSI LUNs
* GlusterFS

Local storage types supported are:

* LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
* Directory (storage on existing filesystem)
* ZFS
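
Storage definitions are kept in a single declarative file. As an
illustrative sketch (the storage IDs, paths, and server address are
placeholder values), a minimal `/etc/pve/storage.cfg` combining a local
directory storage with an NFS share could look like this:

```
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

nfs: shared-nfs
        server 192.168.1.10
        export /export/pve
        content images
```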

Integrated Backup and Restore
-----------------------------

The integrated backup tool (vzdump) creates consistent snapshots of
running containers and KVM guests. It basically creates an archive of
the VM or CT data which includes the VM/CT configuration files.

KVM live backup works for all storage types, including VM images on
NFS, iSCSI LUN, Ceph RBD or Sheepdog. The new backup format is
optimized for storing VM backups quickly and effectively (sparse files,
out of order data, minimized I/O).

High Availability Cluster
-------------------------

A multi-node Proxmox VE HA Cluster enables the definition of highly
available virtual servers. The Proxmox VE HA Cluster is based on
proven Linux HA technologies, providing stable and reliable HA
services.

Flexible Networking
-------------------

Proxmox VE uses a bridged networking model. All VMs can share one
bridge, as if virtual network cables from each guest were all plugged
into the same switch. For connecting VMs to the outside world, bridges
are attached to physical network cards that are assigned a TCP/IP
configuration.

For further flexibility, VLANs (IEEE 802.1q) and network
bonding/aggregation are possible. In this way it is possible to build
complex, flexible virtual networks for the Proxmox VE hosts,
leveraging the full power of the Linux network stack.
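
Since {pve} networking builds on standard Debian tooling, a bridge is
just an `ifupdown` stanza. As a sketch (the interface name and
addresses are example values, not defaults), a typical
`/etc/network/interfaces` entry for a bridge attached to one physical
NIC might read:

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
```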

Integrated Firewall
-------------------

The integrated firewall allows you to filter network packets on
any VM or container interface. Common sets of firewall rules can be
grouped into 'security groups'.
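
Firewall rules are themselves stored as plain text. As an illustrative
sketch (the VMID, network addresses, and rule set are placeholders), a
per-VM rule file such as `/etc/pve/firewall/100.fw` could accept SSH
from a trusted subnet and drop other inbound traffic:

```
[OPTIONS]
enable: 1

[RULES]
IN SSH(ACCEPT) -source 192.168.10.0/24
IN DROP
```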

Why Open Source
---------------

Proxmox VE uses a Linux kernel and is based on the Debian GNU/Linux
Distribution. The source code of Proxmox VE is released under the
http://www.gnu.org/licenses/agpl-3.0.html[GNU Affero General Public
License, version 3]. This means that you are free to inspect the
source code at any time or contribute to the project yourself.

At Proxmox we are committed to using open source software whenever
possible. Using open source software guarantees full access to all
functionalities - as well as high security and reliability. We think
that everybody should have the right to access the source code of
software to run it, build on it, or submit changes back to the
project. Everybody is encouraged to contribute, while Proxmox ensures
the product always meets professional quality criteria.

Open source software also helps to keep your costs low and makes your
core infrastructure independent from a single vendor.

Your benefit with {pve}
-----------------------

* Open source software
* No vendor lock-in
* Linux kernel
* Fast installation and easy to use
* Web-based management interface
* REST API
* Huge active community
* Low administration costs and simple deployment

Project History
---------------

The project started in 2007, followed by a first stable version in
2008. At that time we used OpenVZ for containers, and KVM for virtual
machines. The clustering features were limited, and the user interface
was a simple server-generated web page.

But we quickly developed new features using the
http://corosync.github.io/corosync/[Corosync] cluster stack, and the
introduction of the new Proxmox cluster file system (pmxcfs) was a big
step forward, because it completely hides the cluster complexity from
the user. Managing a cluster of 16 nodes is as simple as managing a
single node.

We also introduced a new REST API, with a complete declarative
specification written in JSON-Schema. This enabled other people to
integrate {pve} into their infrastructure, and made it easy to provide
additional services.

Also, the new REST API made it possible to replace the original user
interface with a modern HTML5 application using JavaScript. We also
replaced the old Java-based VNC console code with
https://kanaka.github.io/noVNC/[noVNC]. So you only need a web browser
to manage your VMs.

The support for various storage types was another big task. Notably,
{pve} was the first distribution to ship ZFS on Linux by default in
2014. Another milestone was the ability to run and manage
http://ceph.com/[Ceph] storage on the hypervisor nodes. Such setups
are extremely cost effective.

When we started, we were among the first companies providing
commercial support for KVM. The KVM project itself continuously
evolved, and is now a widely used hypervisor. New features arrive
with each release. We developed the KVM live backup feature, which
makes it possible to create snapshot backups on any storage type.

The most notable change with version 4.0 was the move from OpenVZ to
https://linuxcontainers.org/[LXC]. Containers are now deeply
integrated, and they can use the same storage and network features
as virtual machines.