Introduction
============

{pve} is a platform to run virtual machines and containers. It is
based on Debian Linux, and completely open source. For maximum
flexibility, we implemented two virtualization technologies -
Kernel-based Virtual Machine (KVM) and container-based virtualization
(LXC).
One main design goal was to make administration as easy as
possible. You can use {pve} on a single node, or assemble a cluster of
many nodes. All management tasks can be done using our web-based
management interface, and even a novice user can set up and install
{pve} within minutes.

image::images/pve-software-stack.svg["Proxmox Software Stack",align="center"]
Central Management
------------------

While many people start with a single node, {pve} can scale out to a
large set of clustered nodes. The cluster stack is fully integrated
and ships with the default installation.

Unique Multi-Master Design::

The integrated web-based management interface gives you a clean
overview of all your KVM guests and Linux containers and even of your
whole cluster. You can easily manage your VMs and containers, storage
or cluster from the GUI. There is no need to install a separate,
complex, and pricey management server.
Proxmox Cluster File System (pmxcfs)::

Proxmox VE uses the unique Proxmox Cluster file system (pmxcfs), a
database-driven file system for storing configuration files. This
enables you to store the configuration of thousands of virtual
machines. By using corosync, these files are replicated in real time
on all cluster nodes. The file system stores all data inside a
persistent database on disk; nonetheless, a copy of the data resides
in RAM, which limits the maximum storage size to 30 MB - more than
enough for thousands of VMs.
+
Proxmox VE is the only virtualization platform using this unique
cluster file system.
Web-based Management Interface::

Proxmox VE is simple to use. Management tasks can be done via the
included web-based management interface - there is no need to install
a separate management tool or any additional management node with huge
databases. The multi-master tool allows you to manage your whole
cluster from any node of your cluster. The central web-based
management - based on the JavaScript framework ExtJS - empowers you to
control all functionalities from the GUI and to view the history and
syslogs of each single node. This includes running backup or restore
jobs, live migration or HA triggered activities.
Command Line::

For advanced users who are used to the comfort of the Unix shell or
Windows PowerShell, Proxmox VE provides a command line interface to
manage all the components of your virtual environment. This command
line interface has intelligent tab completion and full documentation
in the form of UNIX man pages.
RESTful API::

Proxmox VE uses a RESTful API. We chose JSON as the primary data
format, and the whole API is formally defined using JSON Schema. This
enables fast and easy integration for third-party management tools
like custom hosting environments.
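Because the API is described by JSON Schema, a client can check its parameters before sending a request. The following Python sketch illustrates the idea with a hypothetical, heavily simplified schema fragment; the real {pve} schema is far larger and more precise, so treat this as an illustration of the concept, not the actual API definition:

```python
# Hypothetical, heavily simplified schema fragment for a "create VM"
# call -- the real Proxmox VE API schema is larger and more precise.
schema = {
    "vmid":   {"type": "integer", "minimum": 100},
    "name":   {"type": "string"},
    "memory": {"type": "integer", "minimum": 16},
}

def validate(params, schema):
    """Check params against a minimal JSON-Schema-like definition."""
    errors = []
    for key, rule in schema.items():
        if key not in params:
            errors.append(f"missing parameter: {key}")
            continue
        value = params[key]
        if rule["type"] == "integer" and not isinstance(value, int):
            errors.append(f"{key}: expected integer")
        elif rule["type"] == "string" and not isinstance(value, str):
            errors.append(f"{key}: expected string")
        if "minimum" in rule and isinstance(value, int) and value < rule["minimum"]:
            errors.append(f"{key}: below minimum {rule['minimum']}")
    return errors

print(validate({"vmid": 100, "name": "test-vm", "memory": 512}, schema))  # []
```

A real client would fetch the published schema and use a full JSON Schema validator instead of this hand-rolled check.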
Role-based Administration::

You can define granular access for all objects (like VMs, storages,
nodes, etc.) by using the role-based user and permission
management. This allows you to define privileges and helps you to
control access to objects. This concept is also known as access
control lists: each permission specifies a subject (a user or group)
and a role (set of privileges) on a specific path.
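The path-based permission model can be pictured with a short sketch. This is illustrative only: the role names resemble {pve}'s built-in roles, but the storage and propagation details are simplified assumptions, not the actual implementation:

```python
# Illustrative model only: each ACL entry grants a subject (user or
# group) a role (a set of privileges) on a path; a permission on a
# path also applies to everything below it (simplified propagation).
roles = {
    "PVEVMUser":  {"VM.PowerMgmt", "VM.Console"},
    "PVEVMAdmin": {"VM.PowerMgmt", "VM.Console", "VM.Allocate"},
}

acl = [
    ("alice", "PVEVMAdmin", "/vms"),      # admin on all VMs
    ("bob",   "PVEVMUser",  "/vms/100"),  # user on a single VM
]

def has_privilege(user, privilege, path):
    """True if some ACL entry grants `privilege` on `path` or a parent."""
    for subject, role, acl_path in acl:
        if subject != user:
            continue
        if path == acl_path or path.startswith(acl_path + "/"):
            if privilege in roles[role]:
                return True
    return False

print(has_privilege("bob", "VM.Console", "/vms/100"))   # True
print(has_privilege("bob", "VM.Allocate", "/vms/100"))  # False
```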
Authentication Realms::

Proxmox VE supports multiple authentication sources like Microsoft
Active Directory, LDAP, Linux PAM standard authentication or the
built-in Proxmox VE authentication server.
Flexible Storage
----------------

The Proxmox VE storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or SAN. There are no limits; you may configure as
many storage definitions as you like. You can use all storage
technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images.
We currently support the following network storage types:

* LVM Group (network backing with iSCSI targets)
* iSCSI target
* NFS Share
* Ceph RBD
* Directly use iSCSI LUNs
* GlusterFS

Local storage types supported are:

* LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
* Directory (storage on existing filesystem)
* ZFS
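Storage definitions live in `/etc/pve/storage.cfg`. As a hedged illustration, a local directory storage plus a hypothetical NFS share could look like the following sketch (the storage names, server address, export path and content types are placeholders, not recommendations):

```
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

nfs: shared-nfs
        server
        export /srv/vmstore
        path /mnt/pve/shared-nfs
        content images
```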
Integrated Backup and Restore
-----------------------------

The integrated backup tool (`vzdump`) creates consistent snapshots of
running Containers and KVM guests. It basically creates an archive of
the VM or CT data which includes the VM/CT configuration files.

KVM live backup works for all storage types including VM images on
NFS, iSCSI LUN, Ceph RBD or Sheepdog. The new backup format is
optimized for storing VM backups quickly and effectively (sparse
files, out-of-order data, minimized I/O).
High Availability Cluster
-------------------------

A multi-node Proxmox VE HA Cluster enables the definition of highly
available virtual servers. The Proxmox VE HA Cluster is based on
proven Linux HA technologies, providing stable and reliable HA
services.
Flexible Networking
-------------------

Proxmox VE uses a bridged networking model. All VMs can share one
bridge, as if virtual network cables from each guest were all plugged
into the same switch. For connecting VMs to the outside world, bridges
are attached to physical network cards that are assigned a TCP/IP
configuration.

For further flexibility, VLANs (IEEE 802.1q) and network
bonding/aggregation are possible. In this way it is possible to build
complex, flexible virtual networks for the Proxmox VE hosts,
leveraging the full power of the Linux network stack.
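On the host, this maps to standard Debian network configuration. A typical single-bridge setup in `/etc/network/interfaces` might look like the following sketch (the interface name, addresses and bridge options are placeholders for your environment):

```
auto vmbr0
iface vmbr0 inet static
        address
        gateway
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```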
Integrated Firewall
-------------------

The integrated firewall allows you to filter network packets on
any VM or Container interface. Common sets of firewall rules can
be grouped into ``security groups''.
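A security group is defined once at the cluster level and can then be referenced from individual guest firewall configurations. A hypothetical group in `/etc/pve/firewall/cluster.fw` might look like this (the group name and ports are examples only):

```
[group webserver]
IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443
```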
Why Open Source
---------------

Proxmox VE uses a Linux kernel and is based on the Debian GNU/Linux
distribution. The source code of Proxmox VE is released under the
GNU Affero General Public License, version 3. This means that you are
free to inspect the source code at any time or contribute to the
project yourself.
At Proxmox we are committed to using open source software whenever
possible. Using open source software guarantees full access to all
functionalities - as well as high security and reliability. We think
that everybody should have the right to access the source code of
software to run it, build on it, or submit changes back to the
project. Everybody is encouraged to contribute, while Proxmox ensures
the product always meets professional quality criteria.

Open source software also helps to keep your costs low and makes your
core infrastructure independent from a single vendor.
Your benefit with {pve}
-----------------------

* Open source software
* No vendor lock-in
* Linux kernel
* Fast installation and easy to use
* Web-based management interface
* REST API
* Huge active community
* Low administration costs and simple deployment

include::getting-help.adoc[]
Project History
---------------

The project started in 2007, followed by a first stable version in
2008. At the time we used OpenVZ for containers, and KVM for virtual
machines. The clustering features were limited, and the user interface
was simple (server generated web page).
But we quickly developed new features using the Corosync cluster
stack, and the introduction of the new Proxmox cluster file system
(pmxcfs) was a big step forward, because it completely hides the
cluster complexity from the user. Managing a cluster of 16 nodes is
as simple as managing a single node.
We also introduced a new REST API, with a complete declarative
specification written in JSON-Schema. This enabled other people to
integrate {pve} into their infrastructure, and made it easy to provide
additional services.
Also, the new REST API made it possible to replace the original user
interface with a modern HTML5 application using JavaScript. We also
replaced the old Java-based VNC console code with noVNC. So you only
need a web browser to manage your VMs.
The support for various storage types is another big task. Notably,
{pve} was the first distribution to ship ZFS on Linux by default in
2014. Another milestone was the ability to run and manage Ceph
storage on the hypervisor nodes. Such setups are extremely cost
effective.
When we started we were among the first companies providing
commercial support for KVM. The KVM project itself continuously
evolved, and is now a widely used hypervisor. New features arrive
with each release. We developed the KVM live backup feature, which
makes it possible to create snapshot backups on any storage type.
The most notable change with version 4.0 was the move from OpenVZ to
LXC. Containers are now deeply integrated, and they can use the same
storage and network features as virtual machines.
Improving the {pve} documentation
---------------------------------

Depending on which issue you want to improve, you can use a variety of
communication channels to reach the developers.

If you notice an error in the current documentation, use the Proxmox
bug tracker and propose an alternative text/wording.
If you want to propose new content, it depends on what you want to
document:

* If the content is specific to your setup, a wiki article is the best
option. For instance, if you want to document specific options for guest
systems, like which combination of Qemu drivers works best with a less popular
OS, this is a perfect fit for a wiki article.
* If you think the content is generic enough to be of interest for all users,
then you should try to get it into the reference documentation. The reference
documentation is written in the easy-to-use AsciiDoc document format.
Editing the official documentation requires cloning the git repository at
`git://` and then following the
instructions in the README.adoc document.
Improving the documentation is just as easy as editing a Wikipedia
article and is an interesting foray into the development of a large
open source project.
NOTE: If you are interested in working on the {pve} codebase, the
Developer Documentation wiki article will show you where to start.