Installing {pve}
----------------

{pve} ships as a set of Debian packages, so you can simply install it
on top of a normal Debian installation. After configuring the
repositories, you need to run:

[source,bash]
----
apt-get update
apt-get install proxmox-ve
----
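
The repositories can be configured, for example, like this (a minimal
sketch, assuming the publicly available `pve-no-subscription`
repository; replace the Debian codename with the one of your base
installation, and make sure the corresponding Proxmox repository
signing key is installed as well):

[source,bash]
----
# Example only: enable the pve-no-subscription repository
echo "deb http://download.proxmox.com/debian/pve <debian-codename> pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-install-repo.list
----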

While this looks easy, it presumes that you have correctly installed
the base system, and you know how you want to configure and use the
local storage. Network configuration is also completely up to you.

In general, this is not trivial, especially when you use LVM or
ZFS. This is why we provide an installation CD-ROM for {pve}. That
installer just asks you a few questions, then partitions the local
disk(s), installs all required packages, and configures the system
including a basic network setup. You can get a fully functional system
within a few minutes, including the following:

* Complete operating system (Debian Linux, 64-bit)
* Partitioned hard drive with ext4 (alternatively ext3 or xfs) or ZFS
* {pve} Kernel with LXC and KVM support
* Complete toolset
* Web based management interface

NOTE: By default, the complete server is used and all existing data is
removed.

Using the {pve} Installation CD-ROM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Please insert the installation CD-ROM, then boot from that
drive. Immediately afterwards you can choose the following menu
options:

Install Proxmox VE::

Start normal installation.

Install Proxmox VE (Debug mode)::

Start installation in debug mode. It opens a shell console at several
installation steps, so that you can debug things if something goes
wrong. Please press `CTRL-D` to exit those debug consoles and continue
installation. This option is mostly for developers and not meant for
general use.

Rescue Boot::

This option allows you to boot an existing installation. It searches
all attached hard disks, and if it finds an existing installation,
boots directly into that disk using the existing Linux kernel. This
can be useful if there are problems with the boot block (grub), or the
BIOS is unable to read the boot block from the disk.

Test Memory::

Runs 'memtest86+'. This is useful to check if your memory is
functional and error free.

You normally select *Install Proxmox VE* to start the installation.
After that you get prompted to select the target hard disk(s). The
`Options` button lets you select the target file system, which
defaults to `ext4`. The installer uses LVM if you select 'ext3',
'ext4' or 'xfs' as file system, and offers an additional option to
restrict the LVM space (see <<advanced_lvm_options,below>>).

If you have more than one disk, you can also use ZFS as file system.
ZFS supports several software RAID levels, so this is especially
useful if you do not have a hardware RAID controller. The `Options`
button lets you select the ZFS RAID level, and you can choose disks there.
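
After the installation you can, for example, verify the pool layout
and the selected RAID level from the command line (a minimal sketch;
it assumes the installer created a pool with the default name
`rpool`):

[source,bash]
----
# Show the pool layout, including the RAID level chosen during installation
zpool status rpool

# Overall capacity and health at a glance
zpool list
----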

The next pages just ask for basic configuration options like time
zone and keyboard layout. You also need to specify your email address
and select a superuser password.

The last step is the network configuration. Please note that you can
use either IPv4 or IPv6 here, but not both. If you want to configure a
dual stack node, you can easily do that after installation.
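
For example, a second address family could be added to the management
bridge after installation (a sketch only; it assumes the default
bridge name `vmbr0` and uses an address from the IPv6 documentation
range; for a permanent setup, add the address to
'/etc/network/interfaces' instead):

[source,bash]
----
# Temporarily add an IPv6 address to the existing management bridge
ip -6 addr add 2001:db8::10/64 dev vmbr0
----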

If you press `Next` now, the installation starts to format the disks
and copies packages to the target. Please wait until that is
finished, then reboot the server.

Further configuration is done via the Proxmox web interface. Just
point your browser to the IP address given during installation
(https://youripaddress:8006). {pve} is tested for IE9, Firefox 10
and higher, and Google Chrome.
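
If the interface does not load, a simple reachability check from
another machine can help (a sketch; it uses the same address
placeholder as above, and `-k` skips certificate verification because
the default certificate is self-signed):

[source,bash]
----
# Check that the web interface answers on port 8006
curl -k https://youripaddress:8006
----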


[[advanced_lvm_options]]
Advanced LVM configuration options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The installer creates a Volume Group (VG) called `pve`, and additional
Logical Volumes (LVs) called `root`, `data` and `swap`. The size of
those volumes can be controlled with:

`hdsize`::

Defines the total HD size to be used. This way you can save free
space on the HD for further partitioning (e.g. for an additional PV
and VG on the same hard disk that can be used for LVM storage).

`swapsize`::

Defines the size of the `swap` volume. Default is the same size as
the installed RAM, with 4GB minimum and `hdsize/8` as maximum.

`maxroot`::

Defines the size of the `root` volume. The `root` volume stores the
whole operating system.

`maxvz`::

Defines the size of the `data` volume, which is mounted at
'/var/lib/vz'.

`minfree`::

Defines the amount of free space left in the LVM volume group `pve`.
16GB is the default if the available storage is larger than 128GB,
`hdsize/8` otherwise.
+
NOTE: LVM requires free space in the VG for snapshot creation (not
required for lvmthin snapshots).
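
After the installation, the resulting layout can be checked with the
standard LVM tools (a sketch; the VG and LV names follow the defaults
described above):

[source,bash]
----
# Show total and free space in the 'pve' volume group
vgs pve

# List the 'root', 'swap' and 'data' logical volumes and their sizes
lvs pve
----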


ZFS Performance Tips
^^^^^^^^^^^^^^^^^^^^

ZFS uses a lot of memory, so it is best to add an additional 8-16GB of
RAM if you want to use ZFS.
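
If the host is short on memory, one common approach is to limit the
size of the ZFS ARC cache (a sketch, assuming a cap of 8GB; the value
is given in bytes and should be adjusted to your system):

[source,bash]
----
# Limit the ZFS ARC to 8GB so enough RAM stays available for guests
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf

# Rebuild the initramfs so the limit is already applied early at boot
# (relevant when the root file system itself is on ZFS)
update-initramfs -u
----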

ZFS also allows you to use a fast SSD drive as a write cache. This
write cache is called the ZFS Intent Log (ZIL). You can add it after
installation using the following command:

 zpool add <pool-name> log </dev/path_to_fast_ssd>
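
Afterwards, the log device should show up in the pool layout; one way
to verify this (using the same placeholder as above):

 zpool status <pool-name>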