Installing {pve}
----------------

{pve} ships as a set of Debian packages, so you can simply install it
on top of a normal Debian installation. After configuring the
repositories, you need to run:

[source,bash]
----
apt-get update
apt-get install proxmox-ve
----
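
The repository configuration mentioned above is a plain APT source
entry. As a rough sketch (the suite name 'bookworm' and the choice of
the no-subscription repository are assumptions here; adjust them to
your Debian release and subscription status), it could look like this:

----
# /etc/apt/sources.list.d/pve-install-repo.list
# Suite name ('bookworm') and repository component
# ('pve-no-subscription') are assumptions; adjust to your setup.
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
----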

While this looks easy, it presumes that you have correctly installed
the base system, and that you know how you want to configure and use
the local storage. Network configuration is also completely up to you.

In general, this is not trivial, especially when you use LVM or
ZFS. This is why we provide an installation CD-ROM for {pve}. The
installer just asks you a few questions, then partitions the local
disk(s), installs all required packages, and configures the system,
including a basic network setup. You can get a fully functional system
within a few minutes, including the following:

* Complete operating system (Debian Linux, 64-bit)
* Hard drive partitioned with ext4 (alternatively ext3 or xfs) or ZFS
* {pve} kernel with LXC and KVM support
* Complete toolset
* Web-based management interface

NOTE: By default, the complete server is used and all existing data is
removed.

Using the {pve} Installation CD-ROM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Please insert the installation CD-ROM, then boot from that
drive. Immediately afterwards, you can choose from the following menu
options:

Install Proxmox VE::

Start the normal installation.

Install Proxmox VE (Debug mode)::

Start the installation in debug mode. It opens a shell console at
several installation steps, so that you can debug things if something
goes wrong. Press `CTRL-D` to exit those debug consoles and continue
the installation. This option is mostly for developers and not meant
for general use.

Rescue Boot::

This option allows you to boot an existing installation. It searches
all attached hard disks, and if it finds an existing installation,
boots directly into that disk using the existing Linux kernel. This
can be useful if there are problems with the boot block (grub), or the
BIOS is unable to read the boot block from the disk.

Test Memory::

Runs 'memtest86+'. This is useful to check whether your memory is
functional and error-free.

You normally select *Install Proxmox VE* to start the installation.
After that, you are prompted to select the target hard disk(s). The
`Options` button lets you select the target file system, which
defaults to `ext4`. The installer uses LVM if you select 'ext3',
'ext4' or 'xfs' as the file system, and offers an additional option to
restrict LVM space (see <<advanced_lvm_options,below>>).

If you have more than one disk, you can also use ZFS as the file
system. ZFS supports several software RAID levels, so it is especially
useful if you do not have a hardware RAID controller. The `Options`
button lets you select the ZFS RAID level, and you can choose the
disks there.

The next pages just ask for basic configuration options like the time
zone and keyboard layout. You also need to specify your email address
and select a superuser password.

The last step is the network configuration. Please note that you can
use either IPv4 or IPv6 here, but not both. If you want to configure a
dual-stack node, you can easily do that after installation.

If you press `Next` now, the installation starts to format the disks
and copies the packages to the target. Please wait until that is
finished, then reboot the server.
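
As an illustration of such a post-install dual-stack setup, you could
add an IPv6 stanza for the management bridge to
'/etc/network/interfaces'. This is a minimal sketch: the bridge name
`vmbr0` matches the installer's default, while the addresses are
placeholders you must replace with your own:

----
# Additional IPv6 stanza for the management bridge in
# /etc/network/interfaces (addresses are placeholders).
iface vmbr0 inet6 static
        address 2001:db8::2
        netmask 64
        gateway 2001:db8::1
----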

Further configuration is done via the Proxmox web interface. Just
point your browser to the IP address given during the installation
(https://youripaddress:8006). {pve} is tested with IE9, Firefox 10
and higher, and Google Chrome.
95 | ||
96 | [[advanced_lvm_options]] | |
97 | Advanced LVM configuration options | |
98 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
99 | ||
100 | The installer creates a Volume Group (VG) called `pve`, and additional | |
101 | Logical Volumes (LVs) called `root`, `data` and `swap`. The size of | |
102 | those volumes can be controlled with: | |
103 | ||

`hdsize`::

Defines the total hard disk size to be used. This way you can reserve
free space on the hard disk for further partitioning (for example, for
an additional PV and VG on the same hard disk that can be used for LVM
storage).

`swapsize`::

Defines the size of the `swap` volume. The default is the same size as
the installed RAM, with a minimum of 4GB and `hdsize/8` as the maximum.

`maxroot`::

Defines the size of the `root` volume, which stores the whole
operating system.

`maxvz`::

Defines the size of the `data` volume, which is mounted at
'/var/lib/vz'.

`minfree`::

Defines the amount of free space left in the LVM volume group `pve`.
16GB is the default if the available storage is larger than 128GB,
`hdsize/8` otherwise.
+
NOTE: LVM requires free space in the VG for snapshot creation (not
required for lvmthin snapshots).
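
The sizing defaults above can be sketched numerically. The following
is a simplified model of the rules described for `swapsize` and
`minfree` only (it is not the installer's actual code, and the example
values for RAM and disk size are arbitrary):

[source,bash]
----
#!/bin/sh
# Simplified sketch of the default sizing rules described above
# (not the installer's actual code). All sizes in GB.
ram_gb=32
hdsize_gb=500

# swapsize default: same as the installed RAM,
# clamped between 4GB and hdsize/8.
swap=$ram_gb
[ "$swap" -lt 4 ] && swap=4
max_swap=$((hdsize_gb / 8))
[ "$swap" -gt "$max_swap" ] && swap=$max_swap

# minfree default: 16GB if more than 128GB of storage is
# available, hdsize/8 otherwise.
if [ "$hdsize_gb" -gt 128 ]; then
    minfree=16
else
    minfree=$((hdsize_gb / 8))
fi

echo "swap=${swap}GB minfree=${minfree}GB"
----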


ZFS Performance Tips
^^^^^^^^^^^^^^^^^^^^

ZFS uses a lot of memory, so it is best to add an additional 8-16GB of
RAM if you want to use ZFS.

ZFS also provides the option to use a fast SSD as a write cache. This
write cache is called the ZFS Intent Log (ZIL). You can add it after
installation using the following command:

[source,bash]
----
zpool add <pool-name> log </dev/path_to_fast_ssd>
----