[[chapter_pct]]
ifdef::manvolnum[]
pct(1)
======
:pve-toplevel:

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE


SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Linux Container
endif::wiki[]

Containers are a lightweight alternative to fully virtualized machines (VMs).
They use the kernel of the host system that they run on, instead of emulating a
full operating system (OS). This means that containers can access resources on
the host system directly.

The runtime cost for containers is low, usually negligible. However, there
are some drawbacks that need to be considered:

* Only Linux distributions can be run in containers. (It is not
possible to run FreeBSD or MS Windows inside a container.)

* For security reasons, access to host resources needs to be restricted.
Containers run in their own separate namespaces. Additionally some syscalls
are not allowed within containers.

{pve} uses https://linuxcontainers.org/[LXC] as its underlying container
technology. The ``Proxmox Container Toolkit'' (`pct`) simplifies the usage of
LXC containers.

Containers are tightly integrated with {pve}. This means that they are aware of
the cluster setup, and they can use the same network and storage resources as
virtual machines. You can also use the {pve} firewall, or manage containers
using the HA framework.

Our primary goal is to offer an environment as one would get from a VM, but
without the additional overhead. We call this ``System Containers''.

NOTE: If you want to run micro-containers (with docker, rkt, etc.), it is
best to run them inside a VM.


Technology Overview
-------------------

* LXC (https://linuxcontainers.org/)

* Integrated into {pve} graphical user interface (GUI)

* Easy to use command line tool `pct`

* Access via {pve} REST API

* lxcfs to provide containerized /proc file system

* CGroups (control groups) for resource allocation

* AppArmor/Seccomp to improve security

* Modern Linux kernels

* Image based deployment (templates)

* Uses {pve} storage library

* Container setup from host (network, DNS, storage, etc.)

Security Considerations
-----------------------

Containers use the kernel of the host system. This creates a big attack
surface for malicious users. This should be considered if containers are
provided to untrustworthy people. In general, full virtual machines provide
better isolation.

However, LXC uses many security features like AppArmor, CGroups and kernel
namespaces to reduce the attack surface.

AppArmor profiles are used to restrict access to possibly dangerous actions.
Some system calls, e.g. `mount`, are prohibited from execution.

To trace AppArmor activity, use:

----
# dmesg | grep apparmor
----

Guest Operating System Configuration
------------------------------------

{pve} tries to detect the Linux distribution in the container, and modifies
some files. Here is a short list of things done at container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

Changes made by {pve} are enclosed by comment markers:

----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the file. If such a
section already exists, it will be updated in place and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.` file for
it. For instance, if the file `/etc/.pve-ignore.hosts` exists then the
`/etc/hosts` file will not be touched. This can be a simple empty file created
via:

----
# touch /etc/.pve-ignore.hosts
----

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications by
manually setting the `ostype` to `unmanaged`.
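
For example, to turn off all of these startup modifications for a container,
you could set the OS type from the command line (the container ID `100` here
is illustrative):

----
# pct set 100 -ostype unmanaged
----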

OS type detection is done by testing for certain files inside the
container:

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the auto
detected type.

[[pct_container_images]]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything to run a
container. `pct` uses them to create a new container, for example:

----
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

{pve} itself provides a variety of basic templates for the most common
Linux distributions. They can be downloaded using the GUI or the
`pveam` (short for {pve} Appliance Manager) command line utility.
Additionally, https://www.turnkeylinux.org/[TurnKey Linux]
container templates are also available to download.

The list of available templates is updated daily via cron. To trigger it
manually:

----
# pveam update
----

To view the list of available images run:

----
# pveam available
----

You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system          alpine-3.10-default_20190626_amd64.tar.xz
system          alpine-3.9-default_20190224_amd64.tar.xz
system          archlinux-base_20190924-1_amd64.tar.gz
system          centos-6-default_20191016_amd64.tar.xz
system          centos-7-default_20190926_amd64.tar.xz
system          centos-8-default_20191016_amd64.tar.xz
system          debian-10.0-standard_10.0-1_amd64.tar.gz
system          debian-8.0-standard_8.11-1_amd64.tar.gz
system          debian-9.0-standard_9.7-1_amd64.tar.gz
system          fedora-30-default_20190718_amd64.tar.xz
system          fedora-31-default_20191029_amd64.tar.xz
system          gentoo-current-default_20190718_amd64.tar.xz
system          opensuse-15.0-default_20180907_amd64.tar.xz
system          opensuse-15.1-default_20190719_amd64.tar.xz
system          ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
system          ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
system          ubuntu-19.04-standard_19.04-1_amd64.tar.gz
system          ubuntu-19.10-standard_19.10-1_amd64.tar.gz
----
231 | ||
a8e99754 | 232 | Before you can use such a template, you need to download them into one |
8c1189b6 | 233 | of your storages. You can simply use storage `local` for that |
3a6fa247 DM |
234 | purpose. For clustered installations, it is preferred to use a shared |
235 | storage so that all nodes can access those images. | |
236 | ||
14e97811 OB |
237 | ---- |
238 | # pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz | |
239 | ---- | |
3a6fa247 | 240 | |
24f73a63 | 241 | You are now ready to create containers using that image, and you can |
8c1189b6 | 242 | list all downloaded images on storage `local` with: |
24f73a63 DM |
243 | |
244 | ---- | |
245 | # pveam list local | |
14e97811 | 246 | local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz 219.95MB |
24f73a63 DM |
247 | ---- |
248 | ||
a8e99754 | 249 | The above command shows you the full {pve} volume identifiers. They include |
24f73a63 | 250 | the storage name, and most other {pve} commands can use them. For |
5eba0743 | 251 | example you can delete that image later with: |
24f73a63 | 252 | |
14e97811 OB |
253 | ---- |
254 | # pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz | |
255 | ---- | |
d61bab51 | 256 | |
[[pct_container_storage]]
Container Storage
-----------------

The {pve} LXC container storage model is more flexible than traditional
container storage models. A container can have multiple mount points. This
makes it possible to use the best suited storage for each application.

For example the root file system of the container can be on slow and cheap
storage while the database can be on fast and distributed storage via a second
mount point. See section <<pct_mount_points, Mount Points>> for further
details.

Any storage type supported by the {pve} storage library can be used. This
means that containers can be stored on local (for example `lvm`, `zfs` or
directory), shared external (like `iSCSI`, `NFS`) or even distributed storage
systems like Ceph. Advanced storage features like snapshots or clones can be
used if the underlying storage supports them. The `vzdump` backup tool can use
snapshots to provide consistent container backups.

Furthermore, local devices or local directories can be mounted directly using
'bind mounts'. This gives access to local resources inside a container with
practically zero overhead. Bind mounts can be used as an easy way to share
data between containers.


FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer subsystem
the usage of FUSE mounts inside a container is strongly advised against, as
containers need to be frozen for suspend or snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.

Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk space
that each user can use.

NOTE: This only works on ext4 image based storage types and currently only
works with privileged containers.

Activating the `quota` option causes the following mount options to be used
for a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like on any other system. You can initialize the
`/aquota.user` and `/aquota.group` files by running:

----
# quotacheck -cmug /
# quotaon /
----

Then edit the quotas using the `edquota` command. Refer to the documentation
of the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing the
mount point's path instead of just `/`.


Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.

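As a brief sketch, ACLs are managed inside the container with the standard
tools; the path and user below are placeholders:

----
# setfacl -m u:www-data:rw- /srv/app/data
# getfacl /srv/app/data
----
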
Backup of Container mount points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To include a mount point in backups, enable the `backup` option for it in the
container configuration. For an existing mount point `mp0`

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
----

add `backup=1` to enable it.

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----

NOTE: When creating a new mount point in the GUI, this option is enabled by
default.

To disable backups for a mount point, add `backup=0` in the way described
above, or uncheck the *Backup* checkbox on the GUI.
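
The same option can also be toggled from the command line with `pct set`,
passing the complete mount point property; the storage and volume names below
are illustrative:

----
# pct set 100 -mp0 guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----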

Replication of Containers mount points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, additional mount points are replicated when the Root Disk is
replicated. If you want the {pve} storage replication mechanism to skip a
mount point, you can set the *Skip replication* option for that mount point. +
As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
mount point to a different type of storage when the container has replication
configured requires *Skip replication* to be enabled for that mount point.

[[pct_settings]]
Container Settings
------------------

[[pct_general]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-general.png"]

General settings of a container include

* the *Node*: the physical server on which the container will run
* the *CT ID*: a unique number in this {pve} installation used to identify
your container
* *Hostname*: the hostname of the container
* *Resource Pool*: a logical group of containers and VMs
* *Password*: the root password of the container
* *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
if you want to create a privileged or unprivileged container.

Unprivileged Containers
^^^^^^^^^^^^^^^^^^^^^^^

Unprivileged containers use a new kernel feature called user namespaces. The
root UID 0 inside the container is mapped to an unprivileged user outside the
container. This means that most security issues (container escape, resource
abuse, etc.) in these containers will affect a random unprivileged user, and
would be a generic kernel security bug rather than an LXC issue. The LXC team
thinks unprivileged containers are safe by design.

This is the default option when creating a new container.

NOTE: If the container uses systemd as an init system, please be aware the
systemd version running inside the container should be equal to or greater
than 220.

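For example, the privilege level can be chosen explicitly at creation time via
the `unprivileged` flag; the container ID and template name below mirror the
earlier examples:

----
# pct create 100 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz -unprivileged 1
----
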

Privileged Containers
^^^^^^^^^^^^^^^^^^^^^

Security in containers is achieved by using mandatory access control
(AppArmor), seccomp filters and namespaces. The LXC team considers this kind
of container as unsafe, and they will not consider new container escape
exploits to be security issues worthy of a CVE and quick fix. That's why
privileged containers should only be used in trusted environments.

WARNING: Although it is not recommended, AppArmor can be disabled for a
container. This brings security risks with it. Some syscalls can lead to
privilege escalation when executed within a container if the system is
misconfigured or if an LXC or Linux Kernel vulnerability exists.

To disable AppArmor for a container, add the following line to the container
configuration file located at `/etc/pve/lxc/CTID.conf`:

----
lxc.apparmor_profile = unconfined
----

Please note that this is not recommended for production use.


[[pct_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-ct-cpu.png"]

You can restrict the number of visible CPUs inside the container using the
`cores` option. This is implemented using the Linux 'cpuset' cgroup
(**c**ontrol *group*). A special task inside `pvestatd` tries to distribute
running containers among available CPUs. To view the assigned CPUs run
the following command:

----
# pct cpusets
 ---------------------
 102:              6 7
 105:      2 3 4 5
 108:  0 1
 ---------------------
----

Containers use the host kernel directly. All tasks inside a container are
handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
**F**air **S**cheduler) scheduler by default, which has additional bandwidth
control options.

[horizontal]

`cpulimit`: :: You can use this option to further limit assigned CPU time.
Please note that this is a floating point number, so it is perfectly valid to
assign two cores to a container, but restrict overall CPU consumption to half
a core.
+
----
cores: 2
cpulimit: 0.5
----

`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
larger the number is, the more CPU time this container gets. The number is
relative to the weights of all the other running containers. The default is
1024. You can use this setting to prioritize some containers.

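These options can also be set on an existing container with `pct set`; the
container ID and values below are illustrative:

----
# pct set 100 -cores 2 -cpulimit 0.5 -cpuunits 2048
----
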

[[pct_memory]]
Memory
~~~~~~

[thumbnail="screenshot/gui-create-ct-memory.png"]

Container memory is controlled using the cgroup memory controller.

[horizontal]

`memory`: :: Limit overall memory usage. This corresponds to the
`memory.limit_in_bytes` cgroup setting.

`swap`: :: Allows the container to use additional swap memory from the host
swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
setting, which is set to the sum of both values (`memory + swap`).

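Both limits can be changed via `pct set`; the values are in megabytes, and the
container ID is illustrative:

----
# pct set 100 -memory 1024 -swap 512
----
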

[[pct_mount_points]]
Mount Points
~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-root-disk.png"]

The root mount point is configured with the `rootfs` property. You can
configure up to 256 additional mount points. The corresponding options are
called `mp0` to `mp255`. They can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are three types of mount points: storage backed mount points,
bind mounts, and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----


Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and
come in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
image a directory is created.

NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
mount point volumes will automatically allocate a volume of the specified size
on the specified storage. For example, calling
`pct set 100 -mp0 thin1:10,mp=/path/in/container` will allocate a 10GB volume
on the storage `thin1` and replace the volume ID placeholder `10` with the
allocated volume ID.


Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE
host inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are considered to not be managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping and cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established using
source directories especially reserved for this purpose, e.g., a directory
hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
`/`, `/var` or `/etc` into a container - this poses a great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line
like `mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.


Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow you to mount block devices of the host directly into
the container. Similar to bind mounts, device mounts are not managed by
{pve}'s storage subsystem, but the `quota` and `acl` options will be honored.

NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using
`vzdump`.

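A sketch of a device mount, assuming the host block device `/dev/sdb1` exists
and should appear at `/mnt/data` inside container `100`:

----
# pct set 100 -mp1 /dev/sdb1,mp=/mnt/data
----
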

[[pct_container_network]]
Network
~~~~~~~

[thumbnail="screenshot/gui-create-ct-network.png"]

You can configure up to 10 network interfaces for a single container. The
corresponding options are called `net0` to `net9`, and they can contain the
following settings:

include::pct-network-opts.adoc[]

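For example, a first interface bridged to `vmbr0` (the default bridge on a
standard installation) with DHCP could be configured like this; the interface
name and container ID are assumptions:

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=dhcp
----
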
[[pct_startup_and_shutdown]]
Automatic Start and Shutdown of Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To automatically start a container when the host system boots, select the
option 'Start at boot' in the 'Options' panel of the container in the web
interface or run the following command:

----
# pct set CTID -onboot 1
----

.Start and Shutdown Order
// use the screenshot from qemu - its the same
[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

If you want to fine tune the boot order of your containers, you can use the
following parameters:

* *Start/Shutdown order*: Defines the start order priority. For example, set
it to 1 if you want the CT to be the first to be started. (We use the reverse
startup order for shutdown, so a container with a start order of 1 would be
the last to be shut down.)
* *Startup delay*: Defines the interval between the start of this container
and subsequent container starts. For example, set it to 240 if you want to
wait 240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait for
the container to be offline after issuing a shutdown command. By default this
value is set to 60, which means that {pve} will issue a shutdown request, wait
60s for the machine to be offline, and if after 60s the machine is still
online it will notify that the shutdown action failed.

Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set, and this parameter only
makes sense between the machines running locally on a host, and not
cluster-wide.

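These parameters map to the `startup` property and can also be set from the
command line; the values below mirror the examples above:

----
# pct set CTID -startup order=1,up=240,down=60
----
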
Hookscripts
~~~~~~~~~~~

You can add a hook script to CTs with the config property `hookscript`.

----
# pct set 100 -hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime. For an
example and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.

Backup and Restore
------------------


Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please refer to
the `vzdump` manual page for details.

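A minimal invocation, assuming a container with ID `100` and the default
`local` storage as backup target:

----
# vzdump 100 -storage local -mode snapshot
----
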

Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the
`pct restore` command. By default, `pct restore` will attempt to restore as
much of the backed up container configuration as possible. It is possible to
override the backed up configuration by manually setting container options on
the command line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:


``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters
are explicitly set, the mount point configuration from the backed up
configuration file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
`storage` parameter, or default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root
user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.

4c3b5c77 | 690 | |
8c1189b6 FG |
691 | ``Advanced'' Restore Mode |
692 | ^^^^^^^^^^^^^^^^^^^^^^^^^ | |
2175e37b FG |
693 | |
694 | By setting the `rootfs` parameter (and optionally, any combination of `mpX` | |
8c1189b6 | 695 | parameters), the `pct restore` command is automatically switched into an |
2175e37b FG |
696 | advanced mode. This advanced mode completely ignores the `rootfs` and `mpX` |
697 | configuration options contained in the backup archive, and instead only | |
698 | uses the options explicitly provided as parameters. | |
699 | ||
700 | This mode allows flexible configuration of mount point settings at restore time, | |
701 | for example: | |
702 | ||
703 | * Set target storages, volume sizes and other options for each mount point | |
704 | individually | |
705 | * Redistribute backed up files according to new mount point scheme | |
706 | * Restore to device and/or bind mount points (limited to root user) | |
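
An illustrative advanced-mode invocation (CTID, archive and storage names are
placeholders): explicitly setting `rootfs` and an `mp0` mount point overrides
the mount point layout stored in the archive:

----
# pct restore 123 local:backup/<archive> -rootfs local-lvm:8 -mp0 local-lvm:16,mp=/data
----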


Managing Containers with `pct`
------------------------------

The "Proxmox Container Toolkit" (`pct`) is the command line tool to manage {pve}
containers. It enables you to create or destroy containers, as well as control
container execution (start, stop, reboot, migrate, etc.). It can be used to set
parameters in the config file of a container, for example the network
configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have
already downloaded the template via the web interface)

----
# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
----

Start container 100

----
# pct start 100
----

Start a login session via getty

----
# pct console 100
----

Enter the LXC namespace and run a shell as root user

----
# pct enter 100
----

Display the configuration

----
# pct config 100
----

Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
set the address and gateway, while it's running

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
----

Reduce the memory of the container to 512MB

----
# pct set 100 -memory 512
----


Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by running `lxc-start` (replace `ID` with
the container's ID):

----
# lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
----

This command will attempt to start the container in foreground mode. To stop
the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.

The collected debug log is written to `/tmp/lxc-ID.log`.

NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.

[[pct_migration]]
Migration
---------

If you have a cluster, you can migrate your containers with

----
# pct migrate <ctid> <target>
----

This works as long as your container is offline. If it has local volumes or
mount points defined, the migration will copy the content over the network to
the target host if the same storage is defined there.

If you want to migrate online containers, the only way is to use restart
migration. This can be initiated with the `-restart` flag and the optional
`-timeout` parameter.

A restart migration will shut down the container and kill it after the
specified timeout (the default is 180 seconds). Then it will migrate the
container like an offline migration, and when finished, it starts the
container on the target node.
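
For example, a restart migration of container 100 with a two minute timeout
(the target node name is a placeholder):

----
# pct migrate 100 <target> -restart -timeout 120
----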

[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
where `<CTID>` is the numeric ID of the given container. Like all
other files stored inside `/etc/pve/`, it is automatically
replicated to all other cluster nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

The configuration files are simple text files. You can edit them
using a normal text editor (`vi`, `nano`, etc.). This is sometimes
useful for making small corrections, but keep in mind that you need to
restart the container to apply such changes.

For that reason, it is usually better to use the `pct` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running containers. This feature is called "hot plug", and there is no
need to restart the container in that case.

In cases where a change cannot be hot plugged, it will be registered
as a pending change (shown in red color in the GUI). Such changes will
only be applied after rebooting the container.


File Format
~~~~~~~~~~~

The container configuration file uses a simple colon-separated
key/value format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.

It is possible to add low-level, LXC-style configuration directly, for
example:

----
lxc.init_cmd: /sbin/my_own_init
----

or

----
lxc.init_cmd = /sbin/my_own_init
----

The settings are passed directly to the LXC low-level tools.


[[pct_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.Container configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot-related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).
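
Snapshots themselves are managed with `pct`; for example, the snapshot above
can be created, listed and rolled back like this:

----
# pct snapshot 100 testsnapshot
# pct listsnapshot 100
# pct rollback 100 testsnapshot
----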


[[pct_options]]
Options
~~~~~~~

include::pct.conf.5-opts.adoc[]


Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to
prevent incompatible concurrent actions on the affected container. Sometimes
you need to remove such a lock manually (e.g., after a power failure).

----
# pct unlock <CTID>
----

CAUTION: Only do this if you are sure the action which set the lock is
no longer running.


ifdef::manvolnum[]

Files
-----

`/etc/pve/lxc/<CTID>.conf`::

Configuration file for the container '<CTID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]