1 [[chapter_pct]]
2 ifdef::manvolnum[]
3 pct(1)
4 ======
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pct - Tool to manage Linux Containers (LXC) on Proxmox VE
11
12
13 SYNOPSIS
14 --------
15
16 include::pct.1-synopsis.adoc[]
17
18 DESCRIPTION
19 -----------
20 endif::manvolnum[]
21
22 ifndef::manvolnum[]
23 Proxmox Container Toolkit
24 =========================
25 :pve-toplevel:
26 endif::manvolnum[]
27 ifdef::wiki[]
28 :title: Linux Container
29 endif::wiki[]
30
31 Containers are a lightweight alternative to fully virtualized machines (VMs).
32 They use the kernel of the host system that they run on, instead of emulating a
33 full operating system (OS). This means that containers can access resources on
34 the host system directly.
35
The runtime costs for containers are low, usually negligible. However, there are
some drawbacks that need to be considered:
38
39 * Only Linux distributions can be run in Proxmox Containers. It is not possible to run
40 other operating systems like, for example, FreeBSD or Microsoft Windows
41 inside a container.
42
43 * For security reasons, access to host resources needs to be restricted.
Therefore, containers run in their own separate namespaces. Additionally, some
45 syscalls (user space requests to the Linux kernel) are not allowed within containers.
46
47 {pve} uses https://linuxcontainers.org/lxc/introduction/[Linux Containers (LXC)] as its underlying
48 container technology. The ``Proxmox Container Toolkit'' (`pct`) simplifies the
49 usage and management of LXC, by providing an interface that abstracts
50 complex tasks.
51
52 Containers are tightly integrated with {pve}. This means that they are aware of
53 the cluster setup, and they can use the same network and storage resources as
54 virtual machines. You can also use the {pve} firewall, or manage containers
55 using the HA framework.
56
57 Our primary goal is to offer an environment that provides the benefits of using a
58 VM, but without the additional overhead. This means that Proxmox Containers can
59 be categorized as ``System Containers'', rather than ``Application Containers''.
60
61 NOTE: If you want to run application containers, for example, 'Docker' images, it
62 is recommended that you run them inside a Proxmox Qemu VM. This will give you
63 all the advantages of application containerization, while also providing the
64 benefits that VMs offer, such as strong isolation from the host and the ability
65 to live-migrate, which otherwise isn't possible with containers.
66
67
68 Technology Overview
69 -------------------
70
71 * LXC (https://linuxcontainers.org/)
72
73 * Integrated into {pve} graphical web user interface (GUI)
74
75 * Easy to use command line tool `pct`
76
77 * Access via {pve} REST API
78
79 * 'lxcfs' to provide containerized /proc file system
80
81 * Control groups ('cgroups') for resource isolation and limitation
82
83 * 'AppArmor' and 'seccomp' to improve security
84
85 * Modern Linux kernels
86
87 * Image based deployment (templates)
88
89 * Uses {pve} xref:chapter_storage[storage library]
90
91 * Container setup from host (network, DNS, storage, etc.)
92
93
94 [[pct_container_images]]
95 Container Images
96 ----------------
97
98 Container images, sometimes also referred to as ``templates'' or
99 ``appliances'', are `tar` archives which contain everything to run a container.
100
101 {pve} itself provides a variety of basic templates for the most common Linux
102 distributions. They can be downloaded using the GUI or the `pveam` (short for
103 {pve} Appliance Manager) command line utility.
Additionally, https://www.turnkeylinux.org/[TurnKey Linux] container templates
are available to download.
106
107 The list of available templates is updated daily through the 'pve-daily-update'
108 timer. You can also trigger an update manually by executing:
109
110 ----
111 # pveam update
112 ----
113
114 To view the list of available images run:
115
116 ----
117 # pveam available
118 ----
119
120 You can restrict this large list by specifying the `section` you are
121 interested in, for example basic `system` images:
122
123 .List available system images
124 ----
125 # pveam available --section system
126 system alpine-3.12-default_20200823_amd64.tar.xz
127 system alpine-3.13-default_20210419_amd64.tar.xz
128 system alpine-3.14-default_20210623_amd64.tar.xz
129 system archlinux-base_20210420-1_amd64.tar.gz
130 system centos-7-default_20190926_amd64.tar.xz
131 system centos-8-default_20201210_amd64.tar.xz
132 system debian-9.0-standard_9.7-1_amd64.tar.gz
133 system debian-10-standard_10.7-1_amd64.tar.gz
134 system devuan-3.0-standard_3.0_amd64.tar.gz
135 system fedora-33-default_20201115_amd64.tar.xz
136 system fedora-34-default_20210427_amd64.tar.xz
137 system gentoo-current-default_20200310_amd64.tar.xz
138 system opensuse-15.2-default_20200824_amd64.tar.xz
139 system ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
140 system ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
141 system ubuntu-20.04-standard_20.04-1_amd64.tar.gz
142 system ubuntu-20.10-standard_20.10-1_amd64.tar.gz
143 system ubuntu-21.04-standard_21.04-1_amd64.tar.gz
144 ----
145
Before you can use such a template, you need to download it into one of your
storages. If you are unsure which one to use, you can simply use the storage
named `local` for that purpose. For clustered installations, it is preferable to
use a shared storage, so that all nodes can access those images.
150
151 ----
152 # pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
153 ----
154
155 You are now ready to create containers using that image, and you can list all
156 downloaded images on storage `local` with:
157
158 ----
159 # pveam list local
160 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz 219.95MB
161 ----
162
163 TIP: You can also use the {pve} web interface GUI to download, list and delete
164 container templates.
165
166 `pct` uses them to create a new container, for example:
167
168 ----
169 # pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
170 ----
171
172 The above command shows you the full {pve} volume identifiers. They include the
173 storage name, and most other {pve} commands can use them. For example you can
174 delete that image later with:
175
176 ----
177 # pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
178 ----
179
180
181 [[pct_settings]]
182 Container Settings
183 ------------------
184
185 [[pct_general]]
186 General Settings
187 ~~~~~~~~~~~~~~~~
188
189 [thumbnail="screenshot/gui-create-ct-general.png"]
190
191 General settings of a container include
192
193 * the *Node* : the physical server on which the container will run
194 * the *CT ID*: a unique number in this {pve} installation used to identify your
195 container
196 * *Hostname*: the hostname of the container
197 * *Resource Pool*: a logical group of containers and VMs
198 * *Password*: the root password of the container
199 * *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
whether you want to create a privileged or unprivileged container.
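
All of these can also be set directly when creating a container on the command
line. The following is only a sketch; the template is the one downloaded above,
while the pool name and SSH key path are placeholders:

----
# pct create 100 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz \
    --hostname www \
    --pool mypool \
    --ssh-public-keys ~/.ssh/id_rsa.pub \
    --unprivileged 1
----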
202
203 Unprivileged Containers
204 ^^^^^^^^^^^^^^^^^^^^^^^
205
206 Unprivileged containers use a new kernel feature called user namespaces.
207 The root UID 0 inside the container is mapped to an unprivileged user outside
208 the container. This means that most security issues (container escape, resource
209 abuse, etc.) in these containers will affect a random unprivileged user, and
210 would be a generic kernel security bug rather than an LXC issue. The LXC team
211 thinks unprivileged containers are safe by design.
212
213 This is the default option when creating a new container.
214
215 NOTE: If the container uses systemd as an init system, please be aware the
216 systemd version running inside the container should be equal to or greater than
217 220.
218
219
220 Privileged Containers
221 ^^^^^^^^^^^^^^^^^^^^^
222
223 Security in containers is achieved by using mandatory access control 'AppArmor'
224 restrictions, 'seccomp' filters and Linux kernel namespaces. The LXC team
225 considers this kind of container as unsafe, and they will not consider new
226 container escape exploits to be security issues worthy of a CVE and quick fix.
227 That's why privileged containers should only be used in trusted environments.
228
229
230 [[pct_cpu]]
231 CPU
232 ~~~
233
234 [thumbnail="screenshot/gui-create-ct-cpu.png"]
235
236 You can restrict the number of visible CPUs inside the container using the
237 `cores` option. This is implemented using the Linux 'cpuset' cgroup
238 (**c**ontrol *group*).
239 A special task inside `pvestatd` tries to distribute running containers among
240 available CPUs periodically.
241 To view the assigned CPUs run the following command:
242
243 ----
244 # pct cpusets
245 ---------------------
246 102: 6 7
247 105: 2 3 4 5
248 108: 0 1
249 ---------------------
250 ----
251
252 Containers use the host kernel directly. All tasks inside a container are
253 handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
254 **F**air **S**cheduler) scheduler by default, which has additional bandwidth
255 control options.
256
257 [horizontal]
258
259 `cpulimit`: :: You can use this option to further limit assigned CPU time.
260 Please note that this is a floating point number, so it is perfectly valid to
261 assign two cores to a container, but restrict overall CPU consumption to half a
262 core.
263 +
264 ----
265 cores: 2
266 cpulimit: 0.5
267 ----
268
`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
larger the number is, the more CPU time this container gets. The number is
relative to the weights of all the other running containers. The default is
1024. You can use this setting to prioritize some containers.
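
For example, to give a container twice the default weight (a sketch, assuming a
container with ID 100 exists):

----
# pct set 100 -cpuunits 2048
----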
273
274
275 [[pct_memory]]
276 Memory
277 ~~~~~~
278
279 [thumbnail="screenshot/gui-create-ct-memory.png"]
280
281 Container memory is controlled using the cgroup memory controller.
282
283 [horizontal]
284
285 `memory`: :: Limit overall memory usage. This corresponds to the
286 `memory.limit_in_bytes` cgroup setting.
287
`swap`: :: Allows the container to use additional swap memory from the host
swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
setting, which is set to the sum of both values (`memory + swap`).
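
For example, the following configuration lines limit a container to 1024 MiB of
RAM plus 512 MiB of swap (the values are in MiB and only illustrative):

----
memory: 1024
swap: 512
----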
291
292
293 [[pct_mount_points]]
294 Mount Points
295 ~~~~~~~~~~~~
296
297 [thumbnail="screenshot/gui-create-ct-root-disk.png"]
298
299 The root mount point is configured with the `rootfs` property. You can
300 configure up to 256 additional mount points. The corresponding options are
301 called `mp0` to `mp255`. They can contain the following settings:
302
303 include::pct-mountpoint-opts.adoc[]
304
305 Currently there are three types of mount points: storage backed mount points,
306 bind mounts, and device mounts.
307
308 .Typical container `rootfs` configuration
309 ----
310 rootfs: thin1:base-100-disk-1,size=8G
311 ----
312
313
314 Storage Backed Mount Points
315 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
316
317 Storage backed mount points are managed by the {pve} storage subsystem and come
318 in three different flavors:
319
320 - Image based: these are raw images containing a single ext4 formatted file
321 system.
322 - ZFS subvolumes: these are technically bind mounts, but with managed storage,
323 and thus allow resizing and snapshotting.
324 - Directories: passing `size=0` triggers a special case where instead of a raw
325 image a directory is created.
326
327 NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
328 mount point volumes will automatically allocate a volume of the specified size
329 on the specified storage. For example, calling
330
331 ----
332 pct set 100 -mp0 thin1:10,mp=/path/in/container
333 ----
334
will allocate a 10GB volume on the storage `thin1`, replace the volume ID
placeholder `10` with the allocated volume ID, and set up the mount point in the
container at `/path/in/container`.
338
339
340 Bind Mount Points
341 ^^^^^^^^^^^^^^^^^
342
343 Bind mounts allow you to access arbitrary directories from your Proxmox VE host
344 inside a container. Some potential use cases are:
345
346 - Accessing your home directory in the guest
- Accessing a USB device directory in the guest
348 - Accessing an NFS mount from the host in the guest
349
Bind mounts are not managed by the storage subsystem, so you cannot make
snapshots or deal with quotas from inside the container. With unprivileged
containers, you might run into permission problems caused by the user mapping,
and you cannot use ACLs.
354
355 NOTE: The contents of bind mount points are not backed up when using `vzdump`.
356
357 WARNING: For security reasons, bind mounts should only be established using
358 source directories especially reserved for this purpose, e.g., a directory
359 hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
360 `/`, `/var` or `/etc` into a container - this poses a great security risk.
361
362 NOTE: The bind mount source path must not contain any symlinks.
363
364 For example, to make the directory `/mnt/bindmounts/shared` accessible in the
365 container with ID `100` under the path `/shared`, use a configuration line like
366 `mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
367 Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
368 achieve the same result.
369
370
371 Device Mount Points
372 ^^^^^^^^^^^^^^^^^^^
373
Device mount points allow you to mount block devices of the host directly into
the container. Similar to bind mounts, device mounts are not managed by {PVE}'s
storage subsystem, but the `quota` and `acl` options will be honored.
377
378 NOTE: Device mount points should only be used under special circumstances. In
379 most cases a storage backed mount point offers the same performance and a lot
380 more features.
381
382 NOTE: The contents of device mount points are not backed up when using
383 `vzdump`.
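
As a sketch, a host block device could be made available inside container 100
like this (the device path and target path are only examples):

----
# pct set 100 -mp0 /dev/sdb1,mp=/mnt/device
----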
384
385
386 [[pct_container_network]]
387 Network
388 ~~~~~~~
389
390 [thumbnail="screenshot/gui-create-ct-network.png"]
391
392 You can configure up to 10 network interfaces for a single container.
393 The corresponding options are called `net0` to `net9`, and they can contain the
394 following setting:
395
396 include::pct-network-opts.adoc[]
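
For example, a container configuration could contain a network definition like
the following (the bridge name and addresses are only examples):

----
net0: name=eth0,bridge=vmbr0,firewall=1,ip=192.168.15.147/24,gw=192.168.15.1,type=veth
----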
397
398
399 [[pct_startup_and_shutdown]]
400 Automatic Start and Shutdown of Containers
401 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
402
403 To automatically start a container when the host system boots, select the
404 option 'Start at boot' in the 'Options' panel of the container in the web
405 interface or run the following command:
406
407 ----
408 # pct set CTID -onboot 1
409 ----
410
411 .Start and Shutdown Order
412 // use the screenshot from qemu - its the same
413 [thumbnail="screenshot/gui-qemu-edit-start-order.png"]
414
415 If you want to fine tune the boot order of your containers, you can use the
416 following parameters:
417
418 * *Start/Shutdown order*: Defines the start order priority. For example, set it
419 to 1 if you want the CT to be the first to be started. (We use the reverse
420 startup order for shutdown, so a container with a start order of 1 would be
421 the last to be shut down)
* *Startup delay*: Defines the interval between this container's start and the
start of subsequent containers. For example, set it to 240 if you want to wait
240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the container to be offline after issuing a shutdown command.
By default this value is set to 60, which means that {pve} will issue a
shutdown request, wait 60 seconds for the machine to be offline, and report
that the shutdown action failed if the machine is still online after that time.
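
These parameters are stored in the `onboot` and `startup` properties of the
container configuration. As a sketch (the values are examples), the following
sets a container to start first at boot, wait 240 seconds before the next guest
is started, and allow 60 seconds for shutdown:

----
# pct set 100 -onboot 1 -startup order=1,up=240,down=60
----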
430
Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set. Further, this parameter can
only be enforced between containers running locally on a host, and not
cluster-wide.
435
436 Hookscripts
437 ~~~~~~~~~~~
438
439 You can add a hook script to CTs with the config property `hookscript`.
440
441 ----
442 # pct set 100 -hookscript local:snippets/hookscript.pl
443 ----
444
It will be called during various phases of the guest's lifetime. For an example
446 and documentation see the example script under
447 `/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
448
449 Security Considerations
450 -----------------------
451
452 Containers use the kernel of the host system. This exposes an attack surface
453 for malicious users. In general, full virtual machines provide better
454 isolation. This should be considered if containers are provided to unknown or
455 untrusted people.
456
457 To reduce the attack surface, LXC uses many security features like AppArmor,
458 CGroups and kernel namespaces.
459
460 AppArmor
461 ~~~~~~~~
462
463 AppArmor profiles are used to restrict access to possibly dangerous actions.
Some system calls, for example `mount`, are prohibited from execution.
465
466 To trace AppArmor activity, use:
467
468 ----
469 # dmesg | grep apparmor
470 ----
471
472 Although it is not recommended, AppArmor can be disabled for a container. This
473 brings security risks with it. Some syscalls can lead to privilege escalation
when executed within a container if the system is misconfigured or if an LXC or
475 Linux Kernel vulnerability exists.
476
477 To disable AppArmor for a container, add the following line to the container
478 configuration file located at `/etc/pve/lxc/CTID.conf`:
479
480 ----
481 lxc.apparmor.profile = unconfined
482 ----
483
484 WARNING: Please note that this is not recommended for production use.
485
486
487 [[pct_cgroup]]
488 Control Groups ('cgroup')
489 ~~~~~~~~~~~~~~~~~~~~~~~~~
490
491 'cgroup' is a kernel
492 mechanism used to hierarchically organize processes and distribute system
493 resources.
494
495 The main resources controlled via 'cgroups' are CPU time, memory and swap
496 limits, and access to device nodes. 'cgroups' are also used to "freeze" a
497 container before taking snapshots.
498
499 There are 2 versions of 'cgroups' currently available,
500 https://www.kernel.org/doc/html/v5.11/admin-guide/cgroup-v1/index.html[legacy]
501 and
502 https://www.kernel.org/doc/html/v5.11/admin-guide/cgroup-v2.html['cgroupv2'].
503
504 Since {pve} 7.0, the default is a pure 'cgroupv2' environment. Previously a
505 "hybrid" setup was used, where resource control was mainly done in 'cgroupv1'
506 with an additional 'cgroupv2' controller which could take over some subsystems
507 via the 'cgroup_no_v1' kernel command line parameter. (See the
508 https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html[kernel
509 parameter documentation] for details.)
510
511 [[pct_cgroup_compat]]
512 CGroup Version Compatibility
513 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
514 The main difference between pure 'cgroupv2' and the old hybrid environments
515 regarding {pve} is that with 'cgroupv2' memory and swap are now controlled
516 independently. The memory and swap settings for containers can map directly to
517 these values, whereas previously only the memory limit and the limit of the
518 *sum* of memory and swap could be limited.
519
520 Another important difference is that the 'devices' controller is configured in a
521 completely different way. Because of this, file system quotas are currently not
522 supported in a pure 'cgroupv2' environment.
523
524 'cgroupv2' support by the container's OS is needed to run in a pure 'cgroupv2'
525 environment. Containers running 'systemd' version 231 or newer support
526 'cgroupv2' footnote:[this includes all newest major versions of container
527 templates shipped by {pve}], as do containers not using 'systemd' as init
528 system footnote:[for example Alpine Linux].
529
530 [NOTE]
531 ====
CentOS 7 and Ubuntu 16.10 are two prominent Linux distribution releases that
have a 'systemd' version which is too old to run in a 'cgroupv2' environment. In
this case, you can either:
535
* Upgrade the whole distribution to a newer release. For the examples above, that
could be Ubuntu 18.04 or 20.04, and CentOS 8 (or RHEL/CentOS derivatives like
AlmaLinux or Rocky Linux). This has the benefit of getting the newest bug and
security fixes, often also new features, and of moving the EOL date into the
future.
540
* Upgrade the container's 'systemd' version. If the distribution provides a
backports repository, this can be an easy and quick stop-gap measure.
543
* Move the container, or its services, to a Virtual Machine. Virtual Machines
have much less interaction with the host, which is why one can install
decades-old OS versions there just fine.
547
* Switch back to the legacy 'cgroup' controller. Note that while it can be a
valid solution, it's not a permanent one. There's a high likelihood that a
future {pve} major release, for example 8.0, will no longer be able to support
the legacy controller.
552 ====
553
554 [[pct_cgroup_change_version]]
555 Changing CGroup Version
556 ^^^^^^^^^^^^^^^^^^^^^^^
557
558 TIP: If file system quotas are not required and all containers support 'cgroupv2',
559 it is recommended to stick to the new default.
560
561 To switch back to the previous version the following kernel command line
562 parameter can be used:
563
564 ----
565 systemd.unified_cgroup_hierarchy=0
566 ----
567
568 See xref:sysboot_edit_kernel_cmdline[this section] on editing the kernel boot
569 command line on where to add the parameter.
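
To check which 'cgroup' version a host is currently running, you can inspect the
file system type mounted at `/sys/fs/cgroup` (a generic Linux check, not
specific to {pve}). It reports `cgroup2fs` in a pure 'cgroupv2' environment and
`tmpfs` in the legacy/hybrid one:

----
# stat -fc %T /sys/fs/cgroup/
cgroup2fs
----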
570
571 // TODO: seccomp a bit more.
572 // TODO: pve-lxc-syscalld
573
574
575 Guest Operating System Configuration
576 ------------------------------------
577
578 {pve} tries to detect the Linux distribution in the container, and modifies
579 some files. Here is a short list of things done at container startup:
580
581 set /etc/hostname:: to set the container name
582
583 modify /etc/hosts:: to allow lookup of the local hostname
584
585 network setup:: pass the complete network setup to the container
586
587 configure DNS:: pass information about DNS servers
588
589 adapt the init system:: for example, fix the number of spawned getty processes
590
591 set the root password:: when creating a new container
592
593 rewrite ssh_host_keys:: so that each container has unique keys
594
595 randomize crontab:: so that cron does not start at the same time on all containers
596
597 Changes made by {PVE} are enclosed by comment markers:
598
599 ----
600 # --- BEGIN PVE ---
601 <data>
602 # --- END PVE ---
603 ----
604
605 Those markers will be inserted at a reasonable location in the file. If such a
606 section already exists, it will be updated in place and will not be moved.
607
608 Modification of a file can be prevented by adding a `.pve-ignore.` file for it.
609 For instance, if the file `/etc/.pve-ignore.hosts` exists then the `/etc/hosts`
610 file will not be touched. This can be a simple empty file created via:
611
612 ----
613 # touch /etc/.pve-ignore.hosts
614 ----
615
616 Most modifications are OS dependent, so they differ between different
617 distributions and versions. You can completely disable modifications by
618 manually setting the `ostype` to `unmanaged`.
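
For example, assuming a container with ID 100:

----
# pct set 100 -ostype unmanaged
----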
619
620 OS type detection is done by testing for certain files inside the
621 container. {pve} first checks the `/etc/os-release` file
622 footnote:[/etc/os-release replaces the multitude of per-distribution
623 release files https://manpages.debian.org/stable/systemd/os-release.5.en.html].
If that file is not present, or if it does not contain a clearly recognizable
distribution identifier, the following distribution-specific release files are
checked.
627
628 Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)
629
630 Debian:: test /etc/debian_version
631
632 Fedora:: test /etc/fedora-release
633
634 RedHat or CentOS:: test /etc/redhat-release
635
636 ArchLinux:: test /etc/arch-release
637
638 Alpine:: test /etc/alpine-release
639
640 Gentoo:: test /etc/gentoo-release
641
642 NOTE: Container start fails if the configured `ostype` differs from the auto
643 detected type.
644
645
646 [[pct_container_storage]]
647 Container Storage
648 -----------------
649
650 The {pve} LXC container storage model is more flexible than traditional
651 container storage models. A container can have multiple mount points. This
652 makes it possible to use the best suited storage for each application.
653
654 For example the root file system of the container can be on slow and cheap
655 storage while the database can be on fast and distributed storage via a second
656 mount point. See section <<pct_mount_points, Mount Points>> for further
657 details.
658
659 Any storage type supported by the {pve} storage library can be used. This means
660 that containers can be stored on local (for example `lvm`, `zfs` or directory),
661 shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
662 Ceph. Advanced storage features like snapshots or clones can be used if the
663 underlying storage supports them. The `vzdump` backup tool can use snapshots to
664 provide consistent container backups.
665
666 Furthermore, local devices or local directories can be mounted directly using
667 'bind mounts'. This gives access to local resources inside a container with
668 practically zero overhead. Bind mounts can be used as an easy way to share data
669 between containers.
670
671
672 FUSE Mounts
673 ~~~~~~~~~~~
674
675 WARNING: Because of existing issues in the Linux kernel's freezer subsystem the
676 usage of FUSE mounts inside a container is strongly advised against, as
677 containers need to be frozen for suspend or snapshot mode backups.
678
679 If FUSE mounts cannot be replaced by other mounting mechanisms or storage
680 technologies, it is possible to establish the FUSE mount on the Proxmox host
681 and use a bind mount point to make it accessible inside the container.
682
683
684 Using Quotas Inside Containers
685 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
686
Quotas allow you to set limits inside a container on the amount of disk space
that each user can use.
689
690 NOTE: This currently requires the use of legacy 'cgroups'.
691
692 NOTE: This only works on ext4 image based storage types and currently only
693 works with privileged containers.
694
695 Activating the `quota` option causes the following mount options to be used for
696 a mount point:
697 `usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`
698
699 This allows quotas to be used like on any other system. You can initialize the
700 `/aquota.user` and `/aquota.group` files by running:
701
702 ----
703 # quotacheck -cmug /
704 # quotaon /
705 ----
706
707 Then edit the quotas using the `edquota` command. Refer to the documentation of
708 the distribution running inside the container for details.
709
710 NOTE: You need to run the above commands for every mount point by passing the
711 mount point's path instead of just `/`.
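
For example, for an additional mount point mounted at `/mnt/data` (the path is
only an example), the commands would be:

----
# quotacheck -cmug /mnt/data
# quotaon /mnt/data
----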
712
713
714 Using ACLs Inside Containers
715 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
716
717 The standard Posix **A**ccess **C**ontrol **L**ists are also available inside
718 containers. ACLs allow you to set more detailed file ownership than the
719 traditional user/group/others model.
720
721
722 Backup of Container mount points
723 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
724
725 To include a mount point in backups, enable the `backup` option for it in the
726 container configuration. For an existing mount point `mp0`
727
728 ----
729 mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
730 ----
731
732 add `backup=1` to enable it.
733
734 ----
735 mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
736 ----
737
738 NOTE: When creating a new mount point in the GUI, this option is enabled by
739 default.
740
741 To disable backups for a mount point, add `backup=0` in the way described
742 above, or uncheck the *Backup* checkbox on the GUI.
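
The same can be done on the command line by re-specifying the full mount point
definition with the changed flag, for example (a sketch for the `mp0` shown
above):

----
# pct set 100 -mp0 guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=0
----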
743
Replication of Container mount points
745 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
746
747 By default, additional mount points are replicated when the Root Disk is
748 replicated. If you want the {pve} storage replication mechanism to skip a mount
749 point, you can set the *Skip replication* option for that mount point.
As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
mount point to a different type of storage when the container has replication
configured requires the *Skip replication* option to be enabled for that mount
point.
753
754
755 Backup and Restore
756 ------------------
757
758
759 Container Backup
760 ~~~~~~~~~~~~~~~~
761
762 It is possible to use the `vzdump` tool for container backup. Please refer to
763 the `vzdump` manual page for details.
764
765
766 Restoring Container Backups
767 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
768
769 Restoring container backups made with `vzdump` is possible using the `pct
770 restore` command. By default, `pct restore` will attempt to restore as much of
771 the backed up container configuration as possible. It is possible to override
772 the backed up configuration by manually setting container options on the
773 command line (see the `pct` manual page for details).
774
775 NOTE: `pvesm extractconfig` can be used to view the backed up configuration
776 contained in a vzdump archive.
777
778 There are two basic restore modes, only differing by their handling of mount
779 points:
780
781
782 ``Simple'' Restore Mode
783 ^^^^^^^^^^^^^^^^^^^^^^^
784
785 If neither the `rootfs` parameter nor any of the optional `mpX` parameters are
786 explicitly set, the mount point configuration from the backed up configuration
787 file is restored using the following steps:
788
789 . Extract mount points and their options from backup
790 . Create volumes for storage backed mount points (on storage provided with the
791 `storage` parameter, or default local storage if unset)
792 . Extract files from backup archive
793 . Add bind and device mount points to restored configuration (limited to root
794 user)
795
796 NOTE: Since bind and device mount points are never backed up, no files are
797 restored in the last step, but only the configuration options. The assumption
798 is that such mount points are either backed up with another mechanism (e.g.,
799 NFS space that is bind mounted into many containers), or not intended to be
800 backed up at all.
801
802 This simple mode is also used by the container restore operations in the web
803 interface.
804
805
806 ``Advanced'' Restore Mode
807 ^^^^^^^^^^^^^^^^^^^^^^^^^
808
809 By setting the `rootfs` parameter (and optionally, any combination of `mpX`
810 parameters), the `pct restore` command is automatically switched into an
811 advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
812 configuration options contained in the backup archive, and instead only uses
813 the options explicitly provided as parameters.
814
815 This mode allows flexible configuration of mount point settings at restore
816 time, for example:
817
818 * Set target storages, volume sizes and other options for each mount point
819 individually
820 * Redistribute backed up files according to new mount point scheme
821 * Restore to device and/or bind mount points (limited to root user)
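
A sketch of such an advanced restore, with placeholder archive and storage
names, could look like this:

----
# pct restore 600 local:backup/vzdump-lxc-100-2021_07_01-12_00_00.tar.zst \
    -rootfs local-lvm:8 \
    -mp0 local-lvm:4,mp=/var/lib/data
----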
822
823
824 Managing Containers with `pct`
825 ------------------------------
826
827 The ``Proxmox Container Toolkit'' (`pct`) is the command line tool to manage
828 {pve} containers. It enables you to create or destroy containers, as well as
829 control the container execution (start, stop, reboot, migrate, etc.). It can be
830 used to set parameters in the config file of a container, for example the
831 network configuration or memory limits.
832
833 CLI Usage Examples
834 ~~~~~~~~~~~~~~~~~~
835
836 Create a container based on a Debian template (provided you have already
837 downloaded the template via the web interface)
838
839 ----
840 # pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
841 ----
842
843 Start container 100
844
845 ----
846 # pct start 100
847 ----
848
849 Start a login session via getty
850
851 ----
852 # pct console 100
853 ----
854
855 Enter the LXC namespace and run a shell as root user
856
857 ----
858 # pct enter 100
859 ----
860
861 Display the configuration
862
863 ----
864 # pct config 100
865 ----
866
867 Add a network interface called `eth0`, bridged to the host bridge `vmbr0`, set
868 the address and gateway, while it's running
869
870 ----
871 # pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
872 ----
873
874 Reduce the memory of the container to 512MB
875
876 ----
877 # pct set 100 -memory 512
878 ----
879
Destroying a container always removes it from Access Control Lists and always
removes the firewall configuration of the container. You have to activate
'--purge' if you additionally want to remove the container from replication
jobs, backup jobs and HA resource configurations.
884
885 ----
886 # pct destroy 100 --purge
887 ----
888
889
890
891 Obtaining Debugging Logs
892 ~~~~~~~~~~~~~~~~~~~~~~~~
893
894 In case `pct start` is unable to start a specific container, it might be
895 helpful to collect debugging output by passing the `--debug` flag (replace `CTID` with
896 the container's CTID):
897
898 ----
899 # pct start CTID --debug
900 ----
901
902 Alternatively, you can use the following `lxc-start` command, which will save
903 the debug log to the file specified by the `-o` output option:
904
905 ----
906 # lxc-start -n CTID -F -l DEBUG -o /tmp/lxc-CTID.log
907 ----
908
This command will attempt to start the container in foreground mode; to stop
the container, run `pct shutdown CTID` or `pct stop CTID` in a second terminal.
911
912 The collected debug log is written to `/tmp/lxc-CTID.log`.
913
914 NOTE: If you have changed the container's configuration since the last start
915 attempt with `pct start`, you need to run `pct start` at least once to also
916 update the configuration used by `lxc-start`.
917
918 [[pct_migration]]
919 Migration
920 ---------
921
922 If you have a cluster, you can migrate your Containers with
923
924 ----
925 # pct migrate <ctid> <target>
926 ----
927
928 This works as long as your Container is offline. If it has local volumes or
929 mount points defined, the migration will copy the content over the network to
930 the target host if the same storage is defined there.
931
Running containers cannot be live-migrated due to technical limitations. You can
do a restart migration, which shuts down, moves and then starts a container
again on the target node. As containers are very lightweight, this normally
results in a downtime of only a few hundred milliseconds.
936
937 A restart migration can be done through the web interface or by using the
938 `--restart` flag with the `pct migrate` command.
939
940 A restart migration will shut down the Container and kill it after the
941 specified timeout (the default is 180 seconds). Then it will migrate the
942 Container like an offline migration and when finished, it starts the Container
943 on the target node.
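
For example (the target node name is a placeholder, and the timeout is
optional):

----
# pct migrate 100 targetnode --restart --timeout 120
----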
944
945 [[pct_configuration]]
946 Configuration
947 -------------
948
949 The `/etc/pve/lxc/<CTID>.conf` file stores container configuration, where
950 `<CTID>` is the numeric ID of the given container. Like all other files stored
951 inside `/etc/pve/`, they get automatically replicated to all other cluster
952 nodes.
953
954 NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
955 unique cluster wide.
956
957 .Example Container Configuration
958 ----
959 ostype: debian
960 arch: amd64
961 hostname: www
962 memory: 512
963 swap: 512
964 net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
965 rootfs: local:107/vm-107-disk-1.raw,size=7G
966 ----
967
968 The configuration files are simple text files. You can edit them using a normal
969 text editor, for example, `vi` or `nano`.
970 This is sometimes useful to do small corrections, but keep in mind that you
971 need to restart the container to apply such changes.
972
973 For that reason, it is usually better to use the `pct` command to generate and
974 modify those files, or do the whole thing using the GUI.
975 Our toolkit is smart enough to instantaneously apply most changes to running
976 containers. This feature is called ``hot plug'', and there is no need to restart
977 the container in that case.
978
In cases where a change cannot be hot-plugged, it will be registered as a
pending change (shown in red color in the GUI). Such changes will only be
applied after rebooting the container.
982
983
984 File Format
985 ~~~~~~~~~~~
986
987 The container configuration file uses a simple colon separated key/value
988 format. Each line has the following format:
989
990 -----
991 # this is a comment
992 OPTION: value
993 -----
994
995 Blank lines in those files are ignored, and lines starting with a `#` character
996 are treated as comments and are also ignored.
997
998 It is possible to add low-level, LXC style configuration directly, for example:
999
1000 ----
1001 lxc.init_cmd: /sbin/my_own_init
1002 ----
1003
1004 or
1005
1006 ----
1007 lxc.init_cmd = /sbin/my_own_init
1008 ----
1009
1010 The settings are passed directly to the LXC low-level tools.
1011
1012
1013 [[pct_snapshots]]
1014 Snapshots
1015 ~~~~~~~~~
1016
1017 When you create a snapshot, `pct` stores the configuration at snapshot time
1018 into a separate snapshot section within the same configuration file. For
1019 example, after creating a snapshot called ``testsnapshot'', your configuration
1020 file will look like this:
1021
1022 .Container configuration with snapshot
1023 ----
1024 memory: 512
1025 swap: 512
parent: testsnapshot
...

[testsnapshot]
1030 memory: 512
1031 swap: 512
1032 snaptime: 1457170803
1033 ...
1034 ----
1035
1036 There are a few snapshot related properties like `parent` and `snaptime`. The
1037 `parent` property is used to store the parent/child relationship between
1038 snapshots. `snaptime` is the snapshot creation time stamp (Unix epoch).
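
Snapshots are managed with the corresponding `pct` subcommands, for example
(assuming a container with ID 100):

----
# pct snapshot 100 testsnapshot
# pct listsnapshot 100
# pct rollback 100 testsnapshot
# pct delsnapshot 100 testsnapshot
----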
1039
1040
1041 [[pct_options]]
1042 Options
1043 ~~~~~~~
1044
1045 include::pct.conf.5-opts.adoc[]
1046
1047
1048 Locks
1049 -----
1050
1051 Container migrations, snapshots and backups (`vzdump`) set a lock to prevent
1052 incompatible concurrent actions on the affected container. Sometimes you need
1053 to remove such a lock manually (e.g., after a power failure).
1054
1055 ----
1056 # pct unlock <CTID>
1057 ----
1058
1059 CAUTION: Only do this if you are sure the action which set the lock is no
1060 longer running.
1061
1062
1063 ifdef::manvolnum[]
1064
1065 Files
1066 ------
1067
1068 `/etc/pve/lxc/<CTID>.conf`::
1069
1070 Configuration file for the container '<CTID>'.
1071
1072
1073 include::pve-copyright.adoc[]
1074 endif::manvolnum[]