pct - Tool to manage Linux Containers (LXC) on Proxmox VE

include::pct.1-synopsis.adoc[]
Proxmox Container Toolkit
=========================
:title: Linux Container
Containers are a lightweight alternative to fully virtualized machines (VMs).
They use the kernel of the host system that they run on, instead of emulating a
full operating system (OS). This means that containers can access resources on
the host system directly.

The runtime costs for containers are low, usually negligible. However, there
are some drawbacks that need to be considered:

* Only Linux distributions can be run in Proxmox Containers. It is not possible
to run other operating systems like, for example, FreeBSD or Microsoft Windows
inside a container.

* For security reasons, access to host resources needs to be restricted.
Therefore, containers run in their own separate namespaces. Additionally, some
syscalls (user space requests to the Linux kernel) are not allowed within
containers.
{pve} uses https://linuxcontainers.org/lxc/introduction/[Linux Containers
(LXC)] as its underlying container technology. The ``Proxmox Container
Toolkit'' (`pct`) simplifies the usage and management of LXC, by providing an
interface that abstracts complex tasks.

Containers are tightly integrated with {pve}. This means that they are aware of
the cluster setup, and they can use the same network and storage resources as
virtual machines. You can also use the {pve} firewall, or manage containers
using the HA framework.

Our primary goal is to offer an environment that provides the benefits of using
a VM, but without the additional overhead. This means that Proxmox Containers
can be categorized as ``System Containers'', rather than ``Application
Containers''.

NOTE: If you want to run application containers, for example, 'Docker' images,
it is recommended that you run them inside a Proxmox QEMU VM. This will give
you all the advantages of application containerization, while also providing
the benefits that VMs offer, such as strong isolation from the host and the
ability to live-migrate, which otherwise isn't possible with containers.
Technology Overview
-------------------

* LXC (https://linuxcontainers.org/)

* Integrated into {pve} graphical web user interface (GUI)

* Easy to use command line tool `pct`

* Access via {pve} REST API

* 'lxcfs' to provide containerized /proc file system

* Control groups ('cgroups') for resource isolation and limitation

* 'AppArmor' and 'seccomp' to improve security

* Modern Linux kernels

* Image based deployment (xref:pct_supported_distributions[templates])

* Uses {pve} xref:chapter_storage[storage library]

* Container setup from host (network, DNS, storage, etc.)
[[pct_supported_distributions]]
Supported Distributions
-----------------------

A list of officially supported distributions can be found below.

Templates for the following distributions are available through our
repositories. You can use the xref:pct_container_images[pveam] tool or the
Graphical User Interface to download them.
Alpine Linux
~~~~~~~~~~~~

[quote, 'https://alpinelinux.org']
____
"Alpine Linux is a security-oriented, lightweight Linux distribution based on
musl libc and busybox."
____

https://alpinelinux.org/releases/

Arch Linux
~~~~~~~~~~

[quote, 'https://wiki.archlinux.org/title/Arch_Linux']
____
"a lightweight and flexible Linux® distribution that tries to Keep It Simple."
____
CentOS, AlmaLinux, Rocky Linux
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CentOS / CentOS Stream
^^^^^^^^^^^^^^^^^^^^^^

[quote, 'https://centos.org']
____
"The CentOS Linux distribution is a stable, predictable, manageable and
reproducible platform derived from the sources of Red Hat Enterprise Linux
(RHEL)."
____

https://wiki.centos.org/About/Product

AlmaLinux
^^^^^^^^^

[quote, 'https://almalinux.org']
____
"An Open Source, community owned and governed, forever-free enterprise Linux
distribution, focused on long-term stability, providing a robust
production-grade platform. AlmaLinux OS is 1:1 binary compatible with RHEL®
and pre-Stream CentOS."
____

https://en.wikipedia.org/wiki/AlmaLinux#Releases

Rocky Linux
^^^^^^^^^^^

[quote, 'https://rockylinux.org']
____
"Rocky Linux is a community enterprise operating system designed to be 100%
bug-for-bug compatible with America's top enterprise Linux distribution now
that its downstream partner has shifted direction."
____

https://en.wikipedia.org/wiki/Rocky_Linux#Releases
Debian
~~~~~~

[quote, 'https://www.debian.org/intro/index#software']
____
"Debian is a free operating system, developed and maintained by the Debian
project. A free Linux distribution with thousands of applications to meet our
users' needs."
____

https://www.debian.org/releases/stable/releasenotes

Devuan
~~~~~~

[quote, 'https://www.devuan.org']
____
"Devuan GNU+Linux is a fork of Debian without systemd that allows users to
reclaim control over their system by avoiding unnecessary entanglements and
ensuring Init Freedom."
____
Fedora
~~~~~~

[quote, 'https://getfedora.org']
____
"Fedora creates an innovative, free, and open source platform for hardware,
clouds, and containers that enables software developers and community members
to build tailored solutions for their users."
____

https://fedoraproject.org/wiki/Releases

Gentoo
~~~~~~

[quote, 'https://www.gentoo.org']
____
"a highly flexible, source-based Linux distribution."
____
OpenSUSE
~~~~~~~~

[quote, 'https://www.opensuse.org']
____
"The makers' choice for sysadmins, developers and desktop users."
____

https://get.opensuse.org/leap/

Ubuntu
~~~~~~

[quote, 'https://docs.ubuntu.com/']
____
"The world’s most popular Linux for desktop computing."
____

https://wiki.ubuntu.com/Releases
[[pct_container_images]]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything needed to run a
container.

{pve} itself provides a variety of basic templates for the
xref:pct_supported_distributions[most common Linux distributions]. They can be
downloaded using the GUI or the `pveam` (short for {pve} Appliance Manager)
command line utility. Additionally, https://www.turnkeylinux.org/[TurnKey
Linux] container templates are also available to download.
The list of available templates is updated daily through the 'pve-daily-update'
timer. You can also trigger an update manually by executing:
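----
# pveam update
----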
To view the list of available images run:
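----
# pveam available
----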
You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system alpine-3.12-default_20200823_amd64.tar.xz
system alpine-3.13-default_20210419_amd64.tar.xz
system alpine-3.14-default_20210623_amd64.tar.xz
system archlinux-base_20210420-1_amd64.tar.gz
system centos-7-default_20190926_amd64.tar.xz
system centos-8-default_20201210_amd64.tar.xz
system debian-9.0-standard_9.7-1_amd64.tar.gz
system debian-10-standard_10.7-1_amd64.tar.gz
system devuan-3.0-standard_3.0_amd64.tar.gz
system fedora-33-default_20201115_amd64.tar.xz
system fedora-34-default_20210427_amd64.tar.xz
system gentoo-current-default_20200310_amd64.tar.xz
system opensuse-15.2-default_20200824_amd64.tar.xz
system ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
system ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
system ubuntu-20.04-standard_20.04-1_amd64.tar.gz
system ubuntu-20.10-standard_20.10-1_amd64.tar.gz
system ubuntu-21.04-standard_21.04-1_amd64.tar.gz
----
Before you can use such a template, you need to download it into one of your
storages. If you're unsure which one, you can simply use the `local` named
storage for that purpose. For clustered installations, it is preferred to use a
shared storage so that all nodes can access those images.

----
# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
----
You are now ready to create containers using that image, and you can list all
downloaded images on storage `local` with:

----
# pveam list local
local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz  219.95MB
----

TIP: You can also use the {pve} web interface GUI to download, list and delete
container templates.

`pct` uses them to create a new container, for example:

----
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----
The above command shows you the full {pve} volume identifiers. They include the
storage name, and most other {pve} commands can use them. For example you can
delete that image later with:

----
# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----
Container Settings
------------------

General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-general.png"]

General settings of a container include

* the *Node*: the physical server on which the container will run
* the *CT ID*: a unique number in this {pve} installation used to identify your
container
* *Hostname*: the hostname of the container
* *Resource Pool*: a logical group of containers and VMs
* *Password*: the root password of the container
* *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
if you want to create a privileged or unprivileged container.
Unprivileged Containers
^^^^^^^^^^^^^^^^^^^^^^^

Unprivileged containers use a new kernel feature called user namespaces. The
root UID 0 inside the container is mapped to an unprivileged user outside the
container. This means that most security issues (container escape, resource
abuse, etc.) in these containers will affect a random unprivileged user, and
would be a generic kernel security bug rather than an LXC issue. The LXC team
thinks unprivileged containers are safe by design.

This is the default option when creating a new container.

NOTE: If the container uses systemd as an init system, please be aware the
systemd version running inside the container should be equal to or greater
than 220.

Privileged Containers
^^^^^^^^^^^^^^^^^^^^^

Security in containers is achieved by using mandatory access control
'AppArmor' restrictions, 'seccomp' filters and Linux kernel namespaces. The
LXC team considers this kind of container as unsafe, and they will not
consider new container escape exploits to be security issues worthy of a CVE
and quick fix. That's why privileged containers should only be used in trusted
environments.
CPU
~~~

[thumbnail="screenshot/gui-create-ct-cpu.png"]

You can restrict the number of visible CPUs inside the container using the
`cores` option. This is implemented using the Linux 'cpuset' cgroup
(**c**ontrol *group*). A special task inside `pvestatd` tries to distribute
running containers among available CPUs periodically. To view the assigned
CPUs run the following command:

----
# pct cpusets
----

Containers use the host kernel directly. All tasks inside a container are
handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
**F**air **S**cheduler) scheduler by default, which has additional bandwidth
control options.

`cpulimit`: :: You can use this option to further limit assigned CPU time.
Please note that this is a floating point number, so it is perfectly valid to
assign two cores to a container, but restrict overall CPU consumption to half
a core.

`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
larger the number is, the more CPU time this container gets. The number is
relative to the weights of all the other running containers. The default is
1024. You can use this setting to prioritize some containers.
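For example, the following sketch (CT ID `100` and all values are placeholders)
makes two cores visible to the container, caps total CPU usage at half a core,
and halves the scheduling weight relative to the default of 1024:

----
# pct set 100 -cores 2 -cpulimit 0.5 -cpuunits 512
----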
Memory
~~~~~~

[thumbnail="screenshot/gui-create-ct-memory.png"]

Container memory is controlled using the cgroup memory controller.

`memory`: :: Limit overall memory usage. This corresponds to the
`memory.limit_in_bytes` cgroup setting.

`swap`: :: Allows the container to use additional swap memory from the host
swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
setting, which is set to the sum of both values (`memory + swap`).
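Both values can be set with `pct`; a sketch with placeholder values, allowing
1024MB of RAM plus 512MB of host swap space:

----
# pct set 100 -memory 1024 -swap 512
----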
[[pct_mount_points]]
Mount Points
~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-root-disk.png"]

The root mount point is configured with the `rootfs` property. You can
configure up to 256 additional mount points. The corresponding options are
called `mp0` to `mp255`. They can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are three types of mount points: storage backed mount points,
bind mounts, and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----
Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
image a directory is created.

NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
mount point volumes will automatically allocate a volume of the specified size
on the specified storage. For example, calling

----
pct set 100 -mp0 thin1:10,mp=/path/in/container
----

will allocate a 10GB volume on the storage `thin1` and replace the volume ID
placeholder `10` with the allocated volume ID, and set up the mount point in
the container at `/path/in/container`.
Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are not managed by the storage subsystem, so you cannot make
snapshots or deal with quotas from inside the container. With unprivileged
containers you might run into permission problems caused by the user mapping,
and cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established using
source directories especially reserved for this purpose, e.g., a directory
hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
`/`, `/var` or `/etc` into a container - this poses a great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.
Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow you to mount block devices of the host directly into
the container. Similar to bind mounts, device mounts are not managed by
{PVE}'s storage subsystem, but the `quota` and `acl` options will be honored.

NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using
`vzdump`.
[[pct_container_network]]
Network
~~~~~~~

[thumbnail="screenshot/gui-create-ct-network.png"]

You can configure up to 10 network interfaces for a single container. The
corresponding options are called `net0` to `net9`, and they can contain the
following setting:

include::pct-network-opts.adoc[]
[[pct_startup_and_shutdown]]
Automatic Start and Shutdown of Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To automatically start a container when the host system boots, select the
option 'Start at boot' in the 'Options' panel of the container in the web
interface or run the following command:

----
# pct set CTID -onboot 1
----
.Start and Shutdown Order
// use the screenshot from qemu - it's the same
[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

If you want to fine tune the boot order of your containers, you can use the
following parameters (see the sketch after this list for the corresponding CLI
option):

* *Start/Shutdown order*: Defines the start order priority. For example, set it
to 1 if you want the CT to be the first to be started. (We use the reverse
startup order for shutdown, so a container with a start order of 1 would be
the last to be shut down.)
* *Startup delay*: Defines the interval between this container start and
subsequent containers starts. For example, set it to 240 if you want to wait
240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the container to be offline after issuing a shutdown command. By default
this value is set to 60, which means that {pve} will issue a shutdown request,
wait 60s for the machine to be offline, and if after 60s the machine is still
online it will notify that the shutdown action failed.
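These fields correspond to the `startup` property of the container. As a
sketch, with placeholder values, the following starts the container first,
waits 240 seconds before subsequent guests are started, and allows 60 seconds
for its shutdown:

----
# pct set 100 -startup order=1,up=240,down=60
----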
Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set, and this parameter only
makes sense between the machines running locally on a host, and not
cluster-wide.

If you require a delay between the host boot and the booting of the first
container, see the section on
xref:first_guest_boot_delay[Proxmox VE Node Management].
Hookscripts
~~~~~~~~~~~

You can add a hook script to CTs with the config property `hookscript`.

----
# pct set 100 -hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime. For an example
and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
Security Considerations
-----------------------

Containers use the kernel of the host system. This exposes an attack surface
for malicious users. In general, full virtual machines provide better
isolation. This should be considered if containers are provided to unknown or
untrusted people.

To reduce the attack surface, LXC uses many security features like AppArmor,
CGroups and kernel namespaces.

AppArmor
~~~~~~~~

AppArmor profiles are used to restrict access to possibly dangerous actions.
Some system calls, e.g. `mount`, are prohibited from execution.

To trace AppArmor activity, use:

----
# dmesg | grep apparmor
----
Although it is not recommended, AppArmor can be disabled for a container. This
brings security risks with it. Some syscalls can lead to privilege escalation
when executed within a container if the system is misconfigured or if an LXC
or Linux kernel vulnerability exists.

To disable AppArmor for a container, add the following line to the container
configuration file located at `/etc/pve/lxc/CTID.conf`:

----
lxc.apparmor.profile = unconfined
----

WARNING: Please note that this is not recommended for production use.
Control Groups ('cgroup')
~~~~~~~~~~~~~~~~~~~~~~~~~

'cgroup' is a kernel mechanism used to hierarchically organize processes and
distribute system resources.

The main resources controlled via 'cgroups' are CPU time, memory and swap
limits, and access to device nodes. 'cgroups' are also used to "freeze" a
container before taking snapshots.

There are 2 versions of 'cgroups' currently available,
https://www.kernel.org/doc/html/v5.11/admin-guide/cgroup-v1/index.html[legacy]
and
https://www.kernel.org/doc/html/v5.11/admin-guide/cgroup-v2.html['cgroupv2'].
Since {pve} 7.0, the default is a pure 'cgroupv2' environment. Previously a
"hybrid" setup was used, where resource control was mainly done in 'cgroupv1'
with an additional 'cgroupv2' controller which could take over some subsystems
via the 'cgroup_no_v1' kernel command line parameter. (See the
https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html[kernel
parameter documentation] for details.)

[[pct_cgroup_compat]]
CGroup Version Compatibility
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The main difference between pure 'cgroupv2' and the old hybrid environments
regarding {pve} is that with 'cgroupv2' memory and swap are now controlled
independently. The memory and swap settings for containers can map directly to
these values, whereas previously only the memory limit and the limit of the
*sum* of memory and swap could be set.

Another important difference is that the 'devices' controller is configured in
a completely different way. Because of this, file system quotas are currently
not supported in a pure 'cgroupv2' environment.
'cgroupv2' support by the container's OS is needed to run in a pure 'cgroupv2'
environment. Containers running 'systemd' version 231 or newer support
'cgroupv2' footnote:[this includes all newest major versions of container
templates shipped by {pve}], as do containers not using 'systemd' as init
system footnote:[for example Alpine Linux].
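Whether the host is running a pure 'cgroupv2' environment can be verified with
a generic file system query (not a {pve}-specific tool); it prints `cgroup2fs`
on a pure 'cgroupv2' setup, and `tmpfs` on the legacy/hybrid layout:

----
# stat -fc %T /sys/fs/cgroup/
cgroup2fs
----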
CentOS 7 and Ubuntu 16.10 are two prominent Linux distribution releases with a
'systemd' version that is too old to run in a 'cgroupv2' environment. If you
have such a container, you can either:

* Upgrade the whole distribution to a newer release. For the examples above,
that could be Ubuntu 18.04 or 20.04, or CentOS 8 (or RHEL/CentOS derivatives
like AlmaLinux or Rocky Linux). This has the benefit of getting the newest bug
and security fixes, often also new features, and of moving the EOL date into
the future.

* Upgrade the container's systemd version. If the distribution provides a
backports repository, this can be an easy and quick stop-gap measure.

* Move the container, or its services, to a virtual machine. Virtual machines
have much less interaction with the host, which is why one can install
decades-old OS versions there just fine.

* Switch back to the legacy 'cgroup' controller. Note that while it can be a
valid solution, it's not a permanent one. There's a high likelihood that a
future {pve} major release, for example 8.0, cannot support the legacy
controller anymore.
[[pct_cgroup_change_version]]
Changing CGroup Version
^^^^^^^^^^^^^^^^^^^^^^^

TIP: If file system quotas are not required and all containers support
'cgroupv2', it is recommended to stick to the new default.

To switch back to the previous version the following kernel command line
parameter can be used:

----
systemd.unified_cgroup_hierarchy=0
----

See xref:sysboot_edit_kernel_cmdline[this section] on editing the kernel boot
command line on where to add the parameter.

// TODO: seccomp a bit more.
// TODO: pve-lxc-syscalld
Guest Operating System Configuration
------------------------------------

{pve} tries to detect the Linux distribution in the container, and modifies
some files. Here is a short list of things done at container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all
containers

Changes made by {PVE} are enclosed by comment markers:
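----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----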
Those markers will be inserted at a reasonable location in the file. If such a
section already exists, it will be updated in place and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.` file for it.
For instance, if the file `/etc/.pve-ignore.hosts` exists then the `/etc/hosts`
file will not be touched. This can be a simple empty file created via:

----
# touch /etc/.pve-ignore.hosts
----

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications by
manually setting the `ostype` to `unmanaged`.
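For example, with a placeholder CT ID of `100`:

----
# pct set 100 -ostype unmanaged
----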
OS type detection is done by testing for certain files inside the
container. {pve} first checks the `/etc/os-release` file
footnote:[/etc/os-release replaces the multitude of per-distribution
release files https://manpages.debian.org/stable/systemd/os-release.5.en.html].
If that file is not present, or it does not contain a clearly recognizable
distribution identifier, the following distribution specific release files are
checked.

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the auto
detected one.
[[pct_container_storage]]
Container Storage
-----------------

The {pve} LXC container storage model is more flexible than traditional
container storage models. A container can have multiple mount points. This
makes it possible to use the best suited storage for each application.

For example the root file system of the container can be on slow and cheap
storage while the database can be on fast and distributed storage via a second
mount point. See section <<pct_mount_points, Mount Points>> for further
details.

Any storage type supported by the {pve} storage library can be used. This means
that containers can be stored on local (for example `lvm`, `zfs` or directory),
shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
Ceph. Advanced storage features like snapshots or clones can be used if the
underlying storage supports them. The `vzdump` backup tool can use snapshots to
provide consistent container backups.

Furthermore, local devices or local directories can be mounted directly using
'bind mounts'. This gives access to local resources inside a container with
practically zero overhead. Bind mounts can be used as an easy way to share data
between containers.

FUSE Mounts
~~~~~~~~~~~
WARNING: Because of existing issues in the Linux kernel's freezer subsystem the
usage of FUSE mounts inside a container is strongly advised against, as
containers need to be frozen for suspend or snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.
Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk space
that each user can use.

NOTE: This currently requires the use of legacy 'cgroups'.

NOTE: This only works on ext4 image based storage types and currently only
works with privileged containers.

Activating the `quota` option causes the following mount options to be used for
a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like on any other system. You can initialize the
`/aquota.user` and `/aquota.group` files by running:
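----
# quotacheck -cmug /
# quotaon /
----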
Then edit the quotas using the `edquota` command. Refer to the documentation of
the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing the
mount point's path instead of just `/`.
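For example, for an additional mount point mounted at `/mnt/data` inside the
container, the initialization becomes:

----
# quotacheck -cmug /mnt/data
# quotaon /mnt/data
----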
Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.

Backup of Container mount points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To include a mount point in backups, enable the `backup` option for it in the
container configuration. For an existing mount point `mp0`

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
----

add `backup=1` to enable it.

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----

NOTE: When creating a new mount point in the GUI, this option is enabled by
default.

To disable backups for a mount point, add `backup=0` in the way described
above, or uncheck the *Backup* checkbox on the GUI.
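The same flag can be toggled on the command line; a sketch reusing the mount
point definition from above:

----
# pct set 100 -mp0 guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----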
Replication of Container mount points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, additional mount points are replicated when the Root Disk is
replicated. If you want the {pve} storage replication mechanism to skip a mount
point, you can set the *Skip replication* option for that mount point.
As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
mount point to a different type of storage when the container has replication
configured requires *Skip replication* to be enabled for that mount point.
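On the command line this corresponds to the mount point's `replicate` flag; a
sketch with placeholder storage and volume names:

----
# pct set 100 -mp0 local-zfs:subvol-100-disk-1,mp=/data,replicate=0
----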
Backup and Restore
------------------

Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please refer to
the `vzdump` manual page for details.

Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the `pct
restore` command. By default, `pct restore` will attempt to restore as much of
the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the
command line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:
``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters are
explicitly set, the mount point configuration from the backed up configuration
file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points on the storage provided with
the `storage` parameter (default: `local`).
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root
user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.
``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only uses
the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore
time, for example:

* Set target storages, volume sizes and other options for each mount point
individually

* Redistribute backed up files according to new mount point scheme

* Restore to device and/or bind mount points (limited to root user)
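For example, the following sketch (archive name and storages are placeholders)
restores a backup of container 100 into a new container 123, placing an 8GB
root file system and a 4GB additional mount point on the storage `local-lvm`,
regardless of the configuration contained in the archive:

----
# pct restore 123 local:backup/vzdump-lxc-100-2021_07_01-12_00_00.tar.zst \
  -rootfs local-lvm:8 -mp0 local-lvm:4,mp=/var/lib/mysql
----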
Managing Containers with `pct`
------------------------------

The ``Proxmox Container Toolkit'' (`pct`) is the command line tool to manage
{pve} containers. It enables you to create or destroy containers, as well as
control the container execution (start, stop, reboot, migrate, etc.). It can be
used to set parameters in the config file of a container, for example the
network configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have already
downloaded the template via the web interface)

----
# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
----
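Start container 100

----
# pct start 100
----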
Start a login session via getty
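----
# pct console 100
----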
Enter the LXC namespace and run a shell as root user
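----
# pct enter 100
----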
Display the configuration
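----
# pct config 100
----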
Add a network interface called `eth0`, bridged to the host bridge `vmbr0`, set
the address and gateway, while it's running

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
----

Reduce the memory of the container to 512MB

----
# pct set 100 -memory 512
----

Destroying a container always removes it from Access Control Lists and it
always removes the firewall configuration of the container. You have to
activate '--purge', if you want to additionally remove the container from
replication jobs, backup jobs and HA resource configurations.

----
# pct destroy 100 --purge
----
Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by passing the `--debug` flag (replace
`CTID` with the container's CTID):

----
# pct start CTID --debug
----

Alternatively, you can use the following `lxc-start` command, which will save
the debug log to the file specified by the `-o` output option:

----
# lxc-start -n CTID -F -l DEBUG -o /tmp/lxc-CTID.log
----

This command will attempt to start the container in foreground mode. To stop
the container, run `pct shutdown CTID` or `pct stop CTID` in a second terminal.

The collected debug log is written to `/tmp/lxc-CTID.log`.

NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.
Migration
---------

If you have a cluster, you can migrate your containers with

----
# pct migrate <ctid> <target>
----

This works as long as your container is offline. If it has local volumes or
mount points defined, the migration will copy the content over the network to
the target host if the same storage is defined there.

Running containers cannot be live-migrated due to technical limitations. You
can do a restart migration, which shuts down, moves and then starts a container
again on the target node. As containers are very lightweight, this normally
results in a downtime of only a few hundred milliseconds.

A restart migration can be done through the web interface or by using the
`--restart` flag with the `pct migrate` command.

A restart migration will shut down the container and kill it after the
specified timeout (the default is 180 seconds). Then it will migrate the
container like an offline migration, and when finished, it starts the container
up again on the target node.
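A sketch of a restart migration from the command line (node name and timeout
value are placeholders):

----
# pct migrate 100 target-node --restart --timeout 120
----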
[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration, where
`<CTID>` is the numeric ID of the given container. Like all other files stored
inside `/etc/pve/`, they get automatically replicated to all other cluster
nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----
The configuration files are simple text files. You can edit them using a normal
text editor, for example, `vi` or `nano`. This is sometimes useful to do small
corrections, but keep in mind that you need to restart the container to apply
such changes.

For that reason, it is usually better to use the `pct` command to generate and
modify those files, or do the whole thing using the GUI. Our toolkit is smart
enough to instantaneously apply most changes to running containers. This
feature is called ``hot plug'', and there is no need to restart the container
in that case.

In cases where a change cannot be hot-plugged, it will be registered as a
pending change (shown in red color in the GUI). Such changes will only be
applied after rebooting the container.
File Format
~~~~~~~~~~~

The container configuration file uses a simple colon separated key/value
format. Each line has the following format:
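----
OPTION: value
----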
Blank lines in those files are ignored, and lines starting with a `#` character
are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for example:

----
lxc.init_cmd: /sbin/my_own_init
----

or

----
lxc.init_cmd = /sbin/my_own_init
----

The settings are passed directly to the LXC low-level tools.
Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot time
into a separate snapshot section within the same configuration file. For
example, after creating a snapshot called ``testsnapshot'', your configuration
file will look like this:

.Container configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
----

There are a few snapshot related properties like `parent` and `snaptime`. The
`parent` property is used to store the parent/child relationship between
snapshots. `snaptime` is the snapshot creation time stamp (Unix epoch).
Options
~~~~~~~

include::pct.conf.5-opts.adoc[]
Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to prevent
incompatible concurrent actions on the affected container. Sometimes you need
to remove such a lock manually (e.g., after a power failure). To do so, you
can run:
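----
# pct unlock <CTID>
----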
CAUTION: Only do this if you are sure the action which set the lock is no
longer running.
Files
-----

`/etc/pve/lxc/<CTID>.conf`::

Configuration file for the container '<CTID>'.

include::pve-copyright.adoc[]