pct - Tool to manage Linux Containers (LXC) on Proxmox VE

include::pct.1-synopsis.adoc[]

Proxmox Container Toolkit
=========================

:title: Linux Container

Containers are a lightweight alternative to fully virtualized machines (VMs).
They use the kernel of the host system that they run on, instead of emulating a
full operating system (OS). This means that containers can access resources on
the host system directly.

The runtime costs for containers are low, usually negligible. However, there
are some drawbacks that need to be considered:

* Only Linux distributions can be run in containers. It is not possible to run
other operating systems like, for example, FreeBSD or Microsoft Windows inside
a container.

* For security reasons, access to host resources needs to be restricted.
Containers run in their own separate namespaces. Additionally, some syscalls
are not allowed within containers.

{pve} uses https://linuxcontainers.org/[Linux Containers (LXC)] as its
underlying container technology. The ``Proxmox Container Toolkit'' (`pct`)
simplifies the usage and management of LXC containers.

Containers are tightly integrated with {pve}. This means that they are aware of
the cluster setup, and they can use the same network and storage resources as
virtual machines. You can also use the {pve} firewall, or manage containers
using the HA framework.

Our primary goal is to offer an environment as one would get from a VM, but
without the additional overhead. We call this ``System Containers''.

NOTE: If you want to run micro-containers, for example, 'Docker' or 'rkt', it
is best to run them inside a VM.

Technology Overview
-------------------

* LXC (https://linuxcontainers.org/)

* Integrated into {pve} graphical web user interface (GUI)

* Easy to use command line tool `pct`

* Access via the {pve} REST API

* 'lxcfs' to provide containerized /proc file system

* Control groups ('cgroups') for resource isolation and limitation

* 'AppArmor' and 'seccomp' to improve security

* Modern Linux kernels

* Image based deployment (templates)

* Uses {pve} xref:chapter_storage[storage library]

* Container setup from host (network, DNS, storage, etc.)

Security Considerations
-----------------------

Containers use the kernel of the host system. This exposes a large attack
surface to malicious users. This should be considered if containers are
provided to untrusted users. In general, full virtual machines provide better
isolation.

However, LXC uses many security features like AppArmor, CGroups and kernel
namespaces to reduce the attack surface.

AppArmor profiles are used to restrict access to possibly dangerous actions.
Some system calls, for example `mount`, are not allowed to be executed.

To trace AppArmor activity, use:

----
# dmesg | grep apparmor
----

Guest Operating System Configuration
------------------------------------

{pve} tries to detect the Linux distribution in the container, and modifies
some files. Here is a short list of things done at container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

Changes made by {PVE} are enclosed by comment markers:

----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the file. If such a
section already exists, it will be updated in place and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.` file for it.
For instance, if the file `/etc/.pve-ignore.hosts` exists then the `/etc/hosts`
file will not be touched. This can be a simple empty file created via:

----
# touch /etc/.pve-ignore.hosts
----

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications by
manually setting the `ostype` to `unmanaged`.

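For example, assuming an existing container with the ID 100, this could look
like the following on the command line:

----
# pct set 100 -ostype unmanaged
----
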
OS type detection is done by testing for certain files inside the container:

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the
auto-detected one.

[[pct_container_images]]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything to run a
container. `pct` uses them to create a new container, for example:

----
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

{pve} itself provides a variety of basic templates for the most common
Linux distributions. They can be downloaded using the GUI or the
`pveam` (short for {pve} Appliance Manager) command line utility.
Additionally, https://www.turnkeylinux.org/[TurnKey Linux]
container templates are also available to download.

The list of available templates is updated daily via cron. To trigger it
manually:

----
# pveam update
----

To view the list of available images run:

----
# pveam available
----

You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system          alpine-3.10-default_20190626_amd64.tar.xz
system          alpine-3.9-default_20190224_amd64.tar.xz
system          archlinux-base_20190924-1_amd64.tar.gz
system          centos-6-default_20191016_amd64.tar.xz
system          centos-7-default_20190926_amd64.tar.xz
system          centos-8-default_20191016_amd64.tar.xz
system          debian-10.0-standard_10.0-1_amd64.tar.gz
system          debian-8.0-standard_8.11-1_amd64.tar.gz
system          debian-9.0-standard_9.7-1_amd64.tar.gz
system          fedora-30-default_20190718_amd64.tar.xz
system          fedora-31-default_20191029_amd64.tar.xz
system          gentoo-current-default_20190718_amd64.tar.xz
system          opensuse-15.0-default_20180907_amd64.tar.xz
system          opensuse-15.1-default_20190719_amd64.tar.xz
system          ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
system          ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
system          ubuntu-19.04-standard_19.04-1_amd64.tar.gz
system          ubuntu-19.10-standard_19.10-1_amd64.tar.gz
----

Before you can use such a template, you need to download it into one of your
storages. You can simply use storage `local` for that purpose. For clustered
installations, it is preferred to use a shared storage so that all nodes can
access those images.

----
# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
----

You are now ready to create containers using that image, and you can list all
downloaded images on storage `local` with:

----
# pveam list local
local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz  219.95MB
----

The above command shows you the full {pve} volume identifiers. They include the
storage name, and most other {pve} commands can use them. For example you can
delete that image later with:

----
# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

[[pct_container_storage]]
Container Storage
-----------------

The {pve} LXC container storage model is more flexible than traditional
container storage models. A container can have multiple mount points. This
makes it possible to use the best suited storage for each application.

For example the root file system of the container can be on slow and cheap
storage while the database can be on fast and distributed storage via a second
mount point. See section <<pct_mount_points, Mount Points>> for further details.

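As an illustration only (the storage names and volume IDs below are
assumptions, not defaults), such a split could look like this in a container
configuration:

----
rootfs: local:100/vm-100-disk-0.raw,size=8G
mp0: ceph-pool:vm-100-disk-1,mp=/var/lib/mysql,size=32G
----
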
Any storage type supported by the {pve} storage library can be used. This means
that containers can be stored on local (for example `lvm`, `zfs` or directory),
shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
Ceph. Advanced storage features like snapshots or clones can be used if the
underlying storage supports them. The `vzdump` backup tool can use snapshots to
provide consistent container backups.

Furthermore, local devices or local directories can be mounted directly using
'bind mounts'. This gives access to local resources inside a container with
practically zero overhead. Bind mounts can be used as an easy way to share data
between different containers.

FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer subsystem,
the usage of FUSE mounts inside a container is strongly advised against, as
containers need to be frozen for suspend or snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.

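A minimal sketch of this workaround, assuming an SSHFS share and container 100
(the remote share, local path and container ID are examples):

----
# sshfs user@fileserver:/export /mnt/bindmounts/fuse-share
# pct set 100 -mp0 /mnt/bindmounts/fuse-share,mp=/mnt/share
----
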
Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk space
that each user can use.

NOTE: This only works on ext4 image based storage types and currently only
works with privileged containers.

Activating the `quota` option causes the following mount options to be used for
a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

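For example (the container ID, storage and size below are assumptions), the
option can be enabled on a mount point like this:

----
# pct set 100 -mp0 local:8,mp=/data,quota=1
----
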
This allows quotas to be used like on any other system. You can initialize the
`/aquota.user` and `/aquota.group` files by running:

----
# quotacheck -cmug /
# quotaon /
----

and edit the quotas via the `edquota` command. Refer to the documentation of
the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing the
mount point's path instead of just `/`.

Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.

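As a quick illustration (the user name and path are examples), inside the
container you could grant an additional user write access to a directory and
inspect the result:

----
# setfacl -m u:www-data:rwx /srv/data
# getfacl /srv/data
----
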
Backup of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To include a mount point in backups, enable the `backup` option for it in the
container configuration. For an existing mount point `mp0`

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
----

add `backup=1` to enable it:

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----

NOTE: When creating a new mount point in the GUI, this option is enabled by
default.

To disable backups for a mount point, add `backup=0` in the way described
above, or uncheck the *Backup* checkbox on the GUI.

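The same can be done on the command line; a sketch, assuming container 100 and
the mount point shown above:

----
# pct set 100 -mp0 guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=0
----
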
Replication of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, additional mount points are replicated when the Root Disk is
replicated. If you want the {pve} storage replication mechanism to skip a mount
point, you can set the *Skip replication* option for that mount point. +
As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
mount point to a different type of storage when the container has replication
configured requires *Skip replication* to be enabled for that mount point.

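In the configuration file, *Skip replication* corresponds to the mount point
option `replicate=0`; for example (the storage name is an assumption):

----
mp0: local-lvm:vm-100-disk-1,mp=/data,replicate=0
----
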
[[pct_settings]]
Container Settings
------------------

[[pct_general]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-general.png"]

General settings of a container include

* the *Node*: the physical server on which the container will run
* the *CT ID*: a unique number in this {pve} installation used to identify your
container
* *Hostname*: the hostname of the container
* *Resource Pool*: a logical group of containers and VMs
* *Password*: the root password of the container
* *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
if you want to create a privileged or unprivileged container.

Unprivileged Containers
^^^^^^^^^^^^^^^^^^^^^^^

Unprivileged containers use a new kernel feature called user namespaces. The
root UID 0 inside the container is mapped to an unprivileged user outside the
container. This means that most security issues (container escape, resource
abuse, etc.) in these containers will affect a random unprivileged user, and
would be a generic kernel security bug rather than an LXC issue. The LXC team
thinks unprivileged containers are safe by design.

This is the default option when creating a new container.

NOTE: If the container uses systemd as an init system, please be aware the
systemd version running inside the container should be equal to or greater
than 220.

Privileged Containers
^^^^^^^^^^^^^^^^^^^^^

Security in containers is achieved by using mandatory access control
(AppArmor), SecComp filters and namespaces. The LXC team considers this kind of
container as unsafe, and they will not consider new container escape exploits
to be security issues worthy of a CVE and quick fix. That's why privileged
containers should only be used in trusted environments.

WARNING: Although it is not recommended, AppArmor can be disabled for a
container. This brings security risks with it. Some syscalls can lead to
privilege escalation when executed within a container if the system is
misconfigured or if an LXC or Linux Kernel vulnerability exists.

To disable AppArmor for a container, add the following line to the container
configuration file located at `/etc/pve/lxc/CTID.conf`:

----
lxc.apparmor_profile = unconfined
----

Please note that this is not recommended for production use.

[[pct_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-ct-cpu.png"]

You can restrict the number of visible CPUs inside the container using the
`cores` option. This is implemented using the Linux 'cpuset' cgroup
(**c**ontrol *group*). A special task inside `pvestatd` tries to distribute
running containers among available CPUs. To view the assigned CPUs run
the following command:

----
# pct cpusets
 ---------------------
 102:              6 7
 105:      2 3 4 5
 108:  0 1
 ---------------------
----

Containers use the host kernel directly. All tasks inside a container are
handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
**F**air **S**cheduler) scheduler by default, which has additional bandwidth
control options.

`cpulimit`: :: You can use this option to further limit assigned CPU time.
Please note that this is a floating point number, so it is perfectly valid to
assign two cores to a container, but restrict overall CPU consumption to half
a core.

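For example, the following configuration assigns two cores but restricts
overall CPU consumption to half a core:

----
cores: 2
cpulimit: 0.5
----
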
`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
larger the number is, the more CPU time this container gets. The number is
relative to the weights of all the other running containers. The default is
1024. You can use this setting to prioritize some containers.

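For instance, to give container 100 (an example ID) twice the default weight:

----
# pct set 100 -cpuunits 2048
----
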
[[pct_memory]]
Memory
~~~~~~

[thumbnail="screenshot/gui-create-ct-memory.png"]

Container memory is controlled using the cgroup memory controller.

`memory`: :: Limit overall memory usage. This corresponds to the
`memory.limit_in_bytes` cgroup setting.

`swap`: :: Allows the container to use additional swap memory from the host
swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
setting, which is set to the sum of both values (`memory + swap`).

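Both limits can be set together on the command line, for example (values in
MB, container ID assumed):

----
# pct set 100 -memory 1024 -swap 512
----
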
[[pct_mount_points]]
Mount Points
~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-root-disk.png"]

The root mount point is configured with the `rootfs` property. You can
configure up to 256 additional mount points. The corresponding options are
called `mp0` to `mp255`. They can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are three types of mount points: storage backed mount points,
bind mounts, and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----

Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
  system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
  and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
  image a directory is created.

NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
mount point volumes will automatically allocate a volume of the specified size
on the specified storage. For example, calling
`pct set 100 -mp0 thin1:10,mp=/path/in/container` will allocate a 10GB volume
on the storage `thin1` and replace the volume ID placeholder `10` with the
allocated volume ID.

Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are considered to not be managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping and cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established using
source directories especially reserved for this purpose, e.g., a directory
hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
`/`, `/var` or `/etc` into a container - this poses a great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.

Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow you to mount block devices of the host directly into
the container. Similar to bind mounts, device mounts are not managed by
{PVE}'s storage subsystem, but the `quota` and `acl` options will be honored.

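For example (the device path and mount path below are placeholders for your
setup):

----
# pct set 100 -mp0 /dev/sdb1,mp=/mnt/device
----
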
NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using `vzdump`.

[[pct_container_network]]
Network
~~~~~~~

[thumbnail="screenshot/gui-create-ct-network.png"]

You can configure up to 10 network interfaces for a single container. The
corresponding options are called `net0` to `net9`, and they can contain the
following settings:

include::pct-network-opts.adoc[]

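A bridged interface using DHCP, for example (container ID and bridge name are
assumptions), can be added with:

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=dhcp
----
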
[[pct_startup_and_shutdown]]
Automatic Start and Shutdown of Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To automatically start a container when the host system boots, select the
option 'Start at boot' in the 'Options' panel of the container in the web
interface or run the following command:

----
# pct set CTID -onboot 1
----

.Start and Shutdown Order
// use the screenshot from qemu - its the same
[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

If you want to fine tune the boot order of your containers, you can use the
following parameters (a command line example follows the list):

* *Start/Shutdown order*: Defines the start order priority. For example, set it
to 1 if you want the CT to be the first to be started. (We use the reverse
startup order for shutdown, so a container with a start order of 1 would be
the last to be shut down.)
* *Startup delay*: Defines the interval in seconds between this container's
start and the start of subsequent containers. For example, set it to 240 if
you want to wait 240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait for the
container to be offline after issuing a shutdown command. By default this
value is set to 60, which means that {pve} will issue a shutdown request, wait
60s for the machine to be offline, and if after 60s the machine is still
online will notify that the shutdown action failed.

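These values map to the `startup` property; a sketch, assuming container 100
should start first, wait 240 seconds before the next guest starts, and get a
60 second shutdown timeout:

----
# pct set 100 -startup order=1,up=240,down=60
----
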
Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set, and this parameter only
makes sense between the machines running locally on a host, and not
cluster-wide.

Hookscripts
~~~~~~~~~~~

You can add a hook script to CTs with the config property `hookscript`.

----
# pct set 100 -hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime. For an example
and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.

Backup and Restore
------------------

Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please refer to
the `vzdump` manual page for details.

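A typical invocation could look like this (the storage name is an example):

----
# vzdump 100 -storage local -mode snapshot
----
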
Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the
`pct restore` command. By default, `pct restore` will attempt to restore as
much of the backed up container configuration as possible. It is possible to
override the backed up configuration by manually setting container options on
the command line (see the `pct` manual page for details).

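For example (the archive name and target CT ID below are placeholders):

----
# pct restore 123 local:backup/vzdump-lxc-100-2019_12_31-12_00_00.tar.zst
----
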
NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:

``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters are
explicitly set, the mount point configuration from the backed up configuration
file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
`storage` parameter, or default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root
user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.

``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only uses
the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore
time, for example (a command line sketch follows the list):

* Set target storages, volume sizes and other options for each mount point
individually
* Redistribute backed up files according to new mount point scheme
* Restore to device and/or bind mount points (limited to root user)

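A sketch of such an invocation (the archive name, storage names and sizes are
placeholders):

----
# pct restore 123 local:backup/vzdump-lxc-100-2019_12_31-12_00_00.tar.zst \
    -rootfs ssd-storage:8 -mp0 hdd-storage:32,mp=/data
----
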
Managing Containers with `pct`
------------------------------

The ``Proxmox Container Toolkit'' (`pct`) is the command line tool to manage
{pve} containers. It enables you to create or destroy containers, as well as
control the container execution (start, stop, reboot, migrate, etc.). It can
be used to set parameters in the config file of a container, for example the
network configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have already
downloaded the template via the web interface)

----
# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
----

Start the container

----
# pct start 100
----

Start a login session via getty

----
# pct console 100
----

Enter the LXC namespace and run a shell as root user

----
# pct enter 100
----

Display the configuration

----
# pct config 100
----

Add a network interface called `eth0`, bridged to the host bridge `vmbr0`, set
the address and gateway, while it's running

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
----

Reduce the memory of the container to 512MB

----
# pct set 100 -memory 512
----

Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by running `lxc-start` (replace `ID` with
the container's ID):

----
# lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
----

This command will attempt to start the container in foreground mode. To stop
the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.

The collected debug log is written to `/tmp/lxc-ID.log`.

NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.

Migration
---------

If you have a cluster, you can migrate your containers with

----
# pct migrate <ctid> <target>
----

This works as long as your container is offline. If it has local volumes or
mount points defined, the migration will copy the content over the network to
the target host if the same storage is defined there.

If you want to migrate online containers, the only way is to use restart
migration. This can be initiated with the `-restart` flag and the optional
`-timeout` parameter.

A restart migration will shut down the container and kill it after the
specified timeout (the default is 180 seconds). Then it will migrate the
container like an offline migration and when finished, it starts the container
on the target node.

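For example, a restart migration with a two minute timeout (the node name is a
placeholder):

----
# pct migrate 100 target-node --restart --timeout 120
----
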
[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration, where
`<CTID>` is the numeric ID of the given container. Like all other files stored
inside `/etc/pve/`, they get automatically replicated to all other cluster
nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

The configuration files are simple text files. You can edit them using a normal
text editor (`vi`, `nano`, etc.). This is sometimes useful to do small
corrections, but keep in mind that you need to restart the container to apply
such changes.

For that reason, it is usually better to use the `pct` command to generate and
modify those files, or do the whole thing using the GUI. Our toolkit is smart
enough to instantaneously apply most changes to running containers. This
feature is called ``hot plug'', and there is no need to restart the container
in that case.

In cases where a change cannot be hot plugged, it will be registered as a
pending change (shown in red color in the GUI). Such pending changes will only
be applied after rebooting the container.

File Format
~~~~~~~~~~~

The container configuration file uses a simple colon separated key/value
format. Each line has the following format:

----
# this is a comment
OPTION: value
----

Blank lines in those files are ignored, and lines starting with a `#` character
are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for example:

----
lxc.init_cmd: /sbin/my_own_init
----

or

----
lxc.init_cmd = /sbin/my_own_init
----

The settings are passed directly to the LXC low-level tools.

Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot time
into a separate snapshot section within the same configuration file. For
example, after creating a snapshot called ``testsnapshot'', your configuration
file will look like this:

.Container configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
----

There are a few snapshot related properties like `parent` and `snaptime`. The
`parent` property is used to store the parent/child relationship between
snapshots. `snaptime` is the snapshot creation time stamp (Unix epoch).

Options
~~~~~~~

include::pct.conf.5-opts.adoc[]

Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to prevent
incompatible concurrent actions on the affected container. Sometimes you need
to remove such a lock manually (e.g., after a power failure).

----
# pct unlock <CTID>
----

CAUTION: Only do this if you are sure the action which set the lock is no
longer running.

Files
-----

`/etc/pve/lxc/<CTID>.conf`::

Configuration file for the container '<CTID>'.

include::pve-copyright.adoc[]