pct - Tool to manage Linux Containers (LXC) on Proxmox VE

include::pct.1-synopsis.adoc[]

Proxmox Container Toolkit
=========================
:title: Linux Container

Containers are a lightweight alternative to fully virtualized machines (VMs).
They use the kernel of the host system that they run on, instead of emulating a
full operating system (OS). This means that containers can access resources on
the host system directly.

The runtime costs for containers are low, usually negligible. However, there
are some drawbacks that need to be considered:

* Only Linux distributions can be run in containers. It is not possible to run
other operating systems like, for example, FreeBSD or Microsoft Windows inside
a container.

* For security reasons, access to host resources needs to be restricted.
Containers run in their own separate namespaces. Additionally, some syscalls
are not allowed within containers.

{pve} uses https://linuxcontainers.org/[Linux Containers (LXC)] as underlying
container technology. The ``Proxmox Container Toolkit'' (`pct`) simplifies the
usage and management of LXC containers.

Containers are tightly integrated with {pve}. This means that they are aware of
the cluster setup, and they can use the same network and storage resources as
virtual machines. You can also use the {pve} firewall, or manage containers
using the HA framework.

Our primary goal is to offer an environment as one would get from a VM, but
without the additional overhead. We call this ``System Containers''.

NOTE: If you want to run micro-containers, for example, 'Docker' or 'rkt', it
is best to run them inside a VM.

Technology Overview
-------------------

* LXC (https://linuxcontainers.org/)

* Integrated into {pve} graphical web user interface (GUI)

* Easy to use command line tool `pct`

* Access via {pve} REST API

* 'lxcfs' to provide containerized /proc file system

* Control groups ('cgroups') for resource isolation and limitation

* 'AppArmor' and 'seccomp' to improve security

* Modern Linux kernels

* Image based deployment (templates)

* Uses {pve} xref:chapter_storage[storage library]

* Container setup from host (network, DNS, storage, etc.)

Security Considerations
-----------------------

Containers use the kernel of the host system. This creates a big attack surface
for malicious users. This should be considered if containers are provided to
untrustworthy people. In general, full virtual machines provide better
isolation.

However, LXC uses many security features like AppArmor, CGroups and kernel
namespaces to reduce the attack surface.

AppArmor profiles are used to restrict access to possibly dangerous actions.
Some system calls, for example `mount`, are prohibited from execution.

To trace AppArmor activity, use:

----
# dmesg | grep apparmor
----

[[pct_container_images]]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything to run a container.
`pct` uses them to create a new container, for example:

----
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

{pve} itself provides a variety of basic templates for the most common Linux
distributions. They can be downloaded using the GUI or the `pveam` (short for
{pve} Appliance Manager) command line utility.
Additionally, https://www.turnkeylinux.org/[TurnKey Linux] container templates
are also available to download.

The list of available templates is updated daily via cron. To trigger the
update manually, run:

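----
# pveam update
----
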
To view the list of available images run:

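----
# pveam available
----
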
You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system          alpine-3.10-default_20190626_amd64.tar.xz
system          alpine-3.9-default_20190224_amd64.tar.xz
system          archlinux-base_20190924-1_amd64.tar.gz
system          centos-6-default_20191016_amd64.tar.xz
system          centos-7-default_20190926_amd64.tar.xz
system          centos-8-default_20191016_amd64.tar.xz
system          debian-10.0-standard_10.0-1_amd64.tar.gz
system          debian-8.0-standard_8.11-1_amd64.tar.gz
system          debian-9.0-standard_9.7-1_amd64.tar.gz
system          fedora-30-default_20190718_amd64.tar.xz
system          fedora-31-default_20191029_amd64.tar.xz
system          gentoo-current-default_20190718_amd64.tar.xz
system          opensuse-15.0-default_20180907_amd64.tar.xz
system          opensuse-15.1-default_20190719_amd64.tar.xz
system          ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
system          ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
system          ubuntu-19.04-standard_19.04-1_amd64.tar.gz
system          ubuntu-19.10-standard_19.10-1_amd64.tar.gz
----

Before you can use such a template, you need to download it into one of your
storages. You can simply use storage `local` for that purpose. For clustered
installations, it is preferred to use a shared storage so that all nodes can
access those images.

----
# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
----

You are now ready to create containers using that image, and you can list all
downloaded images on storage `local` with:

----
# pveam list local
local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz  219.95MB
----

The above command shows you the full {pve} volume identifiers. They include the
storage name, and most other {pve} commands can use them. For example you can
delete that image later with:

----
# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

[[pct_container_storage]]
Container Storage
-----------------

The {pve} LXC container storage model is more flexible than traditional
container storage models. A container can have multiple mount points. This
makes it possible to use the best suited storage for each application.

For example the root file system of the container can be on slow and cheap
storage while the database can be on fast and distributed storage via a second
mount point. See section <<pct_mount_points, Mount Points>> for further
details.

Any storage type supported by the {pve} storage library can be used. This means
that containers can be stored on local (for example `lvm`, `zfs` or directory),
shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
Ceph. Advanced storage features like snapshots or clones can be used if the
underlying storage supports them. The `vzdump` backup tool can use snapshots to
provide consistent container backups.

Furthermore, local devices or local directories can be mounted directly using
'bind mounts'. This gives access to local resources inside a container with
practically zero overhead. Bind mounts can be used as an easy way to share data
between different containers.

WARNING: Because of existing issues in the Linux kernel's freezer subsystem,
the use of FUSE mounts inside a container is strongly discouraged, as
containers need to be frozen for suspend or snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.

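For example, an SSHFS share could be mounted on the host and then handed to
container `100` through a bind mount point (the remote host, paths and
container ID here are placeholders, not part of any default setup):

----
# sshfs user@fileserver:/export /mnt/bindmounts/fuse-share
# pct set 100 -mp0 /mnt/bindmounts/fuse-share,mp=/mnt/share
----
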
Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk space
that each user can use.

NOTE: This only works on ext4 image based storage types and currently only
works with privileged containers.

Activating the `quota` option causes the following mount options to be used for
a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like on any other system. You can initialize the
`/aquota.user` and `/aquota.group` files by running:

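----
# quotacheck -cmug /
# quotaon /
----
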
Then edit the quotas using the `edquota` command. Refer to the documentation of
the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing the
mount point's path instead of just `/`.

Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.

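For example, to additionally grant the `www-data` user read access to a file
inside the container, without changing its owner or group (the user name and
path are just examples):

----
# setfacl -m u:www-data:r /srv/data/report.txt
# getfacl /srv/data/report.txt
----
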
Backup of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To include a mount point in backups, enable the `backup` option for it in the
container configuration. For an existing mount point `mp0`

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
----

add `backup=1` to enable it.

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----

NOTE: When creating a new mount point in the GUI, this option is enabled by
default.

To disable backups for a mount point, add `backup=0` in the way described
above, or uncheck the *Backup* checkbox on the GUI.

Replication of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, additional mount points are replicated when the Root Disk is
replicated. If you want the {pve} storage replication mechanism to skip a mount
point, you can set the *Skip replication* option for that mount point.
As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
mount point to a different type of storage when the container has replication
configured requires *Skip replication* to be enabled for that mount point.

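On the command line, *Skip replication* corresponds to the `replicate` option
of the mount point. For example, for an existing mount point `mp0` (the volume
and path below are examples):

----
# pct set 100 -mp0 local-zfs:subvol-100-disk-1,mp=/data,replicate=0
----
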
Container Settings
------------------

General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-general.png"]

General settings of a container include

* the *Node*: the physical server on which the container will run
* the *CT ID*: a unique number in this {pve} installation used to identify your
container
* *Hostname*: the hostname of the container
* *Resource Pool*: a logical group of containers and VMs
* *Password*: the root password of the container
* *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
if you want to create a privileged or unprivileged container.

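Most of these settings can also be passed directly to `pct create`. A sketch
(the CT ID, names and template path are examples):

----
# pct create 101 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz \
    -hostname web01 -pool mypool -unprivileged 1 \
    -ssh-public-keys /root/.ssh/id_rsa.pub
----
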
Unprivileged Containers
^^^^^^^^^^^^^^^^^^^^^^^

Unprivileged containers use a new kernel feature called user namespaces.
The root UID 0 inside the container is mapped to an unprivileged user outside
the container. This means that most security issues (container escape, resource
abuse, etc.) in these containers will affect a random unprivileged user, and
would be a generic kernel security bug rather than an LXC issue. The LXC team
thinks unprivileged containers are safe by design.

This is the default option when creating a new container.

NOTE: If the container uses systemd as an init system, please be aware the
systemd version running inside the container should be equal to or greater than
220.

Privileged Containers
^^^^^^^^^^^^^^^^^^^^^

Security in containers is achieved by using mandatory access control
('AppArmor'), 'seccomp' filters and namespaces. The LXC team considers this
kind of container as unsafe, and they will not consider new container escape
exploits to be security issues worthy of a CVE and quick fix. That's why
privileged containers should only be used in trusted environments.

Although it is not recommended, AppArmor can be disabled for a container. This
brings security risks with it. Some syscalls can lead to privilege escalation
when executed within a container if the system is misconfigured or if an LXC or
Linux Kernel vulnerability exists.

To disable AppArmor for a container, add the following line to the container
configuration file located at `/etc/pve/lxc/CTID.conf`:

----
lxc.apparmor_profile = unconfined
----

WARNING: Please note that this is not recommended for production use.

CPU
~~~

[thumbnail="screenshot/gui-create-ct-cpu.png"]

You can restrict the number of visible CPUs inside the container using the
`cores` option. This is implemented using the Linux 'cpuset' cgroup
(**c**ontrol *group*).
A special task inside `pvestatd` tries to distribute running containers among
available CPUs periodically.
To view the assigned CPUs run the following command:

----
# pct cpusets
 ---------------------
 102:              6 7
 105:      2 3 4 5
 108:  0 1
 ---------------------
----

Containers use the host kernel directly. All tasks inside a container are
handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
**F**air **S**cheduler) scheduler by default, which has additional bandwidth
control options.

`cpulimit`: :: You can use this option to further limit assigned CPU time.
Please note that this is a floating point number, so it is perfectly valid to
assign two cores to a container, but restrict overall CPU consumption to half a
core.

`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
larger the number is, the more CPU time this container gets. The number is
relative to the weights of all the other running containers. The default is
1024. You can use this setting to prioritize some containers.

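For example, the following gives container 100 two visible cores, caps its
total CPU usage at half a core, and doubles its scheduling weight (the values
are illustrative):

----
# pct set 100 -cores 2 -cpulimit 0.5 -cpuunits 2048
----
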
Memory
~~~~~~

[thumbnail="screenshot/gui-create-ct-memory.png"]

Container memory is controlled using the cgroup memory controller.

`memory`: :: Limit overall memory usage. This corresponds to the
`memory.limit_in_bytes` cgroup setting.

`swap`: :: Allows the container to use additional swap memory from the host
swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
setting, which is set to the sum of both values (`memory` + `swap`).

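For instance, with the following settings the container can use up to 1024MB
of RAM, while `memory.memsw.limit_in_bytes` is set to the sum, 1536MB:

----
# pct set 100 -memory 1024 -swap 512
----
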
[[pct_mount_points]]
Mount Points
~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-root-disk.png"]

The root mount point is configured with the `rootfs` property. You can
configure up to 256 additional mount points. The corresponding options are
called `mp0` to `mp255`. They can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are three types of mount points: storage backed mount points,
bind mounts, and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----

Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
image a directory is created.

NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
mount point volumes will automatically allocate a volume of the specified size
on the specified storage. For example, calling

----
pct set 100 -mp0 thin1:10,mp=/path/in/container
----

will allocate a 10GB volume on the storage `thin1`, replace the volume ID
placeholder `10` with the allocated volume ID, and set up the mount point in
the container at `/path/in/container`.

Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are considered to not be managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping and cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established using
source directories especially reserved for this purpose, e.g., a directory
hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
`/`, `/var` or `/etc` into a container - this poses a great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.

Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow you to mount block devices of the host directly into
the container. Similar to bind mounts, device mounts are not managed by
{PVE}'s storage subsystem, but the `quota` and `acl` options will be honored.

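For example, to make the host block device `/dev/sdb1` available at
`/mnt/data` inside container `100` (device name and path are placeholders):

----
# pct set 100 -mp0 /dev/sdb1,mp=/mnt/data
----
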
NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using
`vzdump`.

[[pct_container_network]]
Network
-------

[thumbnail="screenshot/gui-create-ct-network.png"]

You can configure up to 10 network interfaces for a single container.
The corresponding options are called `net0` to `net9`, and they can contain the
following settings:

include::pct-network-opts.adoc[]

[[pct_startup_and_shutdown]]
Automatic Start and Shutdown of Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To automatically start a container when the host system boots, select the
option 'Start at boot' in the 'Options' panel of the container in the web
interface or run the following command:

----
# pct set CTID -onboot 1
----

.Start and Shutdown Order
// use the screenshot from qemu - it's the same
[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

If you want to fine tune the boot order of your containers, you can use the
following parameters:

* *Start/Shutdown order*: Defines the start order priority. For example, set it
to 1 if you want the CT to be the first to be started. (We use the reverse
startup order for shutdown, so a container with a start order of 1 would be
the last to be shut down.)
* *Startup delay*: Defines the interval between this container's start and the
start of subsequent containers. For example, set it to 240 if you want to wait
240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the container to be offline after issuing a shutdown command.
By default this value is set to 60, which means that {pve} will issue a
shutdown request, wait 60s for the machine to be offline, and if the machine
is still online after 60s, report that the shutdown action failed.

Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set, and this parameter only
makes sense between the machines running locally on a host, and not
cluster-wide.

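On the command line, these three parameters map to the `startup` property. For
example (the values are illustrative):

----
# pct set 100 -startup order=1,up=240,down=60
----
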
Hookscripts
~~~~~~~~~~~

You can add a hook script to CTs with the config property `hookscript`.

----
# pct set 100 -hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime. For an example
and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.

Guest Operating System Configuration
------------------------------------

{pve} tries to detect the Linux distribution in the container, and modifies
some files. Here is a short list of things done at container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

Changes made by {PVE} are enclosed by comment markers:

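----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----
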
Those markers will be inserted at a reasonable location in the file. If such a
section already exists, it will be updated in place and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.` file for it.
For instance, if the file `/etc/.pve-ignore.hosts` exists then the `/etc/hosts`
file will not be touched. This can be a simple empty file created via:

----
# touch /etc/.pve-ignore.hosts
----

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications by
manually setting the `ostype` to `unmanaged`.

OS type detection is done by testing for certain files inside the
container. {pve} first checks the `/etc/os-release` file
footnote:[/etc/os-release replaces the multitude of per-distribution
release files https://manpages.debian.org/stable/systemd/os-release.5.en.html].
If that file is not present, or it does not contain a clearly recognizable
distribution identifier, the following distribution specific release files are
checked:

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the
auto-detected one.

Backup and Restore
------------------

Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please refer to
the `vzdump` manual page for details.

Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the `pct
restore` command. By default, `pct restore` will attempt to restore as much of
the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the
command line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:

``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters are
explicitly set, the mount point configuration from the backed up configuration
file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
`storage` parameter, or default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root
user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.

``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only uses
the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore
time, for example:

* Set target storages, volume sizes and other options for each mount point
individually
* Redistribute backed up files according to new mount point scheme
* Restore to device and/or bind mount points (limited to root user)

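A sketch of an advanced mode restore that overrides the root file system
target and adds one mount point (the archive name, storages and sizes below
are placeholders):

----
# pct restore 100 local:backup/vzdump-lxc-100.tar.gz \
    -rootfs local-lvm:8 \
    -mp0 thin1:10,mp=/srv/data
----
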
Managing Containers with `pct`
------------------------------

The ``Proxmox Container Toolkit'' (`pct`) is the command line tool to manage
{pve} containers. It enables you to create or destroy containers, as well as
control the container execution (start, stop, reboot, migrate, etc.). It can be
used to set parameters in the config file of a container, for example the
network configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have already
downloaded the template via the web interface)

----
# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
----

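Start container 100

----
# pct start 100
----
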
Start a login session via getty

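----
# pct console 100
----
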
Enter the LXC namespace and run a shell as root user

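----
# pct enter 100
----
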
Display the configuration

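----
# pct config 100
----
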
Add a network interface called `eth0`, bridged to the host bridge `vmbr0`, set
the address and gateway, while it's running

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
----

Reduce the memory of the container to 512MB

----
# pct set 100 -memory 512
----

Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by running `lxc-start` (replace `ID` with
the container's ID):

----
# lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
----

This command will attempt to start the container in foreground mode. To stop
the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.

The collected debug log is written to `/tmp/lxc-ID.log`.

NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.

Migration
---------

If you have a cluster, you can migrate your Containers with

----
# pct migrate <ctid> <target>
----

This works as long as your Container is offline. If it has local volumes or
mount points defined, the migration will copy the content over the network to
the target host if the same storage is defined there.

Running containers cannot be live-migrated due to technical limitations. You
can do a restart migration, which shuts down, moves and then starts a container
again on the target node. As containers are very lightweight, this normally
results in a downtime of only a few hundred milliseconds.

A restart migration can be done through the web interface or by using the
`--restart` flag with the `pct migrate` command.

A restart migration will shut down the Container and kill it after the
specified timeout (the default is 180 seconds). Then it will migrate the
Container like an offline migration, and when finished, it starts the Container
again on the target node.

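For example, a restart migration with a shorter timeout could look like this
(`targetnode` stands for the name of the target cluster node):

----
# pct migrate 100 targetnode --restart --timeout 120
----
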
[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration, where
`<CTID>` is the numeric ID of the given container. Like all other files stored
inside `/etc/pve/`, they get automatically replicated to all other cluster
nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster-wide.

.Example Container Configuration
----
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

The configuration files are simple text files. You can edit them using a normal
text editor, for example, `vi` or `nano`.
This is sometimes useful to make small corrections, but keep in mind that you
need to restart the container to apply such changes.

For that reason, it is usually better to use the `pct` command to generate and
modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to running
containers. This feature is called ``hot plug'', and there is no need to restart
the container in that case.

In cases where a change cannot be hot-plugged, it will be registered as a
pending change (shown in red in the GUI). Such pending changes are only applied
after rebooting the container.

File Format
~~~~~~~~~~~

The container configuration file uses a simple colon separated key/value
format. Each line has the following format:

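----
# this is a comment
OPTION: value
----
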
Blank lines in those files are ignored, and lines starting with a `#` character
are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for example:

----
lxc.init_cmd: /sbin/my_own_init
----

or

----
lxc.init_cmd = /sbin/my_own_init
----

The settings are passed directly to the LXC low-level tools.

Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot time
into a separate snapshot section within the same configuration file. For
example, after creating a snapshot called ``testsnapshot'', your configuration
file will look like this:

.Container configuration with snapshot
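----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----
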
There are a few snapshot related properties like `parent` and `snaptime`. The
`parent` property is used to store the parent/child relationship between
snapshots. `snaptime` is the snapshot creation time stamp (Unix epoch).

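Snapshots can also be managed with `pct` directly. A short sketch, using
container `100` and an arbitrary snapshot name:

----
# pct snapshot 100 testsnapshot
# pct listsnapshot 100
# pct rollback 100 testsnapshot
# pct delsnapshot 100 testsnapshot
----
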
Options
~~~~~~~

include::pct.conf.5-opts.adoc[]

Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to prevent
incompatible concurrent actions on the affected container. Sometimes you need
to remove such a lock manually (e.g., after a power failure).

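----
# pct unlock <CTID>
----
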
CAUTION: Only do this if you are sure the action which set the lock is no
longer running.

Files
-----

`/etc/pve/lxc/<CTID>.conf`::

Configuration file for the container '<CTID>'.

include::pve-copyright.adoc[]