+NOTE: The contents of device mount points are not backed up when using
+`vzdump`.
+
+
+[[pct_container_network]]
+Network
+~~~~~~~
+
+[thumbnail="screenshot/gui-create-ct-network.png"]
+
+You can configure up to 10 network interfaces for a single container.
+The corresponding options are called `net0` to `net9`, and they can contain the
+following settings:
+
+include::pct-network-opts.adoc[]
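+
+For example, the first interface of a container can be attached to the default
+bridge with DHCP like this (a sketch, assuming container ID 100 and the default
+bridge name `vmbr0`):
+
+----
+# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=1
+----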
+
+
+[[pct_startup_and_shutdown]]
+Automatic Start and Shutdown of Containers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To automatically start a container when the host system boots, select the
+option 'Start at boot' in the 'Options' panel of the container in the web
+interface or run the following command:
+
+----
+# pct set CTID -onboot 1
+----
+
+.Start and Shutdown Order
+// use the screenshot from qemu - it's the same
+[thumbnail="screenshot/gui-qemu-edit-start-order.png"]
+
+If you want to fine tune the boot order of your containers, you can use the
+following parameters:
+
+* *Start/Shutdown order*: Defines the start order priority. For example, set it
+ to 1 if you want the CT to be the first to be started. (We use the reverse
+ startup order for shutdown, so a container with a start order of 1 would be
+ the last to be shut down.)
+* *Startup delay*: Defines the interval between this container's start and the
+ start of subsequent containers. For example, set it to 240 if you want to
+ wait 240 seconds before starting other containers.
+* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
+ for the container to be offline after issuing a shutdown command.
+ By default this value is set to 60, which means that {pve} will issue a
+ shutdown request, wait 60 seconds for the machine to be offline, and if the
+ machine is still online after that time, report that the shutdown action
+ failed.
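+
+On the command line, all three values map to the `startup` option. A sketch,
+assuming container ID 100:
+
+----
+# pct set 100 -startup order=1,up=240,down=60
+----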
+
+Please note that containers without a Start/Shutdown order parameter will
+always start after those where the parameter is set. Further, this parameter
+can only be enforced between containers running locally on the same host, not
+cluster-wide.
+
+If you require a delay between the host boot and the booting of the first
+container, see the section on
+xref:first_guest_boot_delay[Proxmox VE Node Management].
+
+
+Hookscripts
+~~~~~~~~~~~
+
+You can add a hook script to CTs with the config property `hookscript`.
+
+----
+# pct set 100 -hookscript local:snippets/hookscript.pl
+----
+
+It will be called during various phases of the guest's lifetime. For an example
+and documentation see the example script under
+`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
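+
+For illustration, a minimal shell sketch of such a script; it is passed the
+guest ID and the current phase as arguments (see the example script above for
+the authoritative interface):
+
+----
+#!/bin/bash
+# $1 is the guest ID, $2 the phase (pre-start, post-start, pre-stop, post-stop)
+if [ "$2" = "post-start" ]; then
+    echo "guest $1 finished starting"
+fi
+----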
+
+Security Considerations
+-----------------------
+
+Containers use the kernel of the host system. This exposes an attack surface
+for malicious users. In general, full virtual machines provide better
+isolation. This should be considered if containers are provided to unknown or
+untrusted people.
+
+To reduce the attack surface, LXC uses many security features like AppArmor,
+CGroups and kernel namespaces.
+
+AppArmor
+~~~~~~~~
+
+AppArmor profiles are used to restrict access to possibly dangerous actions.
+Some system calls, for example `mount`, are prohibited from execution.
+
+To trace AppArmor activity, use:
+
+----
+# dmesg | grep apparmor
+----
+
+Although it is not recommended, AppArmor can be disabled for a container. This
+brings security risks with it. Some syscalls can lead to privilege escalation
+when executed within a container if the system is misconfigured or if an LXC
+or Linux kernel vulnerability exists.
+
+To disable AppArmor for a container, add the following line to the container
+configuration file located at `/etc/pve/lxc/CTID.conf`:
+
+----
+lxc.apparmor.profile = unconfined
+----
+
+WARNING: Please note that this is not recommended for production use.
+
+
+[[pct_cgroup]]
+Control Groups ('cgroup')
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+'cgroup' is a kernel
+mechanism used to hierarchically organize processes and distribute system
+resources.
+
+The main resources controlled via 'cgroups' are CPU time, memory and swap
+limits, and access to device nodes. 'cgroups' are also used to "freeze" a
+container before taking snapshots.
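+
+These resources are configured through the regular container options, from
+which {pve} derives the corresponding 'cgroup' limits. For example (a sketch,
+assuming container ID 100):
+
+----
+# pct set 100 -memory 1024 -swap 512 -cores 2
+----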
+
+There are 2 versions of 'cgroups' currently available,
+https://www.kernel.org/doc/html/v5.11/admin-guide/cgroup-v1/index.html[legacy]
+and
+https://www.kernel.org/doc/html/v5.11/admin-guide/cgroup-v2.html['cgroupv2'].
+
+Since {pve} 7.0, the default is a pure 'cgroupv2' environment. Previously a
+"hybrid" setup was used, where resource control was mainly done in 'cgroupv1'
+with an additional 'cgroupv2' controller which could take over some subsystems
+via the 'cgroup_no_v1' kernel command-line parameter. (See the
+https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html[kernel
+parameter documentation] for details.)
+
+[[pct_cgroup_compat]]
+CGroup Version Compatibility
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The main difference between pure 'cgroupv2' and the old hybrid environments
+regarding {pve} is that with 'cgroupv2' memory and swap are now controlled
+independently. The memory and swap settings for containers can map directly to
+these values, whereas previously only the memory limit and the limit of the
+*sum* of memory and swap could be set.
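+
+For illustration, in 'cgroupv2' the two limits correspond to separate control
+files (the exact path below is an assumption and depends on the cgroup layout
+of the host):
+
+----
+# cat /sys/fs/cgroup/lxc/100/memory.max       # memory limit in bytes
+# cat /sys/fs/cgroup/lxc/100/memory.swap.max  # swap limit, set independently
+----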
+
+Another important difference is that the 'devices' controller is configured in a
+completely different way. Because of this, file system quotas are currently not
+supported in a pure 'cgroupv2' environment.
+
+'cgroupv2' support by the container's OS is needed to run in a pure 'cgroupv2'
+environment. Containers running 'systemd' version 231 or newer support
+'cgroupv2' footnote:[this includes all newest major versions of container
+templates shipped by {pve}], as do containers not using 'systemd' as init
+system footnote:[for example Alpine Linux].
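+
+To check which 'systemd' version a running container ships, it can, for
+example, be queried from the host (assuming container ID 100):
+
+----
+# pct exec 100 -- systemctl --version
+----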
+
+[NOTE]
+====
+CentOS 7 and Ubuntu 16.10 are two prominent Linux distribution releases
+whose 'systemd' version is too old to run in a 'cgroupv2' environment. In
+such cases, you can either:
+
+* Upgrade the whole distribution to a newer release. For the examples above, that
+ could be Ubuntu 18.04 or 20.04, or CentOS 8 (or RHEL/CentOS derivatives like
+ AlmaLinux or Rocky Linux). This has the benefit of getting the newest bug and
+ security fixes, often also new features, and of moving the EOL date further
+ into the future.
+
+* Upgrade the container's 'systemd' version. If the distribution provides a
+ backports repository, this can be an easy and quick stop-gap measure.
+
+* Move the container, or its services, to a virtual machine. Virtual machines
+ interact much less with the host, which is why decades-old OS versions can
+ be installed there just fine.
+
+* Switch back to the legacy 'cgroup' controller. Note that while it can be a
+ valid solution, it's not a permanent one. Starting from {pve} 9.0, the legacy
+ controller will not be supported anymore.
+====
+
+[[pct_cgroup_change_version]]
+Changing CGroup Version
+^^^^^^^^^^^^^^^^^^^^^^^
+
+TIP: If file system quotas are not required and all containers support 'cgroupv2',
+it is recommended to stick to the new default.
+
+To switch back to the previous version, the following kernel command-line
+parameter can be used:
+
+----
+systemd.unified_cgroup_hierarchy=0
+----
+
+See xref:sysboot_edit_kernel_cmdline[this section] on editing the kernel boot
+command line for where to add the parameter.
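+
+To verify which 'cgroup' version a host currently uses, checking the file
+system type mounted at `/sys/fs/cgroup` is a common approach:
+
+----
+# stat -fc %T /sys/fs/cgroup/
+cgroup2fs
+----
+
+Here, `cgroup2fs` indicates a pure 'cgroupv2' environment, while `tmpfs`
+indicates the legacy hybrid layout.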
+
+// TODO: seccomp a bit more.
+// TODO: pve-lxc-syscalld
+
+
+Guest Operating System Configuration
+------------------------------------
+
+{pve} tries to detect the Linux distribution in the container, and modifies
+some files. Here is a short list of things done at container startup:
+
+set /etc/hostname:: to set the container name
+
+modify /etc/hosts:: to allow lookup of the local hostname
+
+network setup:: pass the complete network setup to the container
+
+configure DNS:: pass information about DNS servers
+
+adapt the init system:: for example, fix the number of spawned getty processes
+
+set the root password:: when creating a new container
+
+rewrite ssh_host_keys:: so that each container has unique keys
+
+randomize crontab:: so that cron does not start at the same time on all containers
+
+Changes made by {PVE} are enclosed by comment markers:
+
+----
+# --- BEGIN PVE ---
+<data>
+# --- END PVE ---
+----
+
+Those markers will be inserted at a reasonable location in the file. If such a
+section already exists, it will be updated in place and will not be moved.
+
+Modification of a file can be prevented by adding a `.pve-ignore.` file for it.
+For instance, if the file `/etc/.pve-ignore.hosts` exists then the `/etc/hosts`
+file will not be touched. This can be a simple empty file created via:
+
+----
+# touch /etc/.pve-ignore.hosts
+----
+
+Most modifications are OS dependent, so they differ between different
+distributions and versions. You can completely disable modifications by
+manually setting the `ostype` to `unmanaged`.
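+
+For example (assuming container ID 100):
+
+----
+# pct set 100 -ostype unmanaged
+----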
+
+OS type detection is done by testing for certain files inside the
+container. {pve} first checks the `/etc/os-release` file
+footnote:[/etc/os-release replaces the multitude of per-distribution
+release files https://manpages.debian.org/stable/systemd/os-release.5.en.html].
+If that file is not present, or does not contain a clearly recognizable
+distribution identifier, the following distribution-specific release files are
+checked.
+
+Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)
+
+Debian:: test /etc/debian_version
+
+Fedora:: test /etc/fedora-release
+
+RedHat or CentOS:: test /etc/redhat-release
+
+ArchLinux:: test /etc/arch-release
+
+Alpine:: test /etc/alpine-release
+
+Gentoo:: test /etc/gentoo-release
+
+NOTE: Container start fails if the configured `ostype` differs from the
+auto-detected type.
+
+
+[[pct_container_storage]]
+Container Storage
+-----------------
+
+The {pve} LXC container storage model is more flexible than traditional
+container storage models. A container can have multiple mount points. This
+makes it possible to use the best suited storage for each application.
+
+For example, the root file system of the container can be on slow and cheap
+storage while the database can be on fast and distributed storage via a second
+mount point. See section <<pct_mount_points, Mount Points>> for further
+details.
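+
+For example, a second, storage-backed mount point can be added like this (a
+sketch; `fast-storage` is a hypothetical storage ID and `8` the size of the
+new volume in GiB):
+
+----
+# pct set 100 -mp0 fast-storage:8,mp=/var/lib/mysql
+----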
+
+Any storage type supported by the {pve} storage library can be used. This means
+that containers can be stored on local (for example `lvm`, `zfs` or directory),
+shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
+Ceph. Advanced storage features like snapshots or clones can be used if the
+underlying storage supports them. The `vzdump` backup tool can use snapshots to
+provide consistent container backups.
+
+Furthermore, local devices or local directories can be mounted directly using
+'bind mounts'. This gives access to local resources inside a container with
+practically zero overhead. Bind mounts can be used as an easy way to share data
+between containers.
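+
+For example, a host directory can be made available inside the container like
+this (a sketch; the host path `/mnt/bindmounts/shared` is an assumption):
+
+----
+# pct set 100 -mp1 /mnt/bindmounts/shared,mp=/shared
+----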