+will allocate a 10GB volume on the storage `thin1`, replace the volume ID
+placeholder `10` with the allocated volume ID, and set up the mount point in
+the container at `/path/in/container`.
+
+
+Bind Mount Points
+^^^^^^^^^^^^^^^^^
+
+Bind mounts allow you to access arbitrary directories from your Proxmox VE host
+inside a container. Some potential use cases are:
+
+- Accessing your home directory in the guest
+- Accessing a USB device directory in the guest
+- Accessing an NFS mount from the host in the guest
+
+Bind mounts are not managed by the storage subsystem, so you cannot make
+snapshots or deal with quotas from inside the container. With unprivileged
+containers you might run into permission problems caused by the user mapping,
+and you cannot use ACLs.
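+
+As a hedged sketch: with the default ID mapping of unprivileged containers,
+container UID/GID 0 corresponds to host UID/GID 100000, so a reserved source
+directory (such as the `/mnt/bindmounts/shared` directory used in the example
+below) can be made writable for the container's root user with:
+
+----
+# chown -R 100000:100000 /mnt/bindmounts/shared
+----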
+
+NOTE: The contents of bind mount points are not backed up when using `vzdump`.
+
+WARNING: For security reasons, bind mounts should only be established using
+source directories especially reserved for this purpose, e.g., a directory
+hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
+`/`, `/var` or `/etc` into a container - this poses a great security risk.
+
+NOTE: The bind mount source path must not contain any symlinks.
+
+For example, to make the directory `/mnt/bindmounts/shared` accessible in the
+container with ID `100` under the path `/shared`, use a configuration line like
+`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
+Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
+achieve the same result.
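+
+For clarity, here is the same example as a command:
+
+----
+# pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared
+----
+
+This results in the following line in `/etc/pve/lxc/100.conf`:
+
+----
+mp0: /mnt/bindmounts/shared,mp=/shared
+----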
+
+
+Device Mount Points
+^^^^^^^^^^^^^^^^^^^
+
+Device mount points allow mounting block devices of the host directly into the
+container. Similar to bind mounts, device mounts are not managed by {PVE}'s
+storage subsystem, but the `quota` and `acl` options will be honored.
+
+NOTE: Device mount points should only be used under special circumstances. In
+most cases a storage backed mount point offers the same performance and a lot
+more features.
+
+NOTE: The contents of device mount points are not backed up when using
+`vzdump`.
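+
+As a hedged example, assuming the host block device `/dev/sdb` should be made
+available under `/mnt/device` in container `100`, a device mount point could
+be configured like this:
+
+----
+# pct set 100 -mp0 /dev/sdb,mp=/mnt/device
+----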
+
+
+[[pct_container_network]]
+Network
+~~~~~~~
+
+[thumbnail="screenshot/gui-create-ct-network.png"]
+
+You can configure up to 10 network interfaces for a single container.
+The corresponding options are called `net0` to `net9`, and they can contain the
+following settings:
+
+include::pct-network-opts.adoc[]
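+
+As a hedged example, assuming CT `100` and the default bridge `vmbr0`, an
+interface configured via DHCP could be added with:
+
+----
+# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=1
+----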
+
+
+[[pct_startup_and_shutdown]]
+Automatic Start and Shutdown of Containers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To automatically start a container when the host system boots, select the
+option 'Start at boot' in the 'Options' panel of the container in the web
+interface or run the following command:
+
+----
+# pct set CTID -onboot 1
+----
+
+.Start and Shutdown Order
+// use the screenshot from qemu - its the same
+[thumbnail="screenshot/gui-qemu-edit-start-order.png"]
+
+If you want to fine tune the boot order of your containers, you can use the
+following parameters:
+
+* *Start/Shutdown order*: Defines the start order priority. For example, set it
+ to 1 if you want the CT to be the first to be started. (We use the reverse
+ startup order for shutdown, so a container with a start order of 1 would be
+ the last to be shut down)
+* *Startup delay*: Defines the interval between this container's start and
+  subsequent container starts. For example, set it to 240 if you want to wait
+  240 seconds before starting other containers.
+* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
+  for the container to be offline after issuing a shutdown command.
+  By default this value is set to 60, which means that {pve} will issue a
+  shutdown request, wait 60s for the machine to be offline, and notify that
+  the shutdown action failed if the machine is still online after 60s.
+
+Please note that containers without a Start/Shutdown order parameter will
+always start after those where the parameter is set. Furthermore, this
+parameter can only be enforced between containers running on the same host,
+not cluster-wide.
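+
+All three parameters map to the `startup` property of the container. As a
+hedged example, assuming CT `100` should start first, delay subsequent starts
+by 240 seconds, and get a 60 second shutdown timeout:
+
+----
+# pct set 100 -startup order=1,up=240,down=60
+----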
+
+Hookscripts
+~~~~~~~~~~~
+
+You can add a hook script to CTs with the config property `hookscript`.
+
+----
+# pct set 100 -hookscript local:snippets/hookscript.pl
+----
+
+It will be called during various phases of the guest's lifetime. For an
+example and documentation, see the example script under
+`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
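+
+A hook script is simply an executable that is called with the guest ID and the
+current phase as arguments. A minimal sketch in shell, kept deliberately
+simpler than the shipped Perl example:
+
+----
+#!/bin/bash
+# called as: <script> <CTID> <phase>
+# phases: pre-start, post-start, pre-stop, post-stop
+ctid="$1"
+phase="$2"
+
+case "$phase" in
+    pre-start)  echo "CT $ctid is about to start" ;;
+    post-stop)  echo "CT $ctid was stopped" ;;
+esac
+
+exit 0
+----
+
+The script must be executable and stored on a storage that allows the
+`snippets` content type.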
+
+Security Considerations
+-----------------------
+
+Containers use the kernel of the host system. This exposes an attack surface
+for malicious users. In general, full virtual machines provide better
+isolation. This should be considered if containers are provided to unknown or
+untrusted people.
+
+To reduce the attack surface, LXC uses many security features like AppArmor,
+CGroups and kernel namespaces.
+
+AppArmor
+~~~~~~~~
+
+AppArmor profiles are used to restrict access to possibly dangerous actions.
+Some system calls, e.g. `mount`, are prohibited from execution.
+
+To trace AppArmor activity, use:
+
+----
+# dmesg | grep apparmor
+----
+
+Although it is not recommended, AppArmor can be disabled for a container. This
+brings security risks with it. Some syscalls can lead to privilege escalation
+when executed within a container if the system is misconfigured or if an LXC
+or Linux kernel vulnerability exists.
+
+To disable AppArmor for a container, add the following line to the container
+configuration file located at `/etc/pve/lxc/CTID.conf`:
+
+----
+lxc.apparmor.profile = unconfined
+----
+
+WARNING: This is not recommended for production use.
+
+
+// TODO: describe cgroups + seccomp a bit more.
+// TODO: pve-lxc-syscalld
+