+Backup of Container Mount Points
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+By default, additional mount points besides the Root Disk mount point are not
+included in backups. You can reverse this default behavior by setting the
+*Backup* option on a mount point.
+// see PVE::VZDump::LXC::prepare()
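+
+For example, assuming a container with ID `100` and an existing mount point
+volume `thin1:vm-100-disk-1` (both names are just examples), the following
+command would include that mount point in future backups:
+
+----
+# pct set 100 -mp0 thin1:vm-100-disk-1,mp=/data,backup=1
+----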
+
+Replication of Container Mount Points
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+By default, additional mount points are replicated when the Root Disk is
+replicated. If you want the {pve} storage replication mechanism to skip a
+mount point when starting a replication job, you can set the
+*Skip replication* option on that mount point. +
+As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
+mount point on a different type of storage to a container with replication
+configured therefore requires the *Skip replication* option to be set for
+that mount point.
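+
+In the configuration, *Skip replication* corresponds to the `replicate` flag
+on the mount point. For example, to exclude a hypothetical mount point `mp0`
+on container `100`, backed by a directory storage `local`, from replication:
+
+----
+# pct set 100 -mp0 local:vm-100-disk-2,mp=/data,replicate=0
+----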
+
+
+[[pct_settings]]
+Container Settings
+------------------
+
+[[pct_general]]
+General Settings
+~~~~~~~~~~~~~~~~
+
+[thumbnail="screenshot/gui-create-ct-general.png"]
+
+General settings of a container include
+
+* *Node*: the physical server on which the container will run
+* *CT ID*: a unique number in this {pve} installation used to identify your
+container
+* *Hostname*: the hostname of the container
+* *Resource Pool*: a logical group of containers and VMs
+* *Password*: the root password of the container
+* *SSH Public Key*: a public key for connecting to the root account over SSH
+* *Unprivileged container*: this option allows you to choose at creation time
+whether to create a privileged or unprivileged container.
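+
+Most of these settings can also be supplied on the command line when creating
+a container. A minimal sketch, where the template file name, hostname and
+pool name are only examples:
+
+----
+# pct create 100 local:vztmpl/debian-9.0-standard_9.0-2_amd64.tar.gz \
+    -hostname ct1 \
+    -pool mypool \
+    -ssh-public-keys ~/.ssh/id_rsa.pub \
+    -unprivileged 1
+----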
+
+
+Privileged Containers
+^^^^^^^^^^^^^^^^^^^^^
+
+Security in privileged containers is achieved by dropping capabilities,
+using mandatory access control (AppArmor), seccomp filters and
+namespaces. The LXC team considers this kind of container unsafe, and
+they will not consider new container escape exploits to be security
+issues worthy of a CVE and quick fix. So you should use this kind of
+container only inside a trusted environment, or when no untrusted task
+is running as root in the container.
+
+
+Unprivileged Containers
+^^^^^^^^^^^^^^^^^^^^^^^
+
+This kind of container uses a new kernel feature called user
+namespaces. The root UID 0 inside the container is mapped to an
+unprivileged user outside the container. This means that most security
+issues (container escape, resource abuse, ...) in these containers
+will affect a random unprivileged user, and would be a generic
+kernel security bug rather than an LXC issue. The LXC team thinks
+unprivileged containers are safe by design.
+
+NOTE: If the container uses systemd as an init system, please be
+aware that the systemd version running inside the container should be
+equal to or greater than 220.
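+
+One way to check the systemd version inside a container (using the example
+ID `100`) is to run `systemctl` through `pct exec`:
+
+----
+# pct exec 100 -- systemctl --version
+----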
+
+[[pct_cpu]]
+CPU
+~~~
+
+[thumbnail="screenshot/gui-create-ct-cpu.png"]
+
+You can restrict the number of visible CPUs inside the container using
+the `cores` option. This is implemented using the Linux 'cpuset'
+cgroup (**c**ontrol *group*). A special task inside `pvestatd` tries
+to distribute running containers among available CPUs. You can view
+the assigned CPUs using the following command:
+
+----
+# pct cpusets
+ ---------------------
+ 102: 6 7
+ 105: 2 3 4 5
+ 108: 0 1
+ ---------------------
+----
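+
+The `cores` option itself can be changed at any time. For example, to
+restrict a container (here with the example ID `100`) to two cores:
+
+----
+# pct set 100 -cores 2
+----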
+
+Containers use the host kernel directly, so all tasks inside a
+container are handled by the host CPU scheduler. {pve} uses the Linux
+'CFS' (**C**ompletely **F**air **S**cheduler) by default,
+which has additional bandwidth control options.
+
+[horizontal]
+
+`cpulimit`: :: You can use this option to further limit assigned CPU
+time. Please note that this is a floating point number, so it is
+perfectly valid to assign two cores to a container, but restrict
+overall CPU consumption to half a core.
++
+----
+cores: 2
+cpulimit: 0.5
+----
+
+`cpuunits`: :: This is a relative weight passed to the kernel
+scheduler. The larger the number is, the more CPU time this container
+gets. The number is relative to the weights of all the other running
+containers. The default is 1024. You can use this setting to
+prioritize some containers.
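+
+For example, to give a container (again using the example ID `100`) twice
+the default weight:
+
+----
+# pct set 100 -cpuunits 2048
+----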
+
+
+[[pct_memory]]
+Memory
+~~~~~~
+
+[thumbnail="screenshot/gui-create-ct-memory.png"]
+
+Container memory is controlled using the cgroup memory controller.
+
+[horizontal]
+
+`memory`: :: Limit overall memory usage. This corresponds
+to the `memory.limit_in_bytes` cgroup setting.
+
+`swap`: :: Allows the container to use additional swap memory from the
+host swap space. This corresponds to the `memory.memsw.limit_in_bytes`
+cgroup setting, which is set to the sum of both values (`memory +
+swap`).
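+
+Both values are given in megabytes on the command line. For example, to
+limit a container (example ID `100`) to 512MB of memory and 512MB of swap:
+
+----
+# pct set 100 -memory 512 -swap 512
+----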
+
+
+[[pct_mount_points]]
+Mount Points
+~~~~~~~~~~~~
+
+[thumbnail="screenshot/gui-create-ct-root-disk.png"]
+
+The root mount point is configured with the `rootfs` property, and you can
+configure up to 10 additional mount points. The corresponding options
+are called `mp0` to `mp9`, and they can contain the following settings:
+
+include::pct-mountpoint-opts.adoc[]
+
+Currently there are three types of mount points: storage backed
+mount points, bind mounts, and device mounts.
+
+.Typical container `rootfs` configuration
+----
+rootfs: thin1:base-100-disk-1,size=8G
+----
+
+
+Storage Backed Mount Points
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Storage backed mount points are managed by the {pve} storage subsystem and come
+in three different flavors:
+
+- Image based: these are raw images containing a single ext4 formatted file
+ system.
+- ZFS subvolumes: these are technically bind mounts, but with managed storage,
+ and thus allow resizing and snapshotting.
+- Directories: passing `size=0` triggers a special case where instead of a raw
+ image a directory is created.
+
+NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
+mount point volumes will automatically allocate a volume of the specified size
+on the specified storage. For example, calling
+`pct set 100 -mp0 thin1:10,mp=/path/in/container` will allocate a 10GB volume
+on the storage `thin1` and replace the volume ID placeholder `10` with the
+allocated volume ID.
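+
+Storage backed mount point volumes can later be grown with `pct resize`. For
+example, to grow the volume of the (hypothetical) mount point `mp0` of
+container `100` by 5 gigabytes:
+
+----
+# pct resize 100 mp0 +5G
+----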
+
+
+Bind Mount Points
+^^^^^^^^^^^^^^^^^
+
+Bind mounts allow you to access arbitrary directories from your {pve} host
+inside a container. Some potential use cases are:
+
+- Accessing your home directory in the guest
+- Accessing a USB device directory in the guest
+- Accessing an NFS mount from the host in the guest
+
+Bind mounts are not managed by the {pve} storage subsystem, so you
+cannot make snapshots or deal with quotas from inside the container. With
+unprivileged containers you might run into permission problems caused by the
+user mapping, and you cannot use ACLs.
+
+NOTE: The contents of bind mount points are not backed up when using `vzdump`.
+
+WARNING: For security reasons, bind mounts should only be established
+using source directories especially reserved for this purpose, e.g., a
+directory hierarchy under `/mnt/bindmounts`. Never bind mount system
+directories like `/`, `/var` or `/etc` into a container - this poses a
+great security risk.
+
+NOTE: The bind mount source path must not contain any symlinks.
+
+For example, to make the directory `/mnt/bindmounts/shared` accessible in the
+container with ID `100` under the path `/shared`, use a configuration line like
+`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
+Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
+achieve the same result.
+
+
+Device Mount Points
+^^^^^^^^^^^^^^^^^^^
+
+Device mount points allow mounting block devices of the host directly into
+the container. Similar to bind mounts, device mounts are not managed by
+{pve}'s storage subsystem, but the `quota` and `acl` options will be honored.
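+
+For example, to mount the (hypothetical) host block device `/dev/sdb1` into
+container `100` with quota support enabled:
+
+----
+# pct set 100 -mp0 /dev/sdb1,mp=/mnt/data,quota=1
+----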
+
+NOTE: Device mount points should only be used under special circumstances. In
+most cases a storage backed mount point offers the same performance and a lot
+more features.
+
+NOTE: The contents of device mount points are not backed up when using `vzdump`.
+
+
+[[pct_container_network]]
+Network
+~~~~~~~
+
+[thumbnail="screenshot/gui-create-ct-network.png"]