+Containers use the host kernel directly. All tasks inside a container are
+handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
+**F**air **S**cheduler) by default, which has additional bandwidth control
+options.
+
+[horizontal]
+
+`cpulimit`: :: You can use this option to further limit assigned CPU time.
+Please note that this is a floating point number, so it is perfectly valid to
+assign two cores to a container, but restrict overall CPU consumption to half a
+core.
++
+----
+cores: 2
+cpulimit: 0.5
+----
+
+`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
+larger the number is, the more CPU time this container gets. The number is
+relative to the weights of all the other running containers. The default is
+1024. You can use this setting to prioritize some containers.
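++
+For example, a container meant to receive roughly twice the CPU time of a
+container with the default weight could be configured as follows (the values
+are illustrative):
++
+----
+cores: 2
+cpuunits: 2048
+----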
+
+
+[[pct_memory]]
+Memory
+~~~~~~
+
+[thumbnail="screenshot/gui-create-ct-memory.png"]
+
+Container memory is controlled using the cgroup memory controller.
+
+[horizontal]
+
+`memory`: :: Limit overall memory usage. This corresponds to the
+`memory.limit_in_bytes` cgroup setting.
+
+`swap`: :: Allows the container to use additional swap memory from the host
+swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
+setting, which is set to the sum of both values (`memory + swap`).
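++
+Both values are given in megabytes. For example, the following limits the
+container to 512 MiB of RAM plus 512 MiB of swap, so
+`memory.memsw.limit_in_bytes` is set to the equivalent of 1024 MiB:
++
+----
+memory: 512
+swap: 512
+----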
+
+
+[[pct_mount_points]]
+Mount Points
+~~~~~~~~~~~~
+
+[thumbnail="screenshot/gui-create-ct-root-disk.png"]
+
+The root mount point is configured with the `rootfs` property. You can
+configure up to 256 additional mount points. The corresponding options are
+called `mp0` to `mp255`. They can contain the following settings:
+
+include::pct-mountpoint-opts.adoc[]
+
+Currently there are three types of mount points: storage backed mount points,
+bind mounts, and device mounts.
+
+.Typical container `rootfs` configuration
+----
+rootfs: thin1:base-100-disk-1,size=8G
+----
+
+
+Storage Backed Mount Points
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Storage backed mount points are managed by the {pve} storage subsystem and come
+in three different flavors:
+
+- Image based: these are raw images containing a single ext4 formatted file
+ system.
+- ZFS subvolumes: these are technically bind mounts, but with managed storage,
+ and thus allow resizing and snapshotting.
+- Directories: passing `size=0` triggers a special case where a directory is
+  created instead of a raw image.
+
+NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
+mount point volumes will automatically allocate a volume of the specified size
+on the specified storage. For example, calling
+
+----
+pct set 100 -mp0 thin1:10,mp=/path/in/container
+----
+
+will allocate a 10GB volume on the storage `thin1`, replace the volume ID
+placeholder `10` with the allocated volume ID, and set up the mount point in
+the container at `/path/in/container`.
+
+
+Bind Mount Points
+^^^^^^^^^^^^^^^^^
+
+Bind mounts allow you to access arbitrary directories from your Proxmox VE host
+inside a container. Some potential use cases are:
+
+- Accessing your home directory in the guest
+- Accessing a USB device directory in the guest
+- Accessing an NFS mount from the host in the guest
+
+Bind mounts are not managed by the storage subsystem, so you cannot make
+snapshots or deal with quotas from inside the container. With unprivileged
+containers you might run into permission problems caused by the user mapping,
+and you cannot use ACLs.
+
+NOTE: The contents of bind mount points are not backed up when using `vzdump`.
+
+WARNING: For security reasons, bind mounts should only be established using
+source directories especially reserved for this purpose, e.g., a directory
+hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
+`/`, `/var` or `/etc` into a container - this poses a great security risk.
+
+NOTE: The bind mount source path must not contain any symlinks.
+
+For example, to make the directory `/mnt/bindmounts/shared` accessible in the
+container with ID `100` under the path `/shared`, use a configuration line like
+`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
+Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
+achieve the same result.
+
+
+Device Mount Points
+^^^^^^^^^^^^^^^^^^^
+
+Device mount points allow mounting block devices of the host directly into the
+container. Similar to bind mounts, device mounts are not managed by {PVE}'s
+storage subsystem, but the `quota` and `acl` options will be honored.
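+
+For example, to mount the host partition `/dev/sdb1` (a hypothetical device
+used here for illustration) at `/mnt/device` inside container `100`:
+
+----
+pct set 100 -mp0 /dev/sdb1,mp=/mnt/device
+----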
+
+NOTE: Device mount points should only be used under special circumstances. In
+most cases a storage backed mount point offers the same performance and a lot
+more features.
+
+NOTE: The contents of device mount points are not backed up when using
+`vzdump`.
+
+
+[[pct_container_network]]
+Network
+~~~~~~~
+
+[thumbnail="screenshot/gui-create-ct-network.png"]
+
+You can configure up to 10 network interfaces for a single container.
+The corresponding options are called `net0` to `net9`, and they can contain the
+following settings:
+
+include::pct-network-opts.adoc[]
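+
+For example, to attach an interface to a host bridge with DHCP configuration
+(assuming the commonly used default bridge name `vmbr0`):
+
+----
+pct set 100 -net0 name=eth0,bridge=vmbr0,ip=dhcp
+----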
+
+
+[[pct_startup_and_shutdown]]
+Automatic Start and Shutdown of Containers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To automatically start a container when the host system boots, select the
+option 'Start at boot' in the 'Options' panel of the container in the web
+interface or run the following command:
+
+----
+# pct set CTID -onboot 1
+----
+
+.Start and Shutdown Order
+// use the screenshot from qemu - its the same
+[thumbnail="screenshot/gui-qemu-edit-start-order.png"]
+
+If you want to fine tune the boot order of your containers, you can use the
+following parameters:
+
+* *Start/Shutdown order*: Defines the start order priority. For example, set it
+ to 1 if you want the CT to be the first to be started. (We use the reverse
+ startup order for shutdown, so a container with a start order of 1 would be
+ the last to be shut down)
+* *Startup delay*: Defines the interval between this container start and
+ subsequent containers starts. For example, set it to 240 if you want to wait
+ 240 seconds before starting other containers.
+* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
+ for the container to be offline after issuing a shutdown command.
+ By default this value is set to 60, which means that {pve} will issue a
+ shutdown request, wait 60s for the machine to be offline, and if the machine
+ is still online after 60s, report that the shutdown action failed.
+
+Please note that containers without a Start/Shutdown order parameter will
+always start after those where the parameter is set. Furthermore, this
+parameter can only be enforced between containers running on the same host,
+not cluster-wide.
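+
+For example, the following configures container `100` to start first at boot,
+wait 240 seconds before the next guest starts, and allow 60 seconds for
+shutdown:
+
+----
+pct set 100 -startup order=1,up=240,down=60
+----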
+
+Hookscripts
+~~~~~~~~~~~
+
+You can add a hook script to CTs with the config property `hookscript`.
+
+----
+# pct set 100 -hookscript local:snippets/hookscript.pl
+----
+
+It will be called during various phases of the guest's lifetime. For an
+example and documentation, see the example script under
+`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
+
+Security Considerations
+-----------------------
+
+Containers use the kernel of the host system. This exposes an attack surface
+for malicious users. In general, full virtual machines provide better
+isolation. This should be considered if containers are provided to unknown or
+untrusted people.
+
+To reduce the attack surface, LXC uses many security features like AppArmor,
+CGroups and kernel namespaces.
+
+AppArmor
+~~~~~~~~
+
+AppArmor profiles are used to restrict access to possibly dangerous actions.
+Some system calls, e.g. `mount`, are prohibited from execution.
+
+To trace AppArmor activity, use:
+
+----
+# dmesg | grep apparmor