+You can restrict this large list by specifying the `section` you are
+interested in, for example basic `system` images:
+
+.List available system images
+----
+# pveam available --section system
+system alpine-3.12-default_20200823_amd64.tar.xz
+system alpine-3.13-default_20210419_amd64.tar.xz
+system alpine-3.14-default_20210623_amd64.tar.xz
+system archlinux-base_20210420-1_amd64.tar.gz
+system centos-7-default_20190926_amd64.tar.xz
+system centos-8-default_20201210_amd64.tar.xz
+system debian-9.0-standard_9.7-1_amd64.tar.gz
+system debian-10-standard_10.7-1_amd64.tar.gz
+system devuan-3.0-standard_3.0_amd64.tar.gz
+system fedora-33-default_20201115_amd64.tar.xz
+system fedora-34-default_20210427_amd64.tar.xz
+system gentoo-current-default_20200310_amd64.tar.xz
+system opensuse-15.2-default_20200824_amd64.tar.xz
+system ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
+system ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
+system ubuntu-20.04-standard_20.04-1_amd64.tar.gz
+system ubuntu-20.10-standard_20.10-1_amd64.tar.gz
+system ubuntu-21.04-standard_21.04-1_amd64.tar.gz
+----
+
+Before you can use such a template, you need to download it into one of your
+storages. If you're unsure which one to use, you can simply use the `local`
+named storage for that purpose. For clustered installations, it is preferable
+to use a shared storage so that all nodes can access those images.
+
+----
+# pveam download local debian-10-standard_10.7-1_amd64.tar.gz
+----
+
+You are now ready to create containers using that image, and you can list all
+downloaded images on storage `local` with:
+
+----
+# pveam list local
+local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz 219.95MB
+----
+
+TIP: You can also use the {pve} web interface to download, list and delete
+container templates.
+
+`pct` can then use such an image to create a new container, for example:
+
+----
+# pct create 999 local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz
+----
+
+The above command uses the full {pve} volume identifier, which includes the
+storage name. Most other {pve} commands can use such identifiers as well. For
+example, you can delete that image later with:
+
+----
+# pveam remove local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz
+----
+
+
+[[pct_settings]]
+Container Settings
+------------------
+
+[[pct_general]]
+General Settings
+~~~~~~~~~~~~~~~~
+
+[thumbnail="screenshot/gui-create-ct-general.png"]
+
+General settings of a container include
+
+* the *Node* : the physical server on which the container will run
+* the *CT ID*: a unique number in this {pve} installation used to identify your
+ container
+* *Hostname*: the hostname of the container
+* *Resource Pool*: a logical group of containers and VMs
+* *Password*: the root password of the container
+* *SSH Public Key*: a public key for connecting to the root account over SSH
+* *Unprivileged container*: this option allows you to choose at creation time
+ whether to create a privileged or unprivileged container.
+
+Unprivileged Containers
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Unprivileged containers use a new kernel feature called user namespaces.
+The root UID 0 inside the container is mapped to an unprivileged user outside
+the container. This means that most security issues (container escape, resource
+abuse, etc.) in these containers will affect a random unprivileged user, and
+would be a generic kernel security bug rather than an LXC issue. The LXC team
+thinks unprivileged containers are safe by design.
+
+This is the default option when creating a new container.
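+
+For example, a minimal sketch of explicitly requesting an unprivileged
+container on the command line (the CT ID `200` is an arbitrary placeholder;
+the template must already be downloaded to the `local` storage):
+
+----
+# pct create 200 local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz -unprivileged 1
+----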
+
+NOTE: If the container uses systemd as an init system, please be aware the
+systemd version running inside the container should be equal to or greater than
+220.
+
+
+Privileged Containers
+^^^^^^^^^^^^^^^^^^^^^
+
+Security in containers is achieved by using mandatory access control
+('AppArmor') restrictions, 'seccomp' filters and Linux kernel namespaces. The
+LXC team considers this kind of container as unsafe, and they will not consider
+new container escape exploits to be security issues worthy of a CVE and quick
+fix. That's why privileged containers should only be used in trusted
+environments.
+
+
+[[pct_cpu]]
+CPU
+~~~
+
+[thumbnail="screenshot/gui-create-ct-cpu.png"]
+
+You can restrict the number of visible CPUs inside the container using the
+`cores` option. This is implemented using the Linux 'cpuset' cgroup
+(**c**ontrol *group*).
+A special task inside `pvestatd` periodically tries to distribute running
+containers among the available CPUs.
+To view the assigned CPUs run the following command:
+
+----
+# pct cpusets
+ ---------------------
+ 102: 6 7
+ 105: 2 3 4 5
+ 108: 0 1
+ ---------------------
+----
+
+Containers use the host kernel directly. All tasks inside a container are
+handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
+**F**air **S**cheduler) scheduler by default, which has additional bandwidth
+control options.
+
+[horizontal]
+
+`cpulimit`: :: You can use this option to further limit assigned CPU time.
+Please note that this is a floating point number, so it is perfectly valid to
+assign two cores to a container, but restrict overall CPU consumption to half a
+core.
++
+----
+cores: 2
+cpulimit: 0.5
+----
+
+`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
+larger the number is, the more CPU time this container gets. The number is
+relative to the weights of all the other running containers. The default is
+1024. You can use this setting to prioritize some containers, as in the
+example below.
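+
+For example, to give a container twice the default weight relative to its
+peers, you could run (the CT ID `101` is a placeholder):
+
+----
+# pct set 101 -cpuunits 2048
+----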
+
+
+[[pct_memory]]
+Memory
+~~~~~~
+
+[thumbnail="screenshot/gui-create-ct-memory.png"]
+
+Container memory is controlled using the cgroup memory controller.
+
+[horizontal]
+
+`memory`: :: Limit overall memory usage. This corresponds to the
+`memory.limit_in_bytes` cgroup setting.
+
+`swap`: :: Allows the container to use additional swap memory from the host
+swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
+setting, which is set to the sum of both values (`memory + swap`); see the
+sketch below.
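+
+As an illustration, a container limited to 1 GiB of RAM plus 512 MiB of
+additional swap would carry the following lines in its configuration file
+(both values are given in megabytes; the numbers are only placeholders):
+
+----
+memory: 1024
+swap: 512
+----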
+
+
+[[pct_mount_points]]
+Mount Points
+~~~~~~~~~~~~
+
+[thumbnail="screenshot/gui-create-ct-root-disk.png"]
+
+The root mount point is configured with the `rootfs` property. You can
+configure up to 256 additional mount points. The corresponding options are
+called `mp0` to `mp255`. They can contain the following settings:
+
+include::pct-mountpoint-opts.adoc[]
+
+Currently there are three types of mount points: storage backed mount points,
+bind mounts, and device mounts.
+
+.Typical container `rootfs` configuration
+----
+rootfs: thin1:base-100-disk-1,size=8G
+----
+
+
+Storage Backed Mount Points
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Storage backed mount points are managed by the {pve} storage subsystem and come
+in three different flavors:
+
+- Image based: these are raw images containing a single ext4 formatted file
+ system.
+- ZFS subvolumes: these are technically bind mounts, but with managed storage,
+ and thus allow resizing and snapshotting.
+- Directories: passing `size=0` triggers a special case where instead of a raw
+ image a directory is created.
+
+NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
+mount point volumes will automatically allocate a volume of the specified size
+on the specified storage. For example, calling
+
+----
+pct set 100 -mp0 thin1:10,mp=/path/in/container
+----
+
+will allocate a 10GB volume on the storage `thin1` and replace the volume ID
+placeholder `10` with the allocated volume ID, and set up the mount point in
+the container at `/path/in/container`.
+
+
+Bind Mount Points
+^^^^^^^^^^^^^^^^^
+
+Bind mounts allow you to access arbitrary directories from your Proxmox VE host
+inside a container. Some potential use cases are:
+
+- Accessing your home directory in the guest
+- Accessing a USB device directory in the guest
+- Accessing an NFS mount from the host in the guest
+
+Bind mounts are not managed by the storage subsystem, so you cannot make
+snapshots or deal with quotas from inside the container. With unprivileged
+containers you might run into permission problems caused by the user mapping,
+and you cannot use ACLs.
+
+NOTE: The contents of bind mount points are not backed up when using `vzdump`.
+
+WARNING: For security reasons, bind mounts should only be established using
+source directories especially reserved for this purpose, e.g., a directory
+hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
+`/`, `/var` or `/etc` into a container - this poses a great security risk.
+
+NOTE: The bind mount source path must not contain any symlinks.
+
+For example, to make the directory `/mnt/bindmounts/shared` accessible in the
+container with ID `100` under the path `/shared`, add a configuration line such as:
+
+----
+mp0: /mnt/bindmounts/shared,mp=/shared
+----
+
+into `/etc/pve/lxc/100.conf`.
+
+Or alternatively use the `pct` tool:
+
+----
+pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared
+----
+
+to achieve the same result.
+
+
+Device Mount Points
+^^^^^^^^^^^^^^^^^^^
+
+Device mount points allow you to mount block devices of the host directly into
+the container. Similar to bind mounts, device mounts are not managed by
+{PVE}'s storage subsystem, but the `quota` and `acl` options will be honored.
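+
+For example, a sketch of mounting a host block device into container `100`
+(the device path `/dev/sdb1` and the target directory are placeholders for
+whatever exists on your system):
+
+----
+pct set 100 -mp0 /dev/sdb1,mp=/mnt/data
+----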
+
+NOTE: Device mount points should only be used under special circumstances. In
+most cases a storage backed mount point offers the same performance and a lot
+more features.
+
+NOTE: The contents of device mount points are not backed up when using
+`vzdump`.
+
+
+[[pct_container_network]]
+Network
+~~~~~~~
+
+[thumbnail="screenshot/gui-create-ct-network.png"]
+
+You can configure up to 10 network interfaces for a single container.
+The corresponding options are called `net0` to `net9`, and they can contain the
+following settings (an example follows the option list):
+
+include::pct-network-opts.adoc[]
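+
+For example, a sketch of attaching a DHCP-configured interface to container
+`100` (the bridge `vmbr0` is the usual default bridge name, but depends on
+your host network configuration):
+
+----
+# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=1
+----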
+
+
+[[pct_startup_and_shutdown]]
+Automatic Start and Shutdown of Containers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To automatically start a container when the host system boots, select the
+option 'Start at boot' in the 'Options' panel of the container in the web
+interface or run the following command:
+
+----
+# pct set CTID -onboot 1
+----
+
+.Start and Shutdown Order
+// use the screenshot from qemu - its the same
+[thumbnail="screenshot/gui-qemu-edit-start-order.png"]
+
+If you want to fine tune the boot order of your containers, you can use the
+following parameters (an example of setting them with `pct` follows the list):
+
+* *Start/Shutdown order*: Defines the start order priority. For example, set it
+ to 1 if you want the CT to be the first to be started. (We use the reverse
+ startup order for shutdown, so a container with a start order of 1 would be
+ the last to be shut down)
+* *Startup delay*: Defines the interval between this container start and
+ subsequent containers starts. For example, set it to 240 if you want to wait
+ 240 seconds before starting other containers.
+* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
+ for the container to be offline after issuing a shutdown command.
+ By default this value is set to 60, which means that {pve} will issue a
+ shutdown request, wait 60 seconds for the machine to be offline, and if the
+ machine is still online after that time, report that the shutdown action
+ failed.
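+
+For example, a sketch of applying all three parameters to container `100`
+(the values mirror the examples above and are only placeholders):
+
+----
+# pct set 100 -startup order=1,up=240,down=60
+----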
+
+Please note that containers without a Start/Shutdown order parameter will
+always start after those where the parameter is set. Furthermore, this
+parameter only applies to machines running locally on the same host, not
+cluster-wide.
+
+If you require a delay between the host boot and the booting of the first
+container, see the section on
+xref:first_guest_boot_delay[Proxmox VE Node Management].
+
+
+Hookscripts