ifdef::manvolnum[]
pct(1)
======
:pve-toplevel:

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE

endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Linux Container
endif::wiki[]

Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard Posix **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.
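
For instance, inside a running container you could grant a single extra user
write access to one file, something the plain user/group/others bits cannot
express (the user name and path below are made-up placeholders):

 setfacl -m u:backupuser:rw /srv/data/report.log
 getfacl /srv/data/report.log
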
[[pct_settings]]
Container Settings
------------------
[[pct_cpu]]
CPU
~~~

[thumbnail="gui-create-ct-cpu.png"]

You can restrict the number of visible CPUs inside the container using
the `cores` option. This is implemented using the Linux 'cpuset'
cgroup (**c**ontrol *group*). A special task inside `pvestatd` tries
to distribute running containers among available CPUs.

Containers use the host kernel directly, so all tasks inside a
container are handled by the host CPU scheduler. {pve} uses the Linux
'CFS' (**C**ompletely **F**air **S**cheduler) scheduler by default,
which has additional bandwidth control options.
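
For example, a container could be limited to two visible cores from the
command line as follows (a minimal sketch; `<ctid>` stands for an existing
container ID):

 pct set <ctid> -cores 2
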
[horizontal]
`cpulimit`: :: You can use this option to further limit assigned CPU
time. Please note that this is a floating point number, so it is
perfectly valid to assign two cores to a container, but restrict
overall CPU consumption to half a core.
+
----
cores: 2
cpulimit: 0.5
----

`cpuunits`: :: This is a relative weight passed to the kernel
scheduler. The larger the number is, the more CPU time this container
gets. The number is relative to the weights of all the other running
containers. The default is 1024. You can use this setting to
prioritize some containers.
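
To give a container twice the default relative share, a hedged sketch (the
value is only an example) would be:

 pct set <ctid> -cpuunits 2048
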
Memory
~~~~~~
+[thumbnail="gui-create-ct-memory.png"]
+
Container memory is controlled using the cgroup memory controller.
[horizontal]
`memory`: :: Limit overall memory usage. This corresponds
to the `memory.limit_in_bytes` cgroup setting.

`swap`: :: Allows the container to use additional swap memory from the
host swap space. This corresponds to the `memory.memsw.limit_in_bytes`
cgroup setting, which is set to the sum of both values (`memory +
swap`).
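
For instance, a container limited to 512 MiB of RAM plus 512 MiB of swap
would carry lines like these in its configuration (the values are examples
only; both options take megabytes):

----
memory: 512
swap: 512
----
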
Mount Points
~~~~~~~~~~~~
+[thumbnail="gui-create-ct-root-disk.png"]
+
The root mount point is configured with the `rootfs` property, and you can
configure up to 10 additional mount points. The corresponding options
are called `mp0` to `mp9`, and they can contain the following settings:

include::pct-mountpoint-opts.adoc[]
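
As a sketch, a bind mount that exposes a host directory inside the container
could be added like this (the host path and target path are placeholders):

 pct set <ctid> -mp0 /mnt/bindmounts/shared,mp=/shared
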
[[pct_container_network]]
Network
~~~~~~~

[thumbnail="gui-create-ct-network.png"]

You can configure up to 10 network interfaces for a single
container. The corresponding options are called `net0` to `net9`, and
they can contain the following settings:

include::pct-network-opts.adoc[]
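
For example, an interface using DHCP on the default bridge could be added as
follows (a sketch; `vmbr0` assumes a standard bridge setup):

 pct set <ctid> -net0 name=eth0,bridge=vmbr0,ip=dhcp
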
[[pct_startup_and_shutdown]]
Automatic Start and Shutdown of Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After creating your containers, you probably want them to start automatically
when the host system boots. For this you need to select the option 'Start at
boot' from the 'Options' tab of your container in the web interface, or set it
with the following command:

 pct set <ctid> -onboot 1

.Start and Shutdown Order
// use the screenshot from qemu - it's the same
[thumbnail="gui-qemu-edit-start-order.png"]

If you want to fine-tune the boot order of your containers, you can use the
following parameters:

* *Start/Shutdown order*: Defines the start order priority. For example, set it
to 1 if you want the CT to be the first to be started. (We use the reverse
startup order for shutdown, so a container with a start order of 1 would be the
last to be shut down.)
* *Startup delay*: Defines the interval between this container start and the
start of subsequent containers. For example, set it to 240 if you want to wait
240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the container to be offline after issuing a shutdown command. By default
this value is set to 60, which means that {pve} will issue a shutdown request,
wait 60 seconds for the machine to be offline, and report that the shutdown
action failed if the machine is still online after that time.

Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set, and this parameter only
makes sense between machines running locally on a host, not cluster-wide. All
three values can also be set at once through the `startup` option, as sketched
below.
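
A hedged sketch of setting all three values from the command line (the numbers
are arbitrary examples; `order`, `up`, and `down` correspond to the three
parameters above):

 pct set <ctid> -startup order=1,up=240,down=60
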
Backup and Restore
------------------