* Container setup from host (network, DNS, storage, etc.)
-Security Considerations
------------------------
-
-Containers use the kernel of the host system. This creates a big attack
-surface for malicious users. This should be considered if containers
-are provided to untrustworthy people. In general, full
-virtual machines provide better isolation.
-
-However, LXC uses many security features like AppArmor, CGroups and kernel
-namespaces to reduce the attack surface.
-
-AppArmor profiles are used to restrict access to possibly dangerous actions.
-Some system calls, i.e. `mount`, are prohibited from execution.
-
-To trace AppArmor activity, use:
-
-----
-# dmesg | grep apparmor
-----
-
-Guest Operating System Configuration
-------------------------------------
-
-{pve} tries to detect the Linux distribution in the container, and modifies some
-files. Here is a short list of things done at container startup:
-
-set /etc/hostname:: to set the container name
-
-modify /etc/hosts:: to allow lookup of the local hostname
-
-network setup:: pass the complete network setup to the container
-
-configure DNS:: pass information about DNS servers
-
-adapt the init system:: for example, fix the number of spawned getty processes
-
-set the root password:: when creating a new container
-
-rewrite ssh_host_keys:: so that each container has unique keys
-
-randomize crontab:: so that cron does not start at the same time on all containers
-
-Changes made by {PVE} are enclosed by comment markers:
-
-----
-# --- BEGIN PVE ---
-<data>
-# --- END PVE ---
-----
-
-Those markers will be inserted at a reasonable location in the
-file. If such a section already exists, it will be updated in place
-and will not be moved.
-
-Modification of a file can be prevented by adding a `.pve-ignore.`
-file for it. For instance, if the file `/etc/.pve-ignore.hosts`
-exists then the `/etc/hosts` file will not be touched. This can be a
-simple empty file created via:
-
-----
-# touch /etc/.pve-ignore.hosts
-----
-
-Most modifications are OS dependent, so they differ between different
-distributions and versions. You can completely disable modifications
-by manually setting the `ostype` to `unmanaged`.
-
-OS type detection is done by testing for certain files inside the
-container:
-
-Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)
-
-Debian:: test /etc/debian_version
-
-Fedora:: test /etc/fedora-release
-
-RedHat or CentOS:: test /etc/redhat-release
-
-ArchLinux:: test /etc/arch-release
-
-Alpine:: test /etc/alpine-release
-
-Gentoo:: test /etc/gentoo-release
-
-NOTE: Container start fails if the configured `ostype` differs from the auto
-detected type.
-
[[pct_container_images]]
Container Images
----------------
Container images, sometimes also referred to as ``templates'' or
-``appliances'', are `tar` archives which contain everything to run a
-container. `pct` uses them to create a new container, for example:
+``appliances'', are `tar` archives which contain everything to run a container.
-----
-# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
-----
-
-{pve} itself provides a variety of basic templates for the most common
-Linux distributions. They can be downloaded using the GUI or the
-`pveam` (short for {pve} Appliance Manager) command line utility.
-Additionally, https://www.turnkeylinux.org/[TurnKey Linux]
-container templates are also available to download.
+{pve} itself provides a variety of basic templates for the most common Linux
+distributions. They can be downloaded using the GUI or the `pveam` (short for
+{pve} Appliance Manager) command line utility.
+Additionally, https://www.turnkeylinux.org/[TurnKey Linux] container templates
+are also available to download.
-The list of available templates is updated daily via cron. To trigger it manually:
+The list of available templates is updated daily through the 'pve-daily-update'
+timer. You can also trigger an update manually by executing:
----
# pveam update
system ubuntu-19.10-standard_19.10-1_amd64.tar.gz
----
-Before you can use such a template, you need to download them into one
-of your storages. You can simply use storage `local` for that
-purpose. For clustered installations, it is preferred to use a shared
-storage so that all nodes can access those images.
+Before you can use such a template, you need to download it into one of your
+storages. If you're unsure which one to use, you can simply use the storage
+named `local` for that purpose. For clustered installations, it is preferred to
+use a shared storage so that all nodes can access those images.
----
# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
----
-You are now ready to create containers using that image, and you can
-list all downloaded images on storage `local` with:
+You are now ready to create containers using that image, and you can list all
+downloaded images on storage `local` with:
----
# pveam list local
local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz 219.95MB
----
-The above command shows you the full {pve} volume identifiers. They include
-the storage name, and most other {pve} commands can use them. For
-example you can delete that image later with:
-
-----
-# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
-----
-
-[[pct_container_storage]]
-Container Storage
------------------
-
-The {pve} LXC container storage model is more flexible than traditional
-container storage models. A container can have multiple mount points. This makes
-it possible to use the best suited storage for each application.
-
-For example the root file system of the container can be on slow and cheap
-storage while the database can be on fast and distributed storage via a second
-mount point. See section <<pct_mount_points, Mount Points>> for further details.
-
-Any storage type supported by the {pve} storage library can be used. This means
-that containers can be stored on local (for example `lvm`, `zfs` or directory),
-shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
-Ceph. Advanced storage features like snapshots or clones can be used if the
-underlying storage supports them. The `vzdump` backup tool can use snapshots to
-provide consistent container backups.
-
-Furthermore, local devices or local directories can be mounted directly using
-'bind mounts'. This gives access to local resources inside a container with
-practically zero overhead. Bind mounts can be used as an easy way to share data
-between containers.
-
-
-FUSE Mounts
-~~~~~~~~~~~
-
-WARNING: Because of existing issues in the Linux kernel's freezer
-subsystem the usage of FUSE mounts inside a container is strongly
-advised against, as containers need to be frozen for suspend or
-snapshot mode backups.
-
-If FUSE mounts cannot be replaced by other mounting mechanisms or storage
-technologies, it is possible to establish the FUSE mount on the Proxmox host
-and use a bind mount point to make it accessible inside the container.
-
-
-Using Quotas Inside Containers
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Quotas allow to set limits inside a container for the amount of disk
-space that each user can use.
-
-NOTE: This only works on ext4 image based storage types and currently only works
-with privileged containers.
-
-Activating the `quota` option causes the following mount options to be
-used for a mount point:
-`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`
-
-This allows quotas to be used like on any other system. You
-can initialize the `/aquota.user` and `/aquota.group` files by running
-
-----
-# quotacheck -cmug /
-# quotaon /
-----
-
-and edit the quotas via the `edquota` command. Refer to the documentation
-of the distribution running inside the container for details.
-
-NOTE: You need to run the above commands for every mount point by passing
-the mount point's path instead of just `/`.
-
-
-Using ACLs Inside Containers
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The standard Posix **A**ccess **C**ontrol **L**ists are also available inside
-containers. ACLs allow you to set more detailed file ownership than the
-traditional user/group/others model.
-
+TIP: You can also use the {pve} web interface to download, list and delete
+container templates.
-Backup of Container mount points
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-To include a mount point in backups, enable the `backup` option for it in the
-container configuration. For an existing mount point `mp0`
+`pct` uses them to create a new container, for example:
----
-mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
+# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----
-add `backup=1` to enable it.
+The above command uses the full {pve} volume identifier. It includes the
+storage name, and most other {pve} commands can use it. For example you can
+delete that image later with:
----
-mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
+# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----
-NOTE: When creating a new mount point in the GUI, this option is enabled by
-default.
-
-To disable backups for a mount point, add `backup=0` in the way described above,
-or uncheck the *Backup* checkbox on the GUI.
-
-Replication of Containers mount points
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-By default, additional mount points are replicated when the Root Disk is
-replicated. If you want the {pve} storage replication mechanism to skip a mount
-point, you can set the *Skip replication* option for that mount point. +
-As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
-mount point to a different type of storage when the container has replication
-configured requires to have *Skip replication* enabled for that mount point.
[[pct_settings]]
Container Settings
General settings of a container include
* the *Node* : the physical server on which the container will run
-* the *CT ID*: a unique number in this {pve} installation used to identify your container
+* the *CT ID*: a unique number in this {pve} installation used to identify your
+ container
* *Hostname*: the hostname of the container
* *Resource Pool*: a logical group of containers and VMs
* *Password*: the root password of the container
* *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows to choose at creation time
-if you want to create a privileged or unprivileged container.
+ if you want to create a privileged or unprivileged container.
Unprivileged Containers
^^^^^^^^^^^^^^^^^^^^^^^
-Unprivileged containers use a new kernel feature called user namespaces. The
-root UID 0 inside the container is mapped to an unprivileged user outside the
-container. This means that most security issues (container escape, resource
+Unprivileged containers use a new kernel feature called user namespaces.
+The root UID 0 inside the container is mapped to an unprivileged user outside
+the container. This means that most security issues (container escape, resource
abuse, etc.) in these containers will affect a random unprivileged user, and
would be a generic kernel security bug rather than an LXC issue. The LXC team
thinks unprivileged containers are safe by design.
This is the default option when creating a new container.
-NOTE: If the container uses systemd as an init system, please be
-aware the systemd version running inside the container should be equal to
-or greater than 220.
+NOTE: If the container uses systemd as an init system, please be aware the
+systemd version running inside the container should be equal to or greater than
+220.
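+
+You can verify the systemd version from inside the container, for example with:
+
+----
+# systemctl --version
+----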
Privileged Containers
^^^^^^^^^^^^^^^^^^^^^
-Security in containers is achieved by using mandatory access control
-(AppArmor), SecComp filters and namespaces. The LXC team considers this kind of
-container as unsafe, and they will not consider new container escape exploits
-to be security issues worthy of a CVE and quick fix. That's why privileged
-containers should only be used in trusted environments.
-
-WARNING: Although it is not recommended, AppArmor can be disabled for a
-container. This brings security risks with it. Some syscalls can lead to
-privilege escalation when executed within a container if the system is
-misconfigured or if a LXC or Linux Kernel vulnerability exists.
-
-To disable AppArmor for a container, add the following line to the container
-configuration file located at `/etc/pve/lxc/CTID.conf`:
-
-----
-lxc.apparmor_profile = unconfined
-----
-
-Please note that this is not recommended for production use.
-
+Security in containers is achieved by using mandatory access control 'AppArmor'
+restrictions, 'seccomp' filters and Linux kernel namespaces. The LXC team
+considers this kind of container as unsafe, and they will not consider new
+container escape exploits to be security issues worthy of a CVE and quick fix.
+That's why privileged containers should only be used in trusted environments.
[[pct_cpu]]
You can restrict the number of visible CPUs inside the container using the
`cores` option. This is implemented using the Linux 'cpuset' cgroup
-(**c**ontrol *group*). A special task inside `pvestatd` tries to distribute
-running containers among available CPUs. To view the assigned CPUs run
-the following command:
+(**c**ontrol *group*).
+A special task inside `pvestatd` periodically tries to distribute running
+containers among available CPUs.
+To view the assigned CPUs run the following command:
----
# pct cpusets
[horizontal]
-`cpulimit`: :: You can use this option to further limit assigned CPU
-time. Please note that this is a floating point number, so it is
-perfectly valid to assign two cores to a container, but restrict
-overall CPU consumption to half a core.
+`cpulimit`: :: You can use this option to further limit assigned CPU time.
+Please note that this is a floating point number, so it is perfectly valid to
+assign two cores to a container, but restrict overall CPU consumption to half a
+core.
+
----
cores: 2
cpulimit: 0.5
----
-`cpuunits`: :: This is a relative weight passed to the kernel
-scheduler. The larger the number is, the more CPU time this container
-gets. Number is relative to the weights of all the other running
-containers. The default is 1024. You can use this setting to
-prioritize some containers.
+`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
+larger the number is, the more CPU time this container gets. The number is
+relative to the weights of all the other running containers. The default is
+1024. You can use this setting to prioritize some containers.
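+
+For example, to give a container double the default scheduling weight, the
+configuration could contain (the value is illustrative):
+
+----
+cpuunits: 2048
+----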
[[pct_memory]]
[horizontal]
-`memory`: :: Limit overall memory usage. This corresponds
-to the `memory.limit_in_bytes` cgroup setting.
+`memory`: :: Limit overall memory usage. This corresponds to the
+`memory.limit_in_bytes` cgroup setting.
-`swap`: :: Allows the container to use additional swap memory from the
-host swap space. This corresponds to the `memory.memsw.limit_in_bytes`
-cgroup setting, which is set to the sum of both value (`memory +
-swap`).
+`swap`: :: Allows the container to use additional swap memory from the host
+swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
+setting, which is set to the sum of both values (`memory + swap`).
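+
+For example, a container limited to 512 MiB of RAM plus 512 MiB of swap could
+be configured as follows (values are illustrative):
+
+----
+memory: 512
+swap: 512
+----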
[[pct_mount_points]]
[thumbnail="screenshot/gui-create-ct-root-disk.png"]
The root mount point is configured with the `rootfs` property. You can
-configure up to 256 additional mount points. The corresponding options
-are called `mp0` to `mp255`. They can contain the following settings:
+configure up to 256 additional mount points. The corresponding options are
+called `mp0` to `mp255`. They can contain the following settings:
include::pct-mountpoint-opts.adoc[]
-Currently there are three types of mount points: storage backed
-mount points, bind mounts, and device mounts.
+Currently there are three types of mount points: storage backed mount points,
+bind mounts, and device mounts.
.Typical container `rootfs` configuration
----
NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
mount point volumes will automatically allocate a volume of the specified size
-on the specified storage. E.g., calling
-`pct set 100 -mp0 thin1:10,mp=/path/in/container` will allocate a 10GB volume
-on the storage `thin1` and replace the volume ID place holder `10` with the
-allocated volume ID.
+on the specified storage. For example, calling
+
+----
+pct set 100 -mp0 thin1:10,mp=/path/in/container
+----
+
+will allocate a 10GB volume on the storage `thin1`, replace the volume ID
+placeholder `10` with the allocated volume ID, and set up the mount point in
+the container at `/path/in/container`.
Bind Mount Points
NOTE: The contents of bind mount points are not backed up when using `vzdump`.
-WARNING: For security reasons, bind mounts should only be established
-using source directories especially reserved for this purpose, e.g., a
-directory hierarchy under `/mnt/bindmounts`. Never bind mount system
-directories like `/`, `/var` or `/etc` into a container - this poses a
-great security risk.
+WARNING: For security reasons, bind mounts should only be established using
+source directories especially reserved for this purpose, e.g., a directory
+hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
+`/`, `/var` or `/etc` into a container - this poses a great security risk.
NOTE: The bind mount source path must not contain any symlinks.
most cases a storage backed mount point offers the same performance and a lot
more features.
-NOTE: The contents of device mount points are not backed up when using `vzdump`.
+NOTE: The contents of device mount points are not backed up when using
+`vzdump`.
[[pct_container_network]]
[thumbnail="screenshot/gui-create-ct-network.png"]
-You can configure up to 10 network interfaces for a single
-container. The corresponding options are called `net0` to `net9`, and
-they can contain the following setting:
+You can configure up to 10 network interfaces for a single container.
+The corresponding options are called `net0` to `net9`, and they can contain the
+following setting:
include::pct-network-opts.adoc[]
// use the screenshot from qemu - its the same
[thumbnail="screenshot/gui-qemu-edit-start-order.png"]
-If you want to fine tune the boot order of your containers, you can use the following
-parameters:
+If you want to fine tune the boot order of your containers, you can use the
+following parameters:
-* *Start/Shutdown order*: Defines the start order priority. For example, set it to 1 if
-you want the CT to be the first to be started. (We use the reverse startup
-order for shutdown, so a container with a start order of 1 would be the last to
-be shut down)
-* *Startup delay*: Defines the interval between this container start and subsequent
-containers starts. For example, set it to 240 if you want to wait 240 seconds before starting
-other containers.
+* *Start/Shutdown order*: Defines the start order priority. For example, set it
+ to 1 if you want the CT to be the first to be started. (We use the reverse
+ startup order for shutdown, so a container with a start order of 1 would be
+ the last to be shut down)
+* *Startup delay*: Defines the interval between this container start and
+ subsequent containers starts. For example, set it to 240 if you want to wait
+ 240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
-for the container to be offline after issuing a shutdown command.
-By default this value is set to 60, which means that {pve} will issue a
-shutdown request, wait 60s for the machine to be offline, and if after 60s
-the machine is still online will notify that the shutdown action failed.
+ for the container to be offline after issuing a shutdown command.
+ By default this value is set to 60, which means that {pve} will issue a
+ shutdown request, wait 60s for the machine to be offline, and if after 60s
+ the machine is still online will notify that the shutdown action failed.
-Please note that containers without a Start/Shutdown order parameter will always
-start after those where the parameter is set, and this parameter only
+Please note that containers without a Start/Shutdown order parameter will
+always start after those where the parameter is set, and this parameter only
makes sense between the machines running locally on a host, and not
cluster-wide.
# pct set 100 -hookscript local:snippets/hookscript.pl
----
-It will be called during various phases of the guests lifetime.
-For an example and documentation see the example script under
+It will be called during various phases of the guest's lifetime. For an example
+and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
+Security Considerations
+-----------------------
+
+Containers use the kernel of the host system. This exposes an attack surface
+for malicious users. In general, full virtual machines provide better
+isolation. This should be considered if containers are provided to unknown or
+untrusted people.
+
+To reduce the attack surface, LXC uses many security features like AppArmor,
+CGroups and kernel namespaces.
+
+AppArmor
+~~~~~~~~
+
+AppArmor profiles are used to restrict access to possibly dangerous actions.
+Some system calls, e.g. `mount`, are prohibited from execution.
+
+To trace AppArmor activity, use:
+
+----
+# dmesg | grep apparmor
+----
+
+Although it is not recommended, AppArmor can be disabled for a container. This
+brings security risks with it. Some syscalls can lead to privilege escalation
+when executed within a container if the system is misconfigured or if a LXC or
+Linux Kernel vulnerability exists.
+
+To disable AppArmor for a container, add the following line to the container
+configuration file located at `/etc/pve/lxc/CTID.conf`:
+
+----
+lxc.apparmor_profile = unconfined
+----
+
+WARNING: This is not recommended for production use.
+
+
+// TODO: describe cgroups + seccomp a bit more.
+// TODO: pve-lxc-syscalld
+
+
+Guest Operating System Configuration
+------------------------------------
+
+{pve} tries to detect the Linux distribution in the container, and modifies
+some files. Here is a short list of things done at container startup:
+
+set /etc/hostname:: to set the container name
+
+modify /etc/hosts:: to allow lookup of the local hostname
+
+network setup:: pass the complete network setup to the container
+
+configure DNS:: pass information about DNS servers
+
+adapt the init system:: for example, fix the number of spawned getty processes
+
+set the root password:: when creating a new container
+
+rewrite ssh_host_keys:: so that each container has unique keys
+
+randomize crontab:: so that cron does not start at the same time on all containers
+
+Changes made by {PVE} are enclosed by comment markers:
+
+----
+# --- BEGIN PVE ---
+<data>
+# --- END PVE ---
+----
+
+Those markers will be inserted at a reasonable location in the file. If such a
+section already exists, it will be updated in place and will not be moved.
+
+Modification of a file can be prevented by adding a `.pve-ignore.` file for it.
+For instance, if the file `/etc/.pve-ignore.hosts` exists then the `/etc/hosts`
+file will not be touched. This can be a simple empty file created via:
+
+----
+# touch /etc/.pve-ignore.hosts
+----
+
+Most modifications are OS dependent, so they differ between different
+distributions and versions. You can completely disable modifications by
+manually setting the `ostype` to `unmanaged`.
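+
+For example, assuming a container with ID 100, all modifications could be
+disabled with:
+
+----
+# pct set 100 -ostype unmanaged
+----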
+
+OS type detection is done by testing for certain files inside the
+container. {pve} first checks the `/etc/os-release` file
+footnote:[/etc/os-release replaces the multitude of per-distribution
+release files https://manpages.debian.org/stable/systemd/os-release.5.en.html].
+If that file is not present, or it does not contain a clearly recognizable
+distribution identifier, the following distribution-specific release files are
+checked.
+
+Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)
+
+Debian:: test /etc/debian_version
+
+Fedora:: test /etc/fedora-release
+
+RedHat or CentOS:: test /etc/redhat-release
+
+ArchLinux:: test /etc/arch-release
+
+Alpine:: test /etc/alpine-release
+
+Gentoo:: test /etc/gentoo-release
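+
+As an illustration of the first check, the `/etc/os-release` file of a Debian
+10 container contains fields similar to:
+
+----
+PRETTY_NAME="Debian GNU/Linux 10 (buster)"
+ID=debian
+VERSION_ID="10"
+----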
+
+NOTE: Container start fails if the configured `ostype` differs from the auto
+detected type.
+
+
+[[pct_container_storage]]
+Container Storage
+-----------------
+
+The {pve} LXC container storage model is more flexible than traditional
+container storage models. A container can have multiple mount points. This
+makes it possible to use the best suited storage for each application.
+
+For example the root file system of the container can be on slow and cheap
+storage while the database can be on fast and distributed storage via a second
+mount point. See section <<pct_mount_points, Mount Points>> for further
+details.
+
+Any storage type supported by the {pve} storage library can be used. This means
+that containers can be stored on local (for example `lvm`, `zfs` or directory),
+shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
+Ceph. Advanced storage features like snapshots or clones can be used if the
+underlying storage supports them. The `vzdump` backup tool can use snapshots to
+provide consistent container backups.
+
+Furthermore, local devices or local directories can be mounted directly using
+'bind mounts'. This gives access to local resources inside a container with
+practically zero overhead. Bind mounts can be used as an easy way to share data
+between containers.
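+
+For example, a shared host directory could appear in the container
+configuration as follows (paths are illustrative):
+
+----
+mp0: /mnt/bindmounts/shared,mp=/shared
+----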
+
+
+FUSE Mounts
+~~~~~~~~~~~
+
+WARNING: Because of existing issues in the Linux kernel's freezer subsystem the
+usage of FUSE mounts inside a container is strongly advised against, as
+containers need to be frozen for suspend or snapshot mode backups.
+
+If FUSE mounts cannot be replaced by other mounting mechanisms or storage
+technologies, it is possible to establish the FUSE mount on the Proxmox host
+and use a bind mount point to make it accessible inside the container.
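+
+For example, assuming a FUSE file system is already mounted on the host at
+`/mnt/fuse/data` (an illustrative path), it could be passed into container 100
+with:
+
+----
+# pct set 100 -mp0 /mnt/fuse/data,mp=/mnt/data
+----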
+
+
+Using Quotas Inside Containers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Quotas allow you to set limits inside a container on the amount of disk space
+that each user can use.
+
+NOTE: This only works on ext4 image based storage types and currently only
+with privileged containers.
+
+Activating the `quota` option causes the following mount options to be used for
+a mount point:
+`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`
+
+This allows quotas to be used like on any other system. You can initialize the
+`/aquota.user` and `/aquota.group` files by running:
+
+----
+# quotacheck -cmug /
+# quotaon /
+----
+
+Then edit the quotas using the `edquota` command. Refer to the documentation of
+the distribution running inside the container for details.
+
+NOTE: You need to run the above commands for every mount point by passing the
+mount point's path instead of just `/`.
+
+
+Using ACLs Inside Containers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside
+containers. ACLs allow you to set more detailed file ownership than the
+traditional user/group/others model.
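+
+Inside the container, ACLs can be managed with the standard tools, for example
+(user and path are illustrative):
+
+----
+# setfacl -m u:www-data:rwx /srv/shared
+# getfacl /srv/shared
+----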
+
+
+Backup of Container mount points
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To include a mount point in backups, enable the `backup` option for it in the
+container configuration. For an existing mount point `mp0`
+
+----
+mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
+----
+
+add `backup=1` to enable it.
+
+----
+mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
+----
+
+NOTE: When creating a new mount point in the GUI, this option is enabled by
+default.
+
+To disable backups for a mount point, add `backup=0` in the way described
+above, or uncheck the *Backup* checkbox on the GUI.
+
+Replication of Container mount points
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+By default, additional mount points are replicated when the Root Disk is
+replicated. If you want the {pve} storage replication mechanism to skip a mount
+point, you can set the *Skip replication* option for that mount point.
+As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
+mount point to a different type of storage when the container has replication
+configured requires to have *Skip replication* enabled for that mount point.
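+
+On the command line, *Skip replication* corresponds to the mount point option
+`replicate=0`, for example (storage and path are illustrative):
+
+----
+# pct set 100 -mp0 local-zfs:subvol-100-disk-1,mp=/data,replicate=0
+----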
+
+
Backup and Restore
------------------
Container Backup
~~~~~~~~~~~~~~~~
-It is possible to use the `vzdump` tool for container backup. Please
-refer to the `vzdump` manual page for details.
+It is possible to use the `vzdump` tool for container backup. Please refer to
+the `vzdump` manual page for details.
Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Restoring container backups made with `vzdump` is possible using the
-`pct restore` command. By default, `pct restore` will attempt to restore as much
-of the backed up container configuration as possible. It is possible to override
-the backed up configuration by manually setting container options on the command
-line (see the `pct` manual page for details).
+Restoring container backups made with `vzdump` is possible using the `pct
+restore` command. By default, `pct restore` will attempt to restore as much of
+the backed up container configuration as possible. It is possible to override
+the backed up configuration by manually setting container options on the
+command line (see the `pct` manual page for details).
NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.
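
For example (the archive name is illustrative):

----
# pvesm extractconfig local:backup/vzdump-lxc-100-2019_01_01-00_00_00.tar.gz
----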
``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^
-If neither the `rootfs` parameter nor any of the optional `mpX` parameters
-are explicitly set, the mount point configuration from the backed up
-configuration file is restored using the following steps:
+If neither the `rootfs` parameter nor any of the optional `mpX` parameters are
+explicitly set, the mount point configuration from the backed up configuration
+file is restored using the following steps:
. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
-`storage` parameter, or default local storage if unset)
+ `storage` parameter, or default local storage if unset)
. Extract files from backup archive
-. Add bind and device mount points to restored configuration (limited to root user)
+. Add bind and device mount points to restored configuration (limited to root
+ user)
NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
-configuration options contained in the backup archive, and instead only
-uses the options explicitly provided as parameters.
+configuration options contained in the backup archive, and instead only uses
+the options explicitly provided as parameters.
-This mode allows flexible configuration of mount point settings at restore time,
-for example:
+This mode allows flexible configuration of mount point settings at restore
+time, for example:
* Set target storages, volume sizes and other options for each mount point
-individually
+ individually
* Redistribute backed up files according to new mount point scheme
* Restore to device and/or bind mount points (limited to root user)
CLI Usage Examples
~~~~~~~~~~~~~~~~~~
-Create a container based on a Debian template (provided you have
-already downloaded the template via the web interface)
+Create a container based on a Debian template (provided you have already
+downloaded the template via the web interface)
----
# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
# pct config 100
----
-Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
-set the address and gateway, while it's running
+Add a network interface called `eth0`, bridged to the host bridge `vmbr0`, set
+the address and gateway, while it's running
----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
# lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
----
-This command will attempt to start the container in foreground mode,
-to stop the container run `pct shutdown ID` or `pct stop ID` in a
-second terminal.
+This command will attempt to start the container in foreground mode; to stop
+the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.
The collected debug log is written to `/tmp/lxc-ID.log`.
mount points defined, the migration will copy the content over the network to
the target host if the same storage is defined there.
-If you want to migrate online Containers, the only way is to use
-restart migration. This can be initiated with the -restart flag and the optional
--timeout parameter.
+Running containers cannot be live-migrated due to technical limitations. You
+can do a restart migration, which shuts down, moves and then starts a
+container again on the target node. As containers are very lightweight, this
+normally results in a downtime of only a few hundred milliseconds.
+
+A restart migration can be done through the web interface or by using the
+`--restart` flag with the `pct migrate` command.
-A restart migration will shut down the Container and kill it after the specified
-timeout (the default is 180 seconds). Then it will migrate the Container
-like an offline migration and when finished, it starts the Container on the
-target node.
+A restart migration will shut down the container and kill it after the
+specified timeout (the default is 180 seconds). Then it will migrate the
+container like an offline migration and, when finished, start the container
+on the target node.
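On the command line, a restart migration could be initiated like this (the
CTID `100`, the node name `pve-node2` and the timeout value are examples):

----
# pct migrate 100 pve-node2 --restart --timeout 120
----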
[[pct_configuration]]
Configuration
-------------
-The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
-where `<CTID>` is the numeric ID of the given container. Like all
-other files stored inside `/etc/pve/`, they get automatically
-replicated to all other cluster nodes.
+The `/etc/pve/lxc/<CTID>.conf` file stores container configuration, where
+`<CTID>` is the numeric ID of the given container. Like all other files stored
+inside `/etc/pve/`, they get automatically replicated to all other cluster
+nodes.
NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.
rootfs: local:107/vm-107-disk-1.raw,size=7G
----
-The configuration files are simple text files. You can edit them
-using a normal text editor (`vi`, `nano`, etc). This is sometimes
-useful to do small corrections, but keep in mind that you need to
-restart the container to apply such changes.
+The configuration files are simple text files. You can edit them using a
+normal text editor, for example, `vi` or `nano`.
+This is sometimes useful for making small corrections, but keep in mind that
+you need to restart the container to apply such changes.
-For that reason, it is usually better to use the `pct` command to
-generate and modify those files, or do the whole thing using the GUI.
-Our toolkit is smart enough to instantaneously apply most changes to
-running containers. This feature is called "hot plug", and there is no
-need to restart the container in that case.
+For that reason, it is usually better to use the `pct` command to generate and
+modify those files, or do the whole thing using the GUI.
+Our toolkit is smart enough to instantaneously apply most changes to running
+containers. This feature is called ``hot plug'', and there is no need to restart
+the container in that case.
-In cases where a change cannot be hot plugged, it will be registered
-as a pending change (shown in red color in the GUI). They will only
-be applied after rebooting the container.
+In cases where a change cannot be hot-plugged, it will be registered as a
+pending change (shown in red color in the GUI).
+They will only be applied after rebooting the container.
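For example, a memory change can usually be hot-plugged into a running
container (the CTID and value are examples):

----
# pct set 100 --memory 1024
----

If the change cannot be applied live, it shows up as a pending change in the
GUI and takes effect on the next reboot of the container.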
File Format
~~~~~~~~~~~
-The container configuration file uses a simple colon separated
-key/value format. Each line has the following format:
+The container configuration file uses a simple colon separated key/value
+format. Each line has the following format:
-----
# this is a comment
OPTION: value
-----
-Blank lines in those files are ignored, and lines starting with a `#`
-character are treated as comments and are also ignored.
+Blank lines in those files are ignored, and lines starting with a `#` character
+are treated as comments and are also ignored.
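The parsing rules above can be illustrated with a short sketch. This is Python
for illustration only — it is not the parser {pve} actually uses, and it
ignores snapshot sections:

```python
def parse_ct_config(text):
    """Parse the basic colon-separated key/value format,
    skipping blank lines and '#' comment lines."""
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # blank lines and comments are ignored
        key, _, value = line.partition(': ')
        options[key] = value
    return options

sample = """\
# this is a comment
ostype: debian
memory: 512
"""
print(parse_ct_config(sample))  # {'ostype': 'debian', 'memory': '512'}
```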
-It is possible to add low-level, LXC style configuration directly, for
-example:
+It is possible to add low-level, LXC style configuration directly, for example:
----
lxc.init_cmd: /sbin/my_own_init
Snapshots
~~~~~~~~~
-When you create a snapshot, `pct` stores the configuration at snapshot
-time into a separate snapshot section within the same configuration
-file. For example, after creating a snapshot called ``testsnapshot'',
-your configuration file will look like this:
+When you create a snapshot, `pct` stores the configuration at snapshot time
+into a separate snapshot section within the same configuration file. For
+example, after creating a snapshot called ``testsnapshot'', your configuration
+file will look like this:
.Container configuration with snapshot
----
...
----
-There are a few snapshot related properties like `parent` and
-`snaptime`. The `parent` property is used to store the parent/child
-relationship between snapshots. `snaptime` is the snapshot creation
-time stamp (Unix epoch).
+There are a few snapshot related properties like `parent` and `snaptime`. The
+`parent` property is used to store the parent/child relationship between
+snapshots. `snaptime` is the snapshot creation time stamp (Unix epoch).
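A snapshot section like the one above can be created and listed from the
command line, for example (the CTID is an example):

----
# pct snapshot 100 testsnapshot
# pct listsnapshot 100
----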
[[pct_options]]
Locks
-----
-Container migrations, snapshots and backups (`vzdump`) set a lock to
-prevent incompatible concurrent actions on the affected container. Sometimes
-you need to remove such a lock manually (e.g., after a power failure).
+Container migrations, snapshots and backups (`vzdump`) set a lock to prevent
+incompatible concurrent actions on the affected container. Sometimes you need
+to remove such a lock manually (e.g., after a power failure).
----
# pct unlock <CTID>
----
-CAUTION: Only do this if you are sure the action which set the lock is
-no longer running.
+CAUTION: Only do this if you are sure the action which set the lock is no
+longer running.
ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]
-
-
-
-
-
-
-