X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=pct.adoc;h=14e2d3781b577ab3f3b26ff5966baa1cd24f9b13;hp=82f257b69d27c090da20be43b060411682b8613c;hb=9baca183555d08edb0a27b4a879d555e82f03ec2;hpb=d6ed3622fe1e4622ca9eeb1509882147999a7429
diff --git a/pct.adoc b/pct.adoc
index 82f257b..14e2d37 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -59,7 +59,7 @@ Our primary goal is to offer an environment as one would get from a VM, but
 without the additional overhead. We call this "System Containers".
 
-NOTE: If you want to run micro-containers (with docker, rct, ...), it
+NOTE: If you want to run micro-containers (with docker, rkt, ...), it
 is best to run them inside a VM.
 
@@ -101,11 +101,13 @@ unprivileged containers are safe by design.
 Configuration
 -------------
 
-The '/etc/pve/lxc/<CTID>.conf' files stores container configuration,
-where '<CTID>' is the numeric ID of the given container. Note that
-CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
-unique cluster wide. Files are stored inside '/etc/pve/', so they get
-automatically replicated to all other cluster nodes.
+The '/etc/pve/lxc/<CTID>.conf' file stores container configuration,
+where '<CTID>' is the numeric ID of the given container. Like all
+other files stored inside '/etc/pve/', they get automatically
+replicated to all other cluster nodes.
+
+NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
+unique cluster wide.
 
 .Example Container Configuration
 ----
@@ -203,9 +205,28 @@ rewrite ssh_host_keys:: so that each container has unique keys
 randomize crontab:: so that cron does not start at the same time on all
 containers
 
-The above task depends on the OS type, so the implementation is different
-for each OS type. You can also disable any modifications by manually
-setting the 'ostype' to 'unmanaged'.
+Changes made by {PVE} are enclosed by comment markers:
+
+----
+# --- BEGIN PVE ---
+
+# --- END PVE ---
+----
+
+Those markers will be inserted at a reasonable location in the
+file. If such a section already exists, it will be updated in place
+and will not be moved.
+
+Modification of a file can be prevented by adding a `.pve-ignore.`
+file for it. For instance, if the file `/etc/.pve-ignore.hosts`
+exists then the `/etc/hosts` file will not be touched. This can be a
+simple empty file created via:
+
+ # touch /etc/.pve-ignore.hosts
+
+Most modifications are OS dependent, so they differ between different
+distributions and versions. You can completely disable modifications
+by manually setting the 'ostype' to 'unmanaged'.
 
 OS type detection is done by testing for certain files inside the
 container:
@@ -222,9 +243,16 @@ ArchLinux:: test /etc/arch-release
 
 Alpine:: test /etc/alpine-release
 
+Gentoo:: test /etc/gentoo-release
+
 NOTE: Container start fails if the configured 'ostype' differs
 from the auto detected type.
 
+Options
+~~~~~~~
+
+include::pct.conf.5-opts.adoc[]
+
 Container Images
 ----------------
 
@@ -323,6 +351,201 @@ local storage inside containers with zero overhead. Such bind
 mounts also provide an easy way to share data between different
 containers.
 
+Mount Points
+~~~~~~~~~~~~
+
+The root mount point is configured with the `rootfs` property, and you can
+configure up to 10 additional mount points. The corresponding options
+are called `mp0` to `mp9`, and they can contain the following settings:
+
+include::pct-mountpoint-opts.adoc[]
+
+Currently there are basically three types of mount points: storage backed
+mount points, bind mounts and device mounts.
+
+.Typical Container `rootfs` configuration
+----
+rootfs: thin1:base-100-disk-1,size=8G
+----
+
+
+Storage backed mount points
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Storage backed mount points are managed by the {pve} storage subsystem and come
+in three different flavors:
+
+- Image based: These are raw images containing a single ext4 formatted file
+  system.
+- ZFS Subvolumes: These are technically bind mounts, but with managed storage,
+  and thus allow resizing and snapshotting.
+- Directories: passing `size=0` triggers a special case where instead of a raw
+  image a directory is created.
+
+
+Bind mount points
+^^^^^^^^^^^^^^^^^
+
+Bind mounts allow you to access arbitrary directories from your Proxmox VE host
+inside a container. Some potential use cases are:
+
+- Accessing your home directory in the guest
+- Accessing a USB device directory in the guest
+- Accessing an NFS mount from the host in the guest
+
+Bind mounts are considered to not be managed by the storage subsystem, so you
+cannot make snapshots or deal with quotas from inside the container. With
+unprivileged containers you might run into permission problems caused by the
+user mapping and cannot use ACLs.
+
+NOTE: The contents of bind mount points are not backed up when using 'vzdump'.
+
+WARNING: For security reasons, bind mounts should only be established
+using source directories especially reserved for this purpose, e.g., a
+directory hierarchy under `/mnt/bindmounts`. Never bind mount system
+directories like `/`, `/var` or `/etc` into a container - this poses a
+great security risk.
+
+NOTE: The bind mount source path must not contain any symlinks.
+
+For example, to make the directory `/mnt/bindmounts/shared` accessible in the
+container with ID `100` under the path `/shared`, use a configuration line like
'mp0: /mnt/bindmounts/shared,mp=/shared' in '/etc/pve/lxc/100.conf'.
+Alternatively, use 'pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared' to
+achieve the same result.
+
+
+Device mount points
+^^^^^^^^^^^^^^^^^^^
+
+Similar to bind mounts, device mounts are not managed by the storage, but for
+these the `quota` and `acl` options will be honored.
+
+
+FUSE mounts
+~~~~~~~~~~~
+
+WARNING: Because of existing issues in the Linux kernel's freezer
+subsystem the usage of FUSE mounts inside a container is strongly
+advised against, as containers need to be frozen for suspend or
+snapshot mode backups.
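The note in the bind mount section above requires that a bind mount source path contains no symlinks. Before configuring such a mount, this can be verified from the host shell; the following is a hypothetical pre-flight sketch in plain POSIX shell (not part of 'pct', which performs its own validation), using made-up paths under `/tmp/bindtest`:

```shell
# Hypothetical helper: refuse a bind mount source if any path
# component is a symlink (mirrors the documented restriction).
check_bind_source() {
    p=$1
    while [ -n "$p" ] && [ "$p" != "/" ]; do
        if [ -L "$p" ]; then
            echo "refused: $p is a symlink"
            return 1
        fi
        p=$(dirname "$p")
    done
    echo "ok"
}

# Demonstration with throwaway example paths
mkdir -p /tmp/bindtest/real
ln -sfn /tmp/bindtest/real /tmp/bindtest/link
check_bind_source /tmp/bindtest/link || true
check_bind_source /tmp/bindtest/real
```

On a Proxmox VE host, a source that passes this check could then be attached with the 'pct set ... -mpX' syntax shown earlier.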
+
+If FUSE mounts cannot be replaced by other mounting mechanisms or storage
+technologies, it is possible to establish the FUSE mount on the Proxmox host
+and use a bind mount point to make it accessible inside the container.
+
+
+Using quotas inside containers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Quotas allow you to set limits inside a container for the amount of disk
+space that each user can use. This only works on ext4 image based
+storage types and currently does not work with unprivileged
+containers.
+
+Activating the `quota` option causes the following mount options to be
+used for a mount point:
+`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`
+
+This allows quotas to be used like you would on any other system. You
+can initialize the `/aquota.user` and `/aquota.group` files by running
+
+----
+quotacheck -cmug /
+quotaon /
+----
+
+and edit the quotas via the `edquota` command. Refer to the documentation
+of the distribution running inside the container for details.
+
+NOTE: You need to run the above commands for every mount point by passing
+the mount point's path instead of just `/`.
+
+
+Using ACLs inside containers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The standard POSIX Access Control Lists are also available inside containers.
+ACLs allow you to set more detailed file ownership than the traditional user/
+group/others model.
+
+
+Container Network
+-----------------
+
+You can configure up to 10 network interfaces for a single
+container. The corresponding options are called 'net0' to 'net9', and
+they can contain the following settings:
+
+include::pct-network-opts.adoc[]
+
+
+Backup and Restore
+------------------
+
+Container Backup
+~~~~~~~~~~~~~~~~
+
+It is possible to use the 'vzdump' tool for container backup. Please
+refer to the 'vzdump' manual page for details.
+
+Restoring Container Backups
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Restoring container backups made with 'vzdump' is possible using the
+'pct restore' command. By default, 'pct restore' will attempt to restore as much
+of the backed up container configuration as possible. It is possible to override
+the backed up configuration by manually setting container options on the command
+line (see the 'pct' manual page for details).
+
+NOTE: 'pvesm extractconfig' can be used to view the backed up configuration
+contained in a vzdump archive.
+
+There are two basic restore modes, only differing by their handling of mount
+points:
+
+
+"Simple" restore mode
+^^^^^^^^^^^^^^^^^^^^^
+
+If neither the `rootfs` parameter nor any of the optional `mpX` parameters
+are explicitly set, the mount point configuration from the backed up
+configuration file is restored using the following steps:
+
+. Extract mount points and their options from backup
+. Create volumes for storage backed mount points (on storage provided with the
+`storage` parameter, or default local storage if unset)
+. Extract files from backup archive
+. Add bind and device mount points to restored configuration (limited to root user)
+
+NOTE: Since bind and device mount points are never backed up, no files are
+restored in the last step, but only the configuration options. The assumption
+is that such mount points are either backed up with another mechanism (e.g.,
+NFS space that is bind mounted into many containers), or not intended to be
+backed up at all.
+
+This simple mode is also used by the container restore operations in the web
+interface.
+
+
+"Advanced" restore mode
+^^^^^^^^^^^^^^^^^^^^^^^
+
+By setting the `rootfs` parameter (and optionally, any combination of `mpX`
+parameters), the 'pct restore' command is automatically switched into an
+advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
+configuration options contained in the backup archive, and instead only
+uses the options explicitly provided as parameters.
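The choice between the two restore modes described above amounts to checking whether any explicit `rootfs` or `mpX` parameter was passed. Here is a hypothetical sketch of that decision in plain shell (not the actual 'pct' implementation; option names follow the text):

```shell
# Hypothetical sketch: any explicit -rootfs or -mpX parameter on the
# 'pct restore' command line switches restore into advanced mode.
restore_mode() {
    for arg in "$@"; do
        case "$arg" in
            -rootfs|-mp[0-9]) echo advanced; return 0;;
        esac
    done
    echo simple
}

restore_mode -storage local        # prints "simple"
restore_mode -rootfs local-lvm:8   # prints "advanced"
```

The design point this illustrates: the presence of even a single explicit mount point parameter discards the whole backed up mount point layout, rather than merging it.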
+
+This mode allows flexible configuration of mount point settings at restore time,
+for example:
+
+* Set target storages, volume sizes and other options for each mount point
+individually
+* Redistribute backed up files according to new mount point scheme
+* Restore to device and/or bind mount points (limited to root user)
+
+
 Managing Containers with 'pct'
 ------------------------------
 
@@ -332,7 +555,7 @@ and destroy containers, and control execution (start, stop, migrate,
 like network configuration or memory limits.
 
 CLI Usage Examples
-------------------
+~~~~~~~~~~~~~~~~~~
 
 Create a container based on a Debian template (provided you have
 already downloaded the template via the webgui)
@@ -362,7 +585,8 @@ set the address and gateway, while it's running
 
 Reduce the memory of the container to 512MB
 
- pct set -memory 512 100
+ pct set 100 -memory 512
+
 
 Files
 ------
@@ -372,53 +596,6 @@ Files
 ------
 
 '/etc/pve/lxc/<CTID>.conf'
 
 Configuration file for the container '<CTID>'.
 
-Container Mountpoints
----------------------
-
-Beside the root directory the container can also have additional mountpoints.
-Currently there are basically three types of mountpoints: storage backed
-mountpoints, bind mounts and device mounts.
-
-Storage backed mountpoints are managed by the {pve} storage subsystem and come
-in three different flavors:
-
-- Image based: These are raw images containing a single ext4 formatted file
-  system.
-- ZFS Subvolumes: These are technically bind mounts, but with managed storage,
-  and thus allow resizing and snapshotting.
-- Directories: passing `size=0` triggers a special case where instead of a raw
-  image a directory is created.
-
-Bind mounts are considered to not be managed by the storage subsystem, so you
-cannot make snapshots or deal with quotas from inside the container, and with
-unprivileged containers you might run into permission problems caused by the
-user mapping, and cannot use ACLs from inside an unprivileged container.
-
-Similarly device mounts are not managed by the storage, but for these the
-`quota` and `acl` options will be honored.
-
-
-Using quotas inside containers
-------------------------------
-
-This only works on ext4 image based storage types and currently does not work
-with unprivileged containers.
-
-Activating the `quota` option causes the following mount options to be used for
-a mountpoint: `usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`
-
-This allows quotas to be used like you would on any other system. You can
-initialize the `/aquota.user` and `/aquota.group` files by running
-
- quotacheck -cmug /
- quotaon /
-
-And edit the quotas via the `edquota` command. (Note that you need to do this
-for every mountpoint by passing the mountpoint's path instead of just `/`.) Best
-see the documentation specific to the distributiont running inside the
-container.
-
-
 Container Advantages
 --------------------
@@ -457,7 +634,7 @@ Technology Overview
 
 - CRIU: for live migration (planned)
 
-- We use latest available kernels (4.2.X)
+- We use latest available kernels (4.4.X)
 
 - Image based deployment (templates)