ifdef::manvolnum[]
pct(1)
======
include::attributes.txt[]

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE

SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
include::attributes.txt[]
endif::manvolnum[]

:title: Linux Container
Containers are a lightweight alternative to fully virtualized
VMs. Instead of emulating a complete Operating System (OS), containers
simply use the OS of the host they run on. This implies that all
containers use the same kernel, and that they can access resources
from the host directly.
This is great because containers waste neither CPU power nor memory on
kernel emulation. Container run-time costs are close to zero and
usually negligible. But there are also some drawbacks you need to
consider:

* You can only run Linux-based OS inside containers, i.e. it is not
possible to run FreeBSD or MS Windows inside them.

* For security reasons, access to host resources needs to be
restricted. This is done with AppArmor, SecComp filters and other
kernel features. Be prepared that some syscalls are not allowed
inside containers.
{pve} uses https://linuxcontainers.org/[LXC] as its underlying container
technology. We consider LXC a low-level library, which provides
countless options. It would be too difficult to use those tools
directly. Instead, we provide a small wrapper called `pct`, the
"Proxmox Container Toolkit".

The toolkit is tightly coupled with {pve}. That means that it is aware
of the cluster setup, and it can use the same network and storage
resources as fully virtualized VMs. You can even use the {pve}
firewall, or manage containers using the HA framework.
Our primary goal is to offer an environment as one would get from a
VM, but without the additional overhead. We call this ``System
Containers''.

NOTE: If you want to run micro-containers (with docker, rkt, ...), it
is best to run them inside a VM.
Security Considerations
-----------------------

Containers use the same kernel as the host, so there is a big attack
surface for malicious users. You should consider this fact if you
provide containers to untrusted people. In general, fully
virtualized VMs provide better isolation.

The good news is that LXC uses many kernel security features like
AppArmor, CGroups and PID and user namespaces, which makes container
usage quite secure. We distinguish two types of containers:
Privileged Containers
~~~~~~~~~~~~~~~~~~~~~

Security is done by dropping capabilities, using mandatory access
control (AppArmor), SecComp filters and namespaces. The LXC team
considers this kind of container unsafe, and they will not consider
new container escape exploits to be security issues worthy of a CVE
and quick fix. So you should use this kind of container only inside a
trusted environment, or when no untrusted task is running as root in
the container.
Unprivileged Containers
~~~~~~~~~~~~~~~~~~~~~~~

This kind of container uses a new kernel feature called user
namespaces. The root UID 0 inside the container is mapped to an
unprivileged user outside the container. This means that most security
issues (container escape, resource abuse, ...) in those containers
will affect a random unprivileged user, and would thus be a generic
kernel security bug rather than an LXC issue. The LXC team thinks
unprivileged containers are safe by design.
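As a minimal illustration of the mapping (assuming the default offset of
100000 used for unprivileged containers), a file created by root inside
the container shows up on the host as owned by a high, unprivileged UID:

----
# inside the container, as root (UID 0):
touch /root/testfile
ls -ln /root/testfile    # owner is UID 0 from the container's view

# on the host, the same file on the container's root file system is
# owned by the mapped UID (100000 with the default offset); the exact
# path depends on your storage setup:
ls -ln /path/to/container/rootfs/root/testfile
----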
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
where `<CTID>` is the numeric ID of the given container. Like all
other files stored inside `/etc/pve/`, they get automatically
replicated to all other cluster nodes.
NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster-wide.
.Example Container Configuration
----
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----
Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful to make small corrections, but keep in mind that you need to
restart the container to apply such changes.

For that reason, it is usually better to use the `pct` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running containers. This feature is called "hot plug", and there is no
need to restart the container in that case.
File Format
~~~~~~~~~~~

Container configuration files use a simple colon-separated key/value
format. Each line has the following format:
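----
# this is a comment
OPTION: value
----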
Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.
It is possible to add low-level, LXC style configuration directly, for
example:

----
lxc.init_cmd: /sbin/my_own_init
----

or

----
lxc.init_cmd = /sbin/my_own_init
----

Those settings are directly passed to the LXC low-level tools.
Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:
.Container configuration with snapshot
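----
parent: testsnapshot
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G

[testsnapshot]
# illustrative snapshot section
snaptime: 1457170803
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----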
There are a few snapshot related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
timestamp (Unix epoch).
Guest Operating System Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We normally try to detect the operating system type inside the
container, and then modify some files inside the container to make
them work as expected. Here is a short list of things we do at
container startup:
set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers
Changes made by {PVE} are enclosed by comment markers:
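----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----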
Those markers will be inserted at a reasonable location in the
file. If such a section already exists, it will be updated in place
and will not be moved.
Modification of a file can be prevented by adding a `.pve-ignore.`
file for it. For instance, if the file `/etc/.pve-ignore.hosts`
exists then the `/etc/hosts` file will not be touched. This can be a
simple empty file created via:

----
# touch /etc/.pve-ignore.hosts
----
Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications
by manually setting the `ostype` to `unmanaged`.
OS type detection is done by testing for certain files inside the
container:

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release
NOTE: Container start fails if the configured `ostype` differs from the auto
detected type.

Options
~~~~~~~

include::pct.conf.5-opts.adoc[]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything to run a
container. You can think of it as a tidy container backup. Like most
modern container toolkits, `pct` uses those images when you create a
new container, for example:
----
pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz
----
{pve} itself ships a set of basic templates for most common
operating systems, and you can download them using the `pveam` (short
for {pve} Appliance Manager) command line utility. You can also
download https://www.turnkeylinux.org/[TurnKey Linux] containers using
that tool (or the graphical user interface).
Our image repositories contain a list of available images, and a cron
job runs each day to download that list. You can trigger that
update manually with:
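----
pveam update
----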
After that you can view the list of available images using:
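----
pveam available
----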
You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:
.List available system images
----
# pveam available --section system
system archlinux-base_2015-24-29-1_x86_64.tar.gz
system centos-7-default_20160205_amd64.tar.xz
system debian-6.0-standard_6.0-7_amd64.tar.gz
system debian-7.0-standard_7.0-3_amd64.tar.gz
system debian-8.0-standard_8.0-1_amd64.tar.gz
system ubuntu-12.04-standard_12.04-1_amd64.tar.gz
system ubuntu-14.04-standard_14.04-1_amd64.tar.gz
system ubuntu-15.04-standard_15.04-1_amd64.tar.gz
system ubuntu-15.10-standard_15.10-1_amd64.tar.gz
----
Before you can use such a template, you need to download it into one
of your storages. You can simply use storage `local` for that
purpose. For clustered installations, it is preferred to use a shared
storage so that all nodes can access those images.
----
pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz
----
You are now ready to create containers using that image, and you can
list all downloaded images on storage `local` with:
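----
# pveam list local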
local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz 190.20MB
----
The above command shows you the full {pve} volume identifier. It includes
the storage name, and most other {pve} commands can use it. For
example, you can delete that image later with:
----
pveam remove local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz
----
Container Storage
-----------------

Traditional containers use a very simple storage model, only allowing
a single mount point, the root file system. This was further
restricted to specific file system types like `ext4` and `nfs`.
Additional mounts were often done by user-provided scripts. This turned
out to be complex and error-prone, so we try to avoid that now.
Our new LXC based container model is more flexible regarding
storage. First, you can have more than a single mount point. This
allows you to choose a suitable storage for each application. For
example, you can use a relatively slow (and thus cheap) storage for
the container root file system. Then you can use a second mount point
to mount a very fast, distributed storage for your database
application.
The second big improvement is that you can use any storage type
supported by the {pve} storage library. That means that you can store
your containers on local `lvmthin` or `zfs`, shared `iSCSI` storage,
or even on distributed storage systems like `ceph`. It also enables us
to use advanced storage features like snapshots and clones. `vzdump`
can also use the snapshot feature to provide consistent container
backups.
Last but not least, you can also mount local devices directly, or
mount local directories using bind mounts. That way you can access
local storage inside containers with zero overhead. Such bind mounts
also provide an easy way to share data between different containers.
Mount Points
~~~~~~~~~~~~

The root mount point is configured with the `rootfs` property, and you can
configure up to 10 additional mount points. The corresponding options
are called `mp0` to `mp9`, and they can contain the following settings:
include::pct-mountpoint-opts.adoc[]
Currently there are basically three types of mount points: storage backed
mount points, bind mounts and device mounts.
.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----
Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:
- Image based: these are raw images containing a single ext4 formatted file
  system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
  and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
  image a directory is created.
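For example (a sketch only, assuming a storage named `local-lvm`), the
following would allocate a new 8 GiB volume on that storage and mount it
at `/mnt/data` inside container 100:

----
pct set 100 -mp0 local-lvm:8,mp=/mnt/data
----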
Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest
Bind mounts are not managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping, and you cannot use ACLs.
NOTE: The contents of bind mount points are not backed up when using `vzdump`.
WARNING: For security reasons, bind mounts should only be established
using source directories especially reserved for this purpose, e.g., a
directory hierarchy under `/mnt/bindmounts`. Never bind mount system
directories like `/`, `/var` or `/etc` into a container - this poses a
great security risk.
NOTE: The bind mount source path must not contain any symlinks.
For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.
Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow mounting block devices of the host directly into the
container. Similar to bind mounts, device mounts are not managed by {PVE}'s
storage subsystem, but the `quota` and `acl` options will be honored.
NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.
NOTE: The contents of device mount points are not backed up when using `vzdump`.
FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer
subsystem the usage of FUSE mounts inside a container is strongly
advised against, as containers need to be frozen for suspend or
snapshot mode backups.
If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.
Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow setting limits inside a container for the amount of disk
space that each user can use. This only works on ext4 image based
storage types and currently does not work with unprivileged
containers.

Activating the `quota` option causes the following mount options to be
used for a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`
This allows quotas to be used like you would on any other system. You
can initialize the `/aquota.user` and `/aquota.group` files by running
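----
quotacheck -cmug /
quotaon /
----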
and edit the quotas via the `edquota` command. Refer to the documentation
of the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing
the mount point's path instead of just `/`.
Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside containers.
ACLs allow you to set more detailed file ownership than the traditional user/
group/others model.
Container Network
-----------------

You can configure up to 10 network interfaces for a single
container. The corresponding options are called `net0` to `net9`, and
they can contain the following settings:

include::pct-network-opts.adoc[]
Backup and Restore
------------------

Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please
refer to the `vzdump` manual page for details.
Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the
`pct restore` command. By default, `pct restore` will attempt to restore as much
of the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the command
line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.
There are two basic restore modes, only differing by their handling of mount
points:
``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters
are explicitly set, the mount point configuration from the backed up
configuration file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
  `storage` parameter, or default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root user)
NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.
``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only
uses the options explicitly provided as parameters.
This mode allows flexible configuration of mount point settings at restore time,
for example:

* Set target storages, volume sizes and other options for each mount point

* Redistribute backed up files according to new mount point scheme

* Restore to device and/or bind mount points (limited to root user)
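For example (a sketch only, with placeholder archive and storage names),
the following would restore a backup into a new container 123 while placing
the root file system on a 16 GiB volume on storage `local-lvm`:

----
pct restore 123 local:backup/vzdump-lxc-100.tar.gz -rootfs local-lvm:16
----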
Managing Containers with `pct`
------------------------------

`pct` is the tool to manage Linux Containers on {pve}. You can create
and destroy containers, and control execution (start, stop, migrate,
...). You can use `pct` to set parameters in the associated config file,
like network configuration or memory limits.
CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have
already downloaded the template via the web interface)

----
pct create 100 /var/lib/vz/template/cache/debian-8.0-standard_8.0-1_amd64.tar.gz
----
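Start the container

----
pct start 100
----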
Start a login session via getty
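----
pct console 100
----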
Enter the LXC namespace and run a shell as root user
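----
pct enter 100
----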
Display the configuration
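----
pct config 100
----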
Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
set the address and gateway, while it's running

----
pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
----
Reduce the memory of the container to 512MB

----
pct set 100 -memory 512
----
Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by running `lxc-start` (replace `ID` with
the container's ID):
----
lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
----
This command will attempt to start the container in foreground mode; to stop
the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.

The collected debug log is written to `/tmp/lxc-ID.log`.
NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.
Files
-----

`/etc/pve/lxc/<CTID>.conf`::

Configuration file for the container '<CTID>'.
Container Advantages
--------------------

* Simple, and fully integrated into {pve}. Setup looks similar to a normal
  VM

** Storage (ZFS, LVM, NFS, Ceph, ...)

* Fast: minimal overhead, as fast as bare metal

* High density (perfect for idle workloads)

* Direct hardware access
Technology Overview
-------------------

* Integrated into {pve} graphical user interface (GUI)

* LXC (https://linuxcontainers.org/)

* lxcfs to provide containerized /proc file system

* CRIU: for live migration (planned)

* We use latest available kernels (4.4.X)

* Image based deployment (templates)

* Container setup from host (network, DNS, storage, ...)
ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]