ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE

SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
include::attributes.txt[]
endif::manvolnum[]

:title: Linux Container
Containers are a lightweight alternative to fully virtualized
VMs. Instead of emulating a complete Operating System (OS), containers
simply use the OS of the host they run on. This implies that all
containers use the same kernel, and that they can access resources
from the host directly.

This is great because containers do not waste CPU power or memory on
kernel emulation. Container run-time costs are close to zero and
usually negligible. But there are also some drawbacks you need to
consider:
* You can only run Linux based OS inside containers, i.e. it is not
  possible to run FreeBSD or MS Windows inside.

* For security reasons, access to host resources needs to be
  restricted. This is done with AppArmor, SecComp filters and other
  kernel features. Be prepared that some syscalls are not allowed
  inside containers.
{pve} uses https://linuxcontainers.org/[LXC] as its underlying
container technology. We consider LXC a low-level library which
provides countless options. It would be too difficult to use those
tools directly. Instead, we provide a small wrapper called `pct`, the
"Proxmox Container Toolkit".

The toolkit is tightly coupled with {pve}. That means that it is aware
of the cluster setup, and it can use the same network and storage
resources as fully virtualized VMs. You can even use the {pve}
firewall, or manage containers using the HA framework.

Our primary goal is to offer an environment as one would get from a
VM, but without the additional overhead. We call this ``System
Containers''.

NOTE: If you want to run micro-containers (with docker, rkt, ...), it
is best to run them inside a VM.
Security Considerations
-----------------------

Containers use the same kernel as the host, so there is a big attack
surface for malicious users. You should consider this fact if you
provide containers to untrusted users. In general, fully
virtualized VMs provide better isolation.

The good news is that LXC uses many kernel security features like
AppArmor, CGroups and PID and user namespaces, which makes container
usage quite secure. We distinguish two types of containers:
Privileged Containers
~~~~~~~~~~~~~~~~~~~~~

Security is done by dropping capabilities, using mandatory access
control (AppArmor), SecComp filters and namespaces. The LXC team
considers this kind of container unsafe, and they will not consider
new container escape exploits to be security issues worthy of a CVE
and quick fix. So you should use this kind of container only inside a
trusted environment, or when no untrusted task is running as root in
the container.
Unprivileged Containers
~~~~~~~~~~~~~~~~~~~~~~~

This kind of container uses a new kernel feature called user
namespaces. The root UID 0 inside the container is mapped to an
unprivileged user outside the container. This means that most security
issues (container escape, resource abuse, ...) in those containers
will affect a random unprivileged user, and so would be a generic
kernel security bug rather than an LXC issue. The LXC team thinks
unprivileged containers are safe by design.
Guest Operating System Configuration
------------------------------------

We normally try to detect the operating system type inside the
container, and then modify some files inside the container to make
them work as expected. Here is a short list of things we do at
container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers
Changes made by {PVE} are enclosed by comment markers:

 # --- BEGIN PVE ---
 <data>
 # --- END PVE ---

Those markers will be inserted at a reasonable location in the
file. If such a section already exists, it will be updated in place
and will not be moved.
Modification of a file can be prevented by adding a `.pve-ignore.`
file for it. For instance, if the file `/etc/.pve-ignore.hosts`
exists then the `/etc/hosts` file will not be touched. This can be a
simple empty file created via:

 # touch /etc/.pve-ignore.hosts
Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications
by manually setting the `ostype` to `unmanaged`.
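
For example, assuming a container with ID `100`:

 pct set 100 -ostype unmanaged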
OS type detection is done by testing for certain files inside the
container:

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the
auto detected one.
[[pct_container_images]]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything needed to
run a container. You can think of them as tidy container backups. Like
most modern container toolkits, `pct` uses those images when you create
a new container, for example:

 pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz
{pve} itself ships a set of basic templates for the most common
operating systems, and you can download them using the `pveam` (short
for {pve} Appliance Manager) command line utility. You can also
download https://www.turnkeylinux.org/[TurnKey Linux] containers using
that tool (or the graphical user interface).

Our image repositories contain a list of available images, and a cron
job runs daily to download that list. You can trigger that update
manually with:

 pveam update

After that you can view the list of available images using:

 pveam available
You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system  archlinux-base_2015-24-29-1_x86_64.tar.gz
system  centos-7-default_20160205_amd64.tar.xz
system  debian-6.0-standard_6.0-7_amd64.tar.gz
system  debian-7.0-standard_7.0-3_amd64.tar.gz
system  debian-8.0-standard_8.0-1_amd64.tar.gz
system  ubuntu-12.04-standard_12.04-1_amd64.tar.gz
system  ubuntu-14.04-standard_14.04-1_amd64.tar.gz
system  ubuntu-15.04-standard_15.04-1_amd64.tar.gz
system  ubuntu-15.10-standard_15.10-1_amd64.tar.gz
----
Before you can use such a template, you need to download it into one
of your storages. You can simply use storage `local` for that
purpose. For clustered installations, it is preferable to use a shared
storage so that all nodes can access those images.

 pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz
You are now ready to create containers using that image, and you can
list all downloaded images on storage `local` with:

 # pveam list local
 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz  190.20MB

The above command shows you the full {pve} volume identifiers. They
include the storage name, and most other {pve} commands can use them.
For example, you can delete that image later with:

 pveam remove local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz
[[pct_container_storage]]
Container Storage
-----------------

Traditional containers use a very simple storage model, only allowing
a single mount point, the root file system. This was further
restricted to specific file system types like `ext4` and `nfs`.
Additional mounts are often done by user provided scripts. This turned
out to be complex and error prone, so we try to avoid that now.

Our new LXC based container model is more flexible regarding
storage. First, you can have more than a single mount point. This
allows you to choose a suitable storage for each application. For
example, you can use a relatively slow (and thus cheap) storage for
the container root file system. Then you can use a second mount point
to mount a very fast, distributed storage for your database
application.

The second big improvement is that you can use any storage type
supported by the {pve} storage library. That means that you can store
your containers on local `lvmthin` or `zfs`, shared `iSCSI` storage,
or even on distributed storage systems like `ceph`. It also enables us
to use advanced storage features like snapshots and clones. `vzdump`
can also use the snapshot feature to provide consistent container
backups.

Last but not least, you can also mount local devices directly, or
mount local directories using bind mounts. That way you can access
local storage inside containers with zero overhead. Such bind mounts
also provide an easy way to share data between different containers.
Mount Points
~~~~~~~~~~~~

The root mount point is configured with the `rootfs` property, and you can
configure up to 10 additional mount points. The corresponding options
are called `mp0` to `mp9`, and they can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are three types of mount points: storage backed
mount points, bind mounts, and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----
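
Additional mount points can be added the same way. A sketch, assuming a
container with ID `100` and a storage named `thin1` (a new 8 GiB volume
is allocated automatically):

 pct set 100 -mp0 thin1:8,mp=/srv/data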
Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
  system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
  and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
  image a directory is created.
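
In a container configuration, these flavors could look like the
following sketch (storage and volume names are examples):

----
# image based: raw image on a directory storage
mp0: local:100/vm-100-disk-1.raw,mp=/srv/image,size=8G
# ZFS subvolume on a ZFS storage
mp1: tank:subvol-100-disk-1,mp=/srv/subvol,size=8G
# directory: allocated by passing size=0
mp2: local:100/vm-100-disk-2,mp=/srv/dir,size=0
----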
Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are considered to not be managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping, and you cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established
using source directories especially reserved for this purpose, e.g., a
directory hierarchy under `/mnt/bindmounts`. Never bind mount system
directories like `/`, `/var` or `/etc` into a container - this poses a
great security risk.

NOTE: The bind mount source path must not contain any symlinks.
For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.
Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow you to mount block devices of the host directly into
the container. Similar to bind mounts, device mounts are not managed by {PVE}'s
storage subsystem, but the `quota` and `acl` options will be honored.
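
A sketch, assuming the host block device `/dev/sdb1` (a hypothetical
device) should appear as `/srv/device-data` in container `100`:

 pct set 100 -mp0 /dev/sdb1,mp=/srv/device-data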
NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using `vzdump`.
FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer
subsystem the usage of FUSE mounts inside a container is strongly
advised against, as containers need to be frozen for suspend or
snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.
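
A sketch of that workaround, using `sshfs` as an example FUSE file
system (host and path names are hypothetical):

----
# on the Proxmox host: establish the FUSE mount under a reserved directory
mkdir -p /mnt/bindmounts/fuse
sshfs user@fileserver:/export /mnt/bindmounts/fuse
# expose it to container 100 via a bind mount point
pct set 100 -mp0 /mnt/bindmounts/fuse,mp=/mnt/fuse
----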
Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of
disk space that each user can use. This only works on ext4 image based
storage types and currently does not work with unprivileged
containers.

Activating the `quota` option causes the following mount options to be
used for a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like you would on any other system. You
can initialize the `/aquota.user` and `/aquota.group` files by running

 quotacheck -cmug /
 quotaon /

and edit the quotas via the `edquota` command. Refer to the documentation
of the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing
the mount point's path instead of just `/`.
Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard Posix **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.
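
A brief sketch, run inside the container (user and path are examples):

----
# grant the user 'www-data' read and execute access to a directory
setfacl -m u:www-data:rx /srv/app
# inspect the resulting ACL
getfacl /srv/app
----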
[[pct_container_network]]
Container Network
-----------------

You can configure up to 10 network interfaces for a single
container. The corresponding options are called `net0` to `net9`, and
they can contain the following settings:

include::pct-network-opts.adoc[]
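
A typical interface definition in a container configuration looks like
this (bridge and addressing are examples):

 net0: name=eth0,bridge=vmbr0,ip=dhcp,type=veth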
Backup and Restore
------------------

Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please
refer to the `vzdump` manual page for details.
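
For example, to create a snapshot mode backup of container `100` on
storage `local` (values are examples):

 vzdump 100 -mode snapshot -storage local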
Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the
`pct restore` command. By default, `pct restore` will attempt to restore as much
of the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the command
line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.
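
A combined sketch of both commands (CT ID and archive name are examples):

----
# restore the archive into a new container with ID 601
pct restore 601 local:backup/vzdump-lxc-100-2016_01_01-12_00_00.tar.gz
# view the configuration stored inside the archive
pvesm extractconfig local:backup/vzdump-lxc-100-2016_01_01-12_00_00.tar.gz
----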
There are two basic restore modes, only differing by their handling of
mount points:
``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters
are explicitly set, the mount point configuration from the backed up
configuration file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
  `storage` parameter, or default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.
``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only
uses the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore
time, for example (a sketch follows this list):

* Set target storages, volume sizes and other options for each mount point
  individually

* Redistribute backed up files according to new mount point scheme

* Restore to device and/or bind mount points (limited to root user)
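
A sketch of an advanced mode restore (CT ID, archive name, storages and
sizes are examples):

----
# new 4 GiB root file system on storage 'local',
# plus an 8 GiB mount point on storage 'thin1'
pct restore 602 local:backup/vzdump-lxc-100-2016_01_01-12_00_00.tar.gz \
    -rootfs local:4 \
    -mp0 thin1:8,mp=/srv/data
----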
Managing Containers with `pct`
------------------------------

`pct` is the tool to manage Linux Containers on {pve}. You can create
and destroy containers, and control execution (start, stop, migrate,
...). You can use `pct` to set parameters in the associated config file,
like network configuration or memory limits.
CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have
already downloaded the template via the web interface)

 pct create 100 /var/lib/vz/template/cache/debian-8.0-standard_8.0-1_amd64.tar.gz

Start container 100

 pct start 100

Start a login session via getty

 pct console 100

Enter the LXC namespace and run a shell as root user

 pct enter 100

Display the configuration

 pct config 100

Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
set the address and gateway, while it's running

 pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1

Reduce the memory of the container to 512MB

 pct set 100 -memory 512
Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by running `lxc-start` (replace `ID` with
the container's ID):

 lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log

This command will attempt to start the container in foreground mode. To stop
the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.

The collected debug log is written to `/tmp/lxc-ID.log`.

NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.
[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
where `<CTID>` is the numeric ID of the given container. Like all
other files stored inside `/etc/pve/`, they get automatically
replicated to all other cluster nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----
Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful for small corrections, but keep in mind that you need to
restart the container to apply such changes.

For that reason, it is usually better to use the `pct` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running containers. This feature is called "hot plug", and there is no
need to restart the container in that case.
File Format
~~~~~~~~~~~

Container configuration files use a simple colon separated key/value
format. Each line has the following format:

 # this is a comment
 OPTION: value

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for
example:

 lxc.init_cmd: /sbin/my_own_init

or

 lxc.init_cmd = /sbin/my_own_init

Those settings are directly passed to the LXC low-level tools.
Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.Container configuration with snapshot
 memory: 512
 swap: 512
 parent: testsnapshot
 ...

 [testsnapshot]
 memory: 512
 swap: 512
 snaptime: 1457170803
 ...
There are a few snapshot related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).
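
Snapshots are managed with `pct` subcommands, for example:

----
# create a snapshot named 'testsnapshot' of container 100
pct snapshot 100 testsnapshot
# list snapshots, and roll back if needed
pct listsnapshot 100
pct rollback 100 testsnapshot
----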
Options
~~~~~~~

include::pct.conf.5-opts.adoc[]
Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to
prevent incompatible concurrent actions on the affected container. Sometimes
you need to remove such a lock manually (e.g., after a power failure).

 pct unlock <CTID>

CAUTION: Only do that if you are sure the action which set the lock is
no longer running.
Container Advantages
--------------------

* Simple, and fully integrated into {pve}. Setup looks similar to a normal
  VM setup.

** Storage (ZFS, LVM, NFS, Ceph, ...)

* Fast: minimal overhead, as fast as bare metal

* High density (perfect for idle workloads)

* Direct hardware access
Technology Overview
-------------------

* Integrated into {pve} graphical user interface (GUI)

* LXC (https://linuxcontainers.org/)

* lxcfs to provide containerized /proc file system

* CRIU: for live migration (planned)

* We use latest available kernels (4.4.X)

* Image based deployment (templates)

* Container setup from host (network, DNS, storage, ...)
Files
-----

`/etc/pve/lxc/<CTID>.conf`::

Configuration file for the container '<CTID>'.

include::pve-copyright.adoc[]