[[chapter_pct]]
ifdef::manvolnum[]
pct(1)
======
include::attributes.txt[]
:pve-toplevel:

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE


SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
include::attributes.txt[]
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Linux Container
endif::wiki[]

Containers are a lightweight alternative to fully virtualized
VMs. Instead of emulating a complete Operating System (OS), containers
simply use the OS of the host they run on. This implies that all
containers use the same kernel, and that they can access resources
from the host directly.

This is great because containers do not waste CPU power or memory due
to kernel emulation. Container run-time costs are close to zero and
usually negligible. But there are also some drawbacks to consider:

* You can only run Linux-based operating systems inside containers,
i.e. it is not possible to run FreeBSD or MS Windows inside.

* For security reasons, access to host resources needs to be
restricted. This is done with AppArmor, SecComp filters and other
kernel features. Be prepared that some syscalls are not allowed
inside containers.

{pve} uses https://linuxcontainers.org/[LXC] as its underlying
container technology. We consider LXC a low-level library that
provides countless options. Using those tools directly would be too
complex, so instead we provide a small wrapper called `pct`, the
"Proxmox Container Toolkit".

The toolkit is tightly coupled with {pve}. That means that it is aware
of the cluster setup, and it can use the same network and storage
resources as fully virtualized VMs. You can even use the {pve}
firewall, or manage containers using the HA framework.

Our primary goal is to offer an environment as one would get from a
VM, but without the additional overhead. We call this "System
Containers".

NOTE: If you want to run micro-containers (with docker, rkt, ...), it
is best to run them inside a VM.


Technology Overview
-------------------

* LXC (https://linuxcontainers.org/)

* Integrated into {pve} graphical user interface (GUI)

* Easy to use command line tool `pct`

* Access via {pve} REST API

* lxcfs to provide containerized /proc file system

* AppArmor/Seccomp to improve security

* CRIU: for live migration (planned)

* Use latest available kernels (4.4.X)

* Image based deployment (templates)

* Use {pve} storage library

* Container setup from host (network, DNS, storage, ...)


Security Considerations
-----------------------

Containers use the same kernel as the host, so they expose a large
attack surface to malicious users. You should keep this in mind if you
provide containers to entirely untrusted people. In general, fully
virtualized VMs provide better isolation.

The good news is that LXC uses many kernel security features like
AppArmor, CGroups and PID and user namespaces, which make container
usage quite secure. We distinguish two types of containers:


Privileged Containers
~~~~~~~~~~~~~~~~~~~~~

Security is achieved by dropping capabilities, using mandatory access
control (AppArmor), SecComp filters and namespaces. The LXC team
considers this kind of container unsafe, and will not treat new
container escape exploits as security issues worthy of a CVE and quick
fix. So you should use this kind of container only inside a trusted
environment, or when no untrusted task is running as root in the
container.


Unprivileged Containers
~~~~~~~~~~~~~~~~~~~~~~~

This kind of container uses a new kernel feature called user
namespaces. The root UID 0 inside the container is mapped to an
unprivileged user outside the container. This means that most security
issues (container escape, resource abuse, ...) in those containers
will affect a random unprivileged user, and would therefore be a
generic kernel security bug rather than an LXC issue. The LXC team
considers unprivileged containers safe by design.


Guest Operating System Configuration
------------------------------------

We normally try to detect the operating system type inside the
container, and then modify some files inside the container to make
them work as expected. Here is a short list of things we do at
container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

Changes made by {PVE} are enclosed by comment markers:

----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the
file. If such a section already exists, it will be updated in place
and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.`
file for it. For instance, if the file `/etc/.pve-ignore.hosts`
exists then the `/etc/hosts` file will not be touched. This can be a
simple empty file created via:

 # touch /etc/.pve-ignore.hosts

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications
by manually setting the `ostype` to `unmanaged`.

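For example, the following command would turn off all automatic
modifications for container 100 (the CTID is only illustrative):

 pct set 100 -ostype unmanaged
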
OS type detection is done by testing for certain files inside the
container:

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the auto
detected type.


[[pct_container_images]]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything needed to
run a container. You can think of them as tidy container backups. Like
most modern container toolkits, `pct` uses those images when you
create a new container, for example:

 pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz

{pve} itself ships a set of basic templates for most common
operating systems, and you can download them using the `pveam` (short
for {pve} Appliance Manager) command line utility. You can also
download https://www.turnkeylinux.org/[TurnKey Linux] containers using
that tool (or the graphical user interface).

Our image repositories contain a list of available images, and a
cron job runs each day to download that list. You can trigger that
update manually with:

 pveam update

After that you can view the list of available images using:

 pveam available

You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system archlinux-base_2015-24-29-1_x86_64.tar.gz
system centos-7-default_20160205_amd64.tar.xz
system debian-6.0-standard_6.0-7_amd64.tar.gz
system debian-7.0-standard_7.0-3_amd64.tar.gz
system debian-8.0-standard_8.0-1_amd64.tar.gz
system ubuntu-12.04-standard_12.04-1_amd64.tar.gz
system ubuntu-14.04-standard_14.04-1_amd64.tar.gz
system ubuntu-15.04-standard_15.04-1_amd64.tar.gz
system ubuntu-15.10-standard_15.10-1_amd64.tar.gz
----

Before you can use such a template, you need to download it into one
of your storages. You can simply use storage `local` for that
purpose. For clustered installations, it is preferred to use a shared
storage so that all nodes can access those images.

 pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz

You are now ready to create containers using that image, and you can
list all downloaded images on storage `local` with:

----
# pveam list local
local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz 190.20MB
----

The above command shows you the full {pve} volume identifiers. They include
the storage name, and most other {pve} commands can use them. For
example, you can delete that image later with:

 pveam remove local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz


[[pct_container_storage]]
Container Storage
-----------------

Traditional containers use a very simple storage model, only allowing
a single mount point, the root file system. This was further
restricted to specific file system types like `ext4` and `nfs`.
Additional mounts are often done by user provided scripts. This turned
out to be complex and error prone, so we try to avoid that now.

Our new LXC based container model is more flexible regarding
storage. First, you can have more than a single mount point. This
allows you to choose a suitable storage for each application. For
example, you can use a relatively slow (and thus cheap) storage for
the container root file system. Then you can use a second mount point
to mount a very fast, distributed storage for your database
application. See section <<pct_mount_points,Mount Points>> for further
details.

The second big improvement is that you can use any storage type
supported by the {pve} storage library. That means that you can store
your containers on local `lvmthin` or `zfs`, shared `iSCSI` storage,
or even on distributed storage systems like `ceph`. It also enables us
to use advanced storage features like snapshots and clones. `vzdump`
can also use the snapshot feature to provide consistent container
backups.

Last but not least, you can also mount local devices directly, or
mount local directories using bind mounts. That way you can access
local storage inside containers with zero overhead. Such bind mounts
also provide an easy way to share data between different containers.


FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer
subsystem, the usage of FUSE mounts inside a container is strongly
advised against, as containers need to be frozen for suspend or
snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.


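In practice this might look roughly as follows (a sketch; `sshfs` is
just one example of a FUSE file system, and all paths and the CTID are
illustrative):

----
# on the host: establish the FUSE mount
sshfs user@fileserver:/export /mnt/bindmounts/fuse
# expose it to container 100 through a bind mount point
pct set 100 -mp0 /mnt/bindmounts/fuse,mp=/mnt/fuse
----
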
Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of
disk space that each user can use. This only works on ext4 image based
storage types and currently does not work with unprivileged
containers.

Activating the `quota` option causes the following mount options to be
used for a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

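As a sketch, the option can be enabled on a mount point in the
container configuration like this (volume name and size are only
illustrative):

----
rootfs: local:100/vm-100-disk-1.raw,size=8G,quota=1
----
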
This allows quotas to be used as you would on any other system. You
can initialize the `/aquota.user` and `/aquota.group` files by running

----
quotacheck -cmug /
quotaon /
----

and edit the quotas via the `edquota` command. Refer to the documentation
of the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing
the mount point's path instead of just `/`.


Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard Posix **A**ccess **C**ontrol **L**ists are also available inside containers.
ACLs allow you to set more detailed file ownership than the traditional
user/group/others model.


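A minimal example of working with ACLs inside a container, assuming
the guest distribution ships the standard `setfacl`/`getfacl` tools
(directory and user are only illustrative):

----
# give the www-data user full access to a shared directory
setfacl -m u:www-data:rwx /srv/shared
# show the resulting ACL entries
getfacl /srv/shared
----
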
[[pct_setting]]
Container Settings
------------------

[[pct_cpu]]
CPU
~~~

You can restrict the number of visible CPUs inside the container using
the `cores` option. This is implemented using the Linux 'cpuset'
cgroup (**c**ontrol **g**roup). A special task inside `pvestatd` tries
to distribute running containers among available CPUs. You can view
the assigned CPUs using the following command:

----
# pct cpusets
 ---------------------
102: 6 7
105: 2 3 4 5
108: 0 1
 ---------------------
----

Containers use the host kernel directly, so all tasks inside a
container are handled by the host CPU scheduler. {pve} uses the Linux
'CFS' (**C**ompletely **F**air **S**cheduler) scheduler by default,
which has additional bandwidth control options.

[horizontal]
cpulimit: :: You can use this option to further limit assigned CPU
time. Please note that this is a floating point number, so it is
perfectly valid to assign two cores to a container, but restrict
overall CPU consumption to half a core.
+
----
cores: 2
cpulimit: 0.5
----

cpuunits: :: This is a relative weight passed to the kernel
scheduler. The larger the number is, the more CPU time this container
gets. The number is relative to the weights of all the other running
containers. The default is 1024. You can use this setting to
prioritize some containers.


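For example, the following would give container 101 roughly twice the
default scheduling weight (the CTID is only illustrative):

 pct set 101 -cpuunits 2048
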
[[pct_memory]]
Memory
~~~~~~

Container memory is controlled using the cgroup memory controller.

[horizontal]

memory: :: Limit overall memory usage. This corresponds
to the `memory.limit_in_bytes` cgroup setting.

swap: :: Allows the container to use additional swap memory from the
host swap space. This corresponds to the `memory.memsw.limit_in_bytes`
cgroup setting, which is set to the sum of both values (`memory +
swap`).


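For instance, to limit a container to 512MB of RAM plus 512MB of swap,
you could use (a sketch; 100 is an arbitrary CTID):

 pct set 100 -memory 512 -swap 512
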
[[pct_mount_points]]
Mount Points
~~~~~~~~~~~~

The root mount point is configured with the `rootfs` property, and you can
configure up to 10 additional mount points. The corresponding options
are called `mp0` to `mp9`, and they can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are basically three types of mount points: storage backed
mount points, bind mounts and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----


Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
  system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
  and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
  image a directory is created.

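For example, the following would allocate a new 8GB volume on the
storage `local-lvm` and mount it at `/mnt/data` inside container 100
(storage name and CTID are only illustrative):

 pct set 100 -mp0 local-lvm:8,mp=/mnt/data
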
Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are not managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping and cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established
using source directories especially reserved for this purpose, e.g., a
directory hierarchy under `/mnt/bindmounts`. Never bind mount system
directories like `/`, `/var` or `/etc` into a container - this poses a
great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.


Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow you to mount block devices of the host directly into
the container. Similar to bind mounts, device mounts are not managed by
{PVE}'s storage subsystem, but the `quota` and `acl` options will be honored.

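As a sketch, a host block device could be mounted into container 100
like this (`/dev/sdb1` is just an example device):

 pct set 100 -mp0 /dev/sdb1,mp=/mnt/device-data
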
NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using `vzdump`.


[[pct_container_network]]
Network
~~~~~~~

You can configure up to 10 network interfaces for a single
container. The corresponding options are called `net0` to `net9`, and
they can contain the following settings:

include::pct-network-opts.adoc[]

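A typical interface definition in the container configuration might
look like this (bridge name and addresses are only illustrative):

----
net0: name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
----
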
Backup and Restore
------------------


Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please
refer to the `vzdump` manual page for details.

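For example, the following would create a snapshot mode backup of
container 100 on the storage `local` (a sketch; mode and target
storage depend on your setup):

 vzdump 100 -mode snapshot -storage local
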
Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the
`pct restore` command. By default, `pct restore` will attempt to restore as much
of the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the command
line (see the `pct` manual page for details).

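In its simplest form, a restore might look like this (`<archive>`
stands for an actual vzdump archive stored on `local`):

 pct restore 100 local:backup/<archive>
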
NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:


``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters
are explicitly set, the mount point configuration from the backed up
configuration file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
  `storage` parameter, or default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.


``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only
uses the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore time,
for example:

* Set target storages, volume sizes and other options for each mount point
  individually
* Redistribute backed up files according to new mount point scheme
* Restore to device and/or bind mount points (limited to root user)


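A sketch of an advanced restore, assuming a storage named `local-lvm`
and using `<archive>` as a placeholder, which puts the root file
system on a new 8GB volume and adds a separate 16GB mount point:

 pct restore 100 local:backup/<archive> -rootfs local-lvm:8 -mp0 local-lvm:16,mp=/srv/data
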
Managing Containers with `pct`
------------------------------

`pct` is the tool to manage Linux Containers on {pve}. You can create
and destroy containers, and control execution (start, stop, migrate,
...). You can use `pct` to set parameters in the associated config file,
like network configuration or memory limits.


CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have
already downloaded the template via the web interface)

 pct create 100 /var/lib/vz/template/cache/debian-8.0-standard_8.0-1_amd64.tar.gz

Start container 100

 pct start 100

Start a login session via getty

 pct console 100

Enter the LXC namespace and run a shell as root user

 pct enter 100

Display the configuration

 pct config 100

Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
set the address and gateway, while it's running

 pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1

Reduce the memory of the container to 512MB

 pct set 100 -memory 512


Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by running `lxc-start` (replace `ID` with
the container's ID):

 lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log

This command will attempt to start the container in foreground mode. To stop
the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.

The collected debug log is written to `/tmp/lxc-ID.log`.

NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.


[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
where `<CTID>` is the numeric ID of the given container. Like all
other files stored inside `/etc/pve/`, they get automatically
replicated to all other cluster nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful for small corrections, but keep in mind that you need to
restart the container to apply such changes.

For that reason, it is usually better to use the `pct` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running containers. This feature is called "hot plug", and there is no
need to restart the container in that case.


File Format
~~~~~~~~~~~

Container configuration files use a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for
example:

 lxc.init_cmd: /sbin/my_own_init

or

 lxc.init_cmd = /sbin/my_own_init

Those settings are directly passed to the LXC low-level tools.


[[pct_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.Container configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

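Such a snapshot would typically be created with a command along these
lines (a sketch; 100 is an arbitrary CTID):

 pct snapshot 100 testsnapshot
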
There are a few snapshot related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).


[[pct_options]]
Options
~~~~~~~

include::pct.conf.5-opts.adoc[]


Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to
prevent incompatible concurrent actions on the affected container. Sometimes
you need to remove such a lock manually (e.g., after a power failure).

 pct unlock <CTID>

CAUTION: Only do that if you are sure the action which set the lock is
no longer running.


ifdef::manvolnum[]

Files
------

`/etc/pve/lxc/<CTID>.conf`::

Configuration file for the container '<CTID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]