1 ifdef::manvolnum[]
2 PVE({manvolnum})
3 ================
4 include::attributes.txt[]
5
6 :pve-toplevel:
7
8 NAME
9 ----
10
11 pct - Tool to manage Linux Containers (LXC) on Proxmox VE
12
13
14 SYNOPSIS
15 --------
16
17 include::pct.1-synopsis.adoc[]
18
19 DESCRIPTION
20 -----------
21 endif::manvolnum[]
22
23 ifndef::manvolnum[]
24 Proxmox Container Toolkit
25 =========================
26 include::attributes.txt[]
27 endif::manvolnum[]
28
29 ifdef::wiki[]
30 :pve-toplevel:
31 :title: Linux Container
32 endif::wiki[]
33
34 Containers are a lightweight alternative to fully virtualized
35 VMs. Instead of emulating a complete Operating System (OS), containers
36 simply use the OS of the host they run on. This implies that all
37 containers use the same kernel, and that they can access resources
38 from the host directly.
39
40 This is great because containers waste neither CPU power nor memory on
41 kernel emulation. Container run-time costs are close to zero and
42 usually negligible. But there are also some drawbacks you need to
43 consider:
44
45 * You can only run Linux-based operating systems inside containers, i.e. it is
46 not possible to run FreeBSD or MS Windows inside them.
47
48 * For security reasons, access to host resources needs to be
49 restricted. This is done with AppArmor, SecComp filters and other
50 kernel features. Be prepared that some syscalls are not allowed
51 inside containers.
52
53 {pve} uses https://linuxcontainers.org/[LXC] as underlying container
54 technology. We consider LXC a low-level library, which provides
55 countless options. It would be too difficult to use those tools
56 directly. Instead, we provide a small wrapper called `pct`, the
57 "Proxmox Container Toolkit".
58
59 The toolkit is tightly coupled with {pve}. That means that it is aware
60 of the cluster setup, and it can use the same network and storage
61 resources as fully virtualized VMs. You can even use the {pve}
62 firewall, or manage containers using the HA framework.
63
64 Our primary goal is to offer an environment as one would get from a
65 VM, but without the additional overhead. We call this "System
66 Containers".
67
68 NOTE: If you want to run micro-containers (with docker, rkt, ...), it
69 is best to run them inside a VM.
70
71
72 Security Considerations
73 -----------------------
74
75 Containers use the same kernel as the host, so there is a big attack
76 surface for malicious users. You should consider this fact if you
77 provide containers to totally untrusted people. In general, fully
78 virtualized VMs provide better isolation.
79
80 The good news is that LXC uses many kernel security features like
81 AppArmor, CGroups and PID and user namespaces, which makes container
82 usage quite secure. We distinguish two types of containers:
83
84
85 Privileged Containers
86 ~~~~~~~~~~~~~~~~~~~~~
87
88 Security is done by dropping capabilities, using mandatory access
89 control (AppArmor), SecComp filters and namespaces. The LXC team
90 considers this kind of container unsafe, and they will not consider
91 new container escape exploits to be security issues worthy of a CVE
92 and a quick fix. So you should use this kind of container only inside a
93 trusted environment, or when no untrusted task is running as root in
94 the container.
95
96
97 Unprivileged Containers
98 ~~~~~~~~~~~~~~~~~~~~~~~
99
100 This kind of container uses a new kernel feature called user
101 namespaces. The root UID 0 inside the container is mapped to an
102 unprivileged user outside the container. This means that most security
103 issues (container escape, resource abuse, ...) in those containers
104 will affect a random unprivileged user, and so would be a generic
105 kernel security bug rather than an LXC issue. The LXC team thinks
106 unprivileged containers are safe by design.
107
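For example, a container can be created as unprivileged right away by passing
the `unprivileged` flag to `pct create`. A minimal sketch, assuming a Debian
template has already been downloaded to the `local` storage (the CT ID `200` is
just a placeholder):

----
# pct create 200 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz -unprivileged 1
----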
108
109 Configuration
110 -------------
111
112 The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
113 where `<CTID>` is the numeric ID of the given container. Like all
114 other files stored inside `/etc/pve/`, it gets automatically
115 replicated to all other cluster nodes.
116
117 NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
118 unique cluster wide.
119
120 .Example Container Configuration
121 ----
122 ostype: debian
123 arch: amd64
124 hostname: www
125 memory: 512
126 swap: 512
127 net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
128 rootfs: local:107/vm-107-disk-1.raw,size=7G
129 ----
130
131 Those configuration files are simple text files, and you can edit them
132 using a normal text editor (`vi`, `nano`, ...). This is sometimes
133 useful for making small corrections, but keep in mind that you need to
134 restart the container to apply such changes.
135
136 For that reason, it is usually better to use the `pct` command to
137 generate and modify those files, or do the whole thing using the GUI.
138 Our toolkit is smart enough to instantaneously apply most changes to
139 running containers. This feature is called "hot plug", and there is no
140 need to restart the container in that case.
141
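For example, assuming the container `107` from the configuration example above
is running, its memory limit can be raised on the fly with:

----
# pct set 107 -memory 1024
----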
142
143 File Format
144 ~~~~~~~~~~~
145
146 Container configuration files use a simple colon separated key/value
147 format. Each line has the following format:
148
149 -----
150 # this is a comment
151 OPTION: value
152 -----
153
154 Blank lines in those files are ignored, and lines starting with a `#`
155 character are treated as comments and are also ignored.
156
157 It is possible to add low-level, LXC style configuration directly, for
158 example:
159
160 lxc.init_cmd: /sbin/my_own_init
161
162 or
163
164 lxc.init_cmd = /sbin/my_own_init
165
166 Those settings are directly passed to the LXC low-level tools.
167
168
169 Snapshots
170 ~~~~~~~~~
171
172 When you create a snapshot, `pct` stores the configuration at snapshot
173 time into a separate snapshot section within the same configuration
174 file. For example, after creating a snapshot called ``testsnapshot'',
175 your configuration file will look like this:
176
177 .Container configuration with snapshot
178 ----
179 memory: 512
180 swap: 512
181 parent: testsnapshot
182 ...
183
184 [testsnapshot]
185 memory: 512
186 swap: 512
187 snaptime: 1457170803
188 ...
189 ----
190
191 There are a few snapshot related properties like `parent` and
192 `snaptime`. The `parent` property is used to store the parent/child
193 relationship between snapshots. `snaptime` is the snapshot creation
194 time stamp (Unix epoch).
195
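Such a snapshot can be created with the `pct snapshot` subcommand, for example
(the container ID `100` is just a placeholder):

----
# pct snapshot 100 testsnapshot
----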
196
197 Guest Operating System Configuration
198 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
199
200 We normally try to detect the operating system type inside the
201 container, and then modify some files inside the container to make
202 them work as expected. Here is a short list of things we do at
203 container startup:
204
205 set /etc/hostname:: to set the container name
206
207 modify /etc/hosts:: to allow lookup of the local hostname
208
209 network setup:: pass the complete network setup to the container
210
211 configure DNS:: pass information about DNS servers
212
213 adapt the init system:: for example, fix the number of spawned getty processes
214
215 set the root password:: when creating a new container
216
217 rewrite ssh_host_keys:: so that each container has unique keys
218
219 randomize crontab:: so that cron does not start at the same time on all containers
220
221 Changes made by {PVE} are enclosed by comment markers:
222
223 ----
224 # --- BEGIN PVE ---
225 <data>
226 # --- END PVE ---
227 ----
228
229 Those markers will be inserted at a reasonable location in the
230 file. If such a section already exists, it will be updated in place
231 and will not be moved.
232
233 Modification of a file can be prevented by adding a `.pve-ignore.`
234 file for it. For instance, if the file `/etc/.pve-ignore.hosts`
235 exists then the `/etc/hosts` file will not be touched. This can be a
236 simple empty file created via:
237
238 # touch /etc/.pve-ignore.hosts
239
240 Most modifications are OS dependent, so they differ between different
241 distributions and versions. You can completely disable modifications
242 by manually setting the `ostype` to `unmanaged`.
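For example, assuming a container with ID `100`, this could be done with:

----
# pct set 100 -ostype unmanaged
----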
243
244 OS type detection is done by testing for certain files inside the
245 container:
246
247 Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)
248
249 Debian:: test /etc/debian_version
250
251 Fedora:: test /etc/fedora-release
252
253 RedHat or CentOS:: test /etc/redhat-release
254
255 ArchLinux:: test /etc/arch-release
256
257 Alpine:: test /etc/alpine-release
258
259 Gentoo:: test /etc/gentoo-release
260
261 NOTE: Container start fails if the configured `ostype` differs from the auto
262 detected type.
263
264
265 Options
266 ~~~~~~~
267
268 include::pct.conf.5-opts.adoc[]
269
270
271 Container Images
272 ----------------
273
274 Container images, sometimes also referred to as ``templates'' or
275 ``appliances'', are `tar` archives which contain everything needed to run a
276 container. You can think of them as tidy container backups. Like most
277 modern container toolkits, `pct` uses those images when you create a
278 new container, for example:
279
280 pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz
281
282 {pve} itself ships a set of basic templates for most common
283 operating systems, and you can download them using the `pveam` (short
284 for {pve} Appliance Manager) command line utility. You can also
285 download https://www.turnkeylinux.org/[TurnKey Linux] containers using
286 that tool (or the graphical user interface).
287
288 Our image repositories contain a list of available images, and there
289 is a cron job run each day to download that list. You can trigger that
290 update manually with:
291
292 pveam update
293
294 After that you can view the list of available images using:
295
296 pveam available
297
298 You can restrict this large list by specifying the `section` you are
299 interested in, for example basic `system` images:
300
301 .List available system images
302 ----
303 # pveam available --section system
304 system archlinux-base_2015-24-29-1_x86_64.tar.gz
305 system centos-7-default_20160205_amd64.tar.xz
306 system debian-6.0-standard_6.0-7_amd64.tar.gz
307 system debian-7.0-standard_7.0-3_amd64.tar.gz
308 system debian-8.0-standard_8.0-1_amd64.tar.gz
309 system ubuntu-12.04-standard_12.04-1_amd64.tar.gz
310 system ubuntu-14.04-standard_14.04-1_amd64.tar.gz
311 system ubuntu-15.04-standard_15.04-1_amd64.tar.gz
312 system ubuntu-15.10-standard_15.10-1_amd64.tar.gz
313 ----
314
315 Before you can use such a template, you need to download it to one
316 of your storages. You can simply use storage `local` for that
317 purpose. For clustered installations, it is preferred to use a shared
318 storage so that all nodes can access those images.
319
320 pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz
321
322 You are now ready to create containers using that image, and you can
323 list all downloaded images on storage `local` with:
324
325 ----
326 # pveam list local
327 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz 190.20MB
328 ----
329
330 The above command shows you the full {pve} volume identifiers. They include
331 the storage name, and most other {pve} commands can use them. For
332 example you can delete that image later with:
333
334 pveam remove local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz
335
336
337 Container Storage
338 -----------------
339
340 Traditional containers use a very simple storage model, only allowing
341 a single mount point, the root file system. This was further
342 restricted to specific file system types like `ext4` and `nfs`.
343 Additional mounts are often done by user provided scripts. This turned
344 out to be complex and error prone, so we try to avoid that now.
345
346 Our new LXC based container model is more flexible regarding
347 storage. First, you can have more than a single mount point. This
348 allows you to choose a suitable storage for each application. For
349 example, you can use a relatively slow (and thus cheap) storage for
350 the container root file system. Then you can use a second mount point
351 to mount a very fast, distributed storage for your database
352 application.
353
354 The second big improvement is that you can use any storage type
355 supported by the {pve} storage library. That means that you can store
356 your containers on local `lvmthin` or `zfs`, shared `iSCSI` storage,
357 or even on distributed storage systems like `ceph`. It also enables us
358 to use advanced storage features like snapshots and clones. `vzdump`
359 can also use the snapshot feature to provide consistent container
360 backups.
361
362 Last but not least, you can also mount local devices directly, or
363 mount local directories using bind mounts. That way you can access
364 local storage inside containers with zero overhead. Such bind mounts
365 also provide an easy way to share data between different containers.
366
367
368 Mount Points
369 ~~~~~~~~~~~~
370
371 The root mount point is configured with the `rootfs` property, and you can
372 configure up to 10 additional mount points. The corresponding options
373 are called `mp0` to `mp9`, and they can contain the following settings:
374
375 include::pct-mountpoint-opts.adoc[]
376
377 Currently there are basically three types of mount points: storage backed
378 mount points, bind mounts and device mounts.
379
380 .Typical container `rootfs` configuration
381 ----
382 rootfs: thin1:base-100-disk-1,size=8G
383 ----
384
385
386 Storage Backed Mount Points
387 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
388
389 Storage backed mount points are managed by the {pve} storage subsystem and come
390 in three different flavors:
391
392 - Image based: these are raw images containing a single ext4 formatted file
393 system.
394 - ZFS subvolumes: these are technically bind mounts, but with managed storage,
395 and thus allow resizing and snapshotting.
396 - Directories: passing `size=0` triggers a special case where instead of a raw
397 image a directory is created.
398
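For example, the following command allocates a new 4 GB volume and mounts it at
`/srv/data` inside container `100` (the storage name `local-lvm`, the size and
the paths are just placeholders):

----
# pct set 100 -mp0 local-lvm:4,mp=/srv/data
----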
399
400 Bind Mount Points
401 ^^^^^^^^^^^^^^^^^
402
403 Bind mounts allow you to access arbitrary directories from your Proxmox VE host
404 inside a container. Some potential use cases are:
405
406 - Accessing your home directory in the guest
407 - Accessing a USB device directory in the guest
408 - Accessing an NFS mount from the host in the guest
409
410 Bind mounts are not managed by the {pve} storage subsystem, so you
411 cannot make snapshots or deal with quotas from inside the container. With
412 unprivileged containers you might run into permission problems caused by the
413 user mapping, and you cannot use ACLs.
414
415 NOTE: The contents of bind mount points are not backed up when using `vzdump`.
416
417 WARNING: For security reasons, bind mounts should only be established
418 using source directories especially reserved for this purpose, e.g., a
419 directory hierarchy under `/mnt/bindmounts`. Never bind mount system
420 directories like `/`, `/var` or `/etc` into a container - this poses a
421 great security risk.
422
423 NOTE: The bind mount source path must not contain any symlinks.
424
425 For example, to make the directory `/mnt/bindmounts/shared` accessible in the
426 container with ID `100` under the path `/shared`, use a configuration line like
427 `mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
428 Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
429 achieve the same result.
430
431
432 Device Mount Points
433 ^^^^^^^^^^^^^^^^^^^
434
435 Device mount points allow you to mount block devices of the host directly into the
436 container. Similar to bind mounts, device mounts are not managed by {PVE}'s
437 storage subsystem, but the `quota` and `acl` options will be honored.
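For example, a device mount point could look like this in the configuration file
(the device path and the target path are just placeholders):

----
mp0: /dev/sdb1,mp=/mnt/device
----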
438
439 NOTE: Device mount points should only be used under special circumstances. In
440 most cases a storage backed mount point offers the same performance and a lot
441 more features.
442
443 NOTE: The contents of device mount points are not backed up when using `vzdump`.
444
445
446 FUSE Mounts
447 ~~~~~~~~~~~
448
449 WARNING: Because of existing issues in the Linux kernel's freezer
450 subsystem, the usage of FUSE mounts inside a container is strongly
451 advised against, as containers need to be frozen for suspend or
452 snapshot mode backups.
453
454 If FUSE mounts cannot be replaced by other mounting mechanisms or storage
455 technologies, it is possible to establish the FUSE mount on the Proxmox host
456 and use a bind mount point to make it accessible inside the container.
457
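A sketch of this workaround, assuming an SSHFS share mounted on the host (the
remote host, the paths and the container ID are just placeholders):

----
# sshfs user@fileserver:/export /mnt/bindmounts/sshfs
# pct set 100 -mp0 /mnt/bindmounts/sshfs,mp=/mnt/sshfs
----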
458
459 Using Quotas Inside Containers
460 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
461
462 Quotas allow you to set limits inside a container for the amount of disk
463 space that each user can use. This only works on ext4 image based
464 storage types and currently does not work with unprivileged
465 containers.
466
467 Activating the `quota` option causes the following mount options to be
468 used for a mount point:
469 `usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`
470
471 This allows quotas to be used like you would on any other system. You
472 can initialize the `/aquota.user` and `/aquota.group` files by running
473
474 ----
475 quotacheck -cmug /
476 quotaon /
477 ----
478
479 and edit the quotas via the `edquota` command. Refer to the documentation
480 of the distribution running inside the container for details.
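For example, to edit the quota of a single user inside the container (the user
name is just a placeholder):

----
# edquota -u www-data
----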
481
482 NOTE: You need to run the above commands for every mount point by passing
483 the mount point's path instead of just `/`.
484
485
486 Using ACLs Inside Containers
487 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
488
489 The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside containers.
490 ACLs allow you to set more fine-grained file permissions than the traditional
491 user/group/others model.
492
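For example, to grant an additional user read/write access to a file inside the
container and verify the result (the user name and path are just placeholders):

----
# setfacl -m u:www-data:rw /srv/data/report.txt
# getfacl /srv/data/report.txt
----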
493
494 Container Network
495 -----------------
496
497 You can configure up to 10 network interfaces for a single
498 container. The corresponding options are called `net0` to `net9`, and
499 they can contain the following settings:
500
501 include::pct-network-opts.adoc[]
502
503
504 Backup and Restore
505 ------------------
506
507
508 Container Backup
509 ~~~~~~~~~~~~~~~~
510
511 It is possible to use the `vzdump` tool for container backup. Please
512 refer to the `vzdump` manual page for details.
513
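For example, a snapshot mode backup of container `100` to the storage `local`
could look like this (the storage name and mode are just examples; see the
`vzdump` manual page for all options):

----
# vzdump 100 -mode snapshot -storage local
----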
514
515 Restoring Container Backups
516 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
517
518 Restoring container backups made with `vzdump` is possible using the
519 `pct restore` command. By default, `pct restore` will attempt to restore as much
520 of the backed up container configuration as possible. It is possible to override
521 the backed up configuration by manually setting container options on the command
522 line (see the `pct` manual page for details).
523
524 NOTE: `pvesm extractconfig` can be used to view the backed up configuration
525 contained in a vzdump archive.
526
527 There are two basic restore modes, only differing by their handling of mount
528 points:
529
530
531 ``Simple'' Restore Mode
532 ^^^^^^^^^^^^^^^^^^^^^^^
533
534 If neither the `rootfs` parameter nor any of the optional `mpX` parameters
535 are explicitly set, the mount point configuration from the backed up
536 configuration file is restored using the following steps:
537
538 . Extract mount points and their options from backup
539 . Create volumes for storage backed mount points (on storage provided with the
540 `storage` parameter, or default local storage if unset)
541 . Extract files from backup archive
542 . Add bind and device mount points to restored configuration (limited to root user)
543
544 NOTE: Since bind and device mount points are never backed up, no files are
545 restored in the last step, but only the configuration options. The assumption
546 is that such mount points are either backed up with another mechanism (e.g.,
547 NFS space that is bind mounted into many containers), or not intended to be
548 backed up at all.
549
550 This simple mode is also used by the container restore operations in the web
551 interface.
552
553
554 ``Advanced'' Restore Mode
555 ^^^^^^^^^^^^^^^^^^^^^^^^^
556
557 By setting the `rootfs` parameter (and optionally, any combination of `mpX`
558 parameters), the `pct restore` command is automatically switched into an
559 advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
560 configuration options contained in the backup archive, and instead only
561 uses the options explicitly provided as parameters.
562
563 This mode allows flexible configuration of mount point settings at restore time,
564 for example:
565
566 * Set target storages, volume sizes and other options for each mount point
567 individually
568 * Redistribute backed up files according to new mount point scheme
569 * Restore to device and/or bind mount points (limited to root user)
570
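A sketch of such an advanced restore, placing the root file system on a
different storage and adding an extra mount point (the archive name, CT ID and
storage names are just placeholders):

----
# pct restore 600 local:backup/vzdump-lxc-600-2016_03_02-02_31_03.tar \
    -rootfs local-lvm:8 -mp0 local-lvm:4,mp=/srv/data
----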
571
572 Managing Containers with `pct`
573 ------------------------------
574
575 `pct` is the tool to manage Linux Containers on {pve}. You can create
576 and destroy containers, and control execution (start, stop, migrate,
577 ...). You can use `pct` to set parameters in the associated config file,
578 like network configuration or memory limits.
579
580
581 CLI Usage Examples
582 ~~~~~~~~~~~~~~~~~~
583
584 Create a container based on a Debian template (provided you have
585 already downloaded the template via the web interface)
586
587 pct create 100 /var/lib/vz/template/cache/debian-8.0-standard_8.0-1_amd64.tar.gz
588
589 Start container 100
590
591 pct start 100
592
593 Start a login session via getty
594
595 pct console 100
596
597 Enter the LXC namespace and run a shell as root user
598
599 pct enter 100
600
601 Display the configuration
602
603 pct config 100
604
605 Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
606 and set the address and gateway, while the container is running
607
608 pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
609
610 Reduce the memory of the container to 512MB
611
612 pct set 100 -memory 512
613
614
615 Obtaining Debugging Logs
616 ~~~~~~~~~~~~~~~~~~~~~~~~
617
618 In case `pct start` is unable to start a specific container, it might be
619 helpful to collect debugging output by running `lxc-start` (replace `ID` with
620 the container's ID):
621
622 lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
623
624 This command will attempt to start the container in foreground mode. To stop the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.
625
626 The collected debug log is written to `/tmp/lxc-ID.log`.
627
628 NOTE: If you have changed the container's configuration since the last start
629 attempt with `pct start`, you need to run `pct start` at least once to also
630 update the configuration used by `lxc-start`.
631
632
633 Files
634 -----
635
636 `/etc/pve/lxc/<CTID>.conf`::
637
638 Configuration file for the container '<CTID>'.
639
640
641 Container Advantages
642 --------------------
643
644 * Simple, and fully integrated into {pve}. Setup looks similar to a normal
645 VM setup.
646
647 ** Storage (ZFS, LVM, NFS, Ceph, ...)
648
649 ** Network
650
651 ** Authentication
652
653 ** Cluster
654
655 * Fast: minimal overhead, as fast as bare metal
656
657 * High density (perfect for idle workloads)
658
659 * REST API
660
661 * Direct hardware access
662
663
664 Technology Overview
665 -------------------
666
667 * Integrated into {pve} graphical user interface (GUI)
668
669 * LXC (https://linuxcontainers.org/)
670
671 * lxcfs to provide containerized /proc file system
672
673 * AppArmor
674
675 * CRIU: for live migration (planned)
676
677 * We use the latest available kernels (4.4.X)
678
679 * Image based deployment (templates)
680
681 * Container setup from host (network, DNS, storage, ...)
682
683
684 ifdef::manvolnum[]
685 include::pve-copyright.adoc[]
686 endif::manvolnum[]
687
688
689
690
691
692
693