1 ifdef::manvolnum[]
2 pct(1)
3 ======
4 include::attributes.txt[]
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pct - Tool to manage Linux Containers (LXC) on Proxmox VE
11
12
13 SYNOPSIS
14 --------
15
16 include::pct.1-synopsis.adoc[]
17
18 DESCRIPTION
19 -----------
20 endif::manvolnum[]
21
22 ifndef::manvolnum[]
23 Proxmox Container Toolkit
24 =========================
25 include::attributes.txt[]
26 endif::manvolnum[]
27 ifdef::wiki[]
28 :pve-toplevel:
29 :title: Linux Container
30 endif::wiki[]
31
32 Containers are a lightweight alternative to fully virtualized
33 VMs. Instead of emulating a complete Operating System (OS), containers
34 simply use the OS of the host they run on. This implies that all
35 containers use the same kernel, and that they can access resources
36 from the host directly.
37
38 This is great because containers do not waste CPU power or memory on
39 kernel emulation. Container run-time costs are close to zero and
40 usually negligible. But there are also some drawbacks you need to
41 consider:
42
43 * You can only run Linux-based operating systems inside containers; it is
44 not possible to run FreeBSD or MS Windows inside them.
45
46 * For security reasons, access to host resources needs to be
47 restricted. This is done with AppArmor, SecComp filters and other
48 kernel features. Be prepared that some syscalls are not allowed
49 inside containers.
50
51 {pve} uses https://linuxcontainers.org/[LXC] as its underlying container
52 technology. We consider LXC a low-level library, which provides
53 countless options. It would be too difficult to use those tools
54 directly. Instead, we provide a small wrapper called `pct`, the
55 "Proxmox Container Toolkit".
56
57 The toolkit is tightly coupled with {pve}. That means that it is aware
58 of the cluster setup, and it can use the same network and storage
59 resources as fully virtualized VMs. You can even use the {pve}
60 firewall, or manage containers using the HA framework.
61
62 Our primary goal is to offer an environment like the one you would get from a
63 VM, but without the additional overhead. We call this "System
64 Containers".
65
66 NOTE: If you want to run micro-containers (with docker, rkt, ...), it
67 is best to run them inside a VM.
68
69
70 Security Considerations
71 -----------------------
72
73 Containers use the same kernel as the host, so they expose a large attack
74 surface to malicious users. You should keep this in mind if you
75 provide containers to untrusted users. In general, fully
76 virtualized VMs provide better isolation.
77
78 The good news is that LXC uses many kernel security features like
79 AppArmor, CGroups and PID and user namespaces, which makes container
80 usage quite secure. We distinguish two types of containers:
81
82
83 Privileged Containers
84 ~~~~~~~~~~~~~~~~~~~~~
85
86 Security is achieved by dropping capabilities, using mandatory access
87 control (AppArmor), SecComp filters and namespaces. The LXC team
88 considers this kind of container unsafe, and will not consider
89 new container escape exploits to be security issues worthy of a CVE
90 and quick fix. So you should use this kind of container only inside a
91 trusted environment, or when no untrusted task is running as root in
92 the container.
93
94
95 Unprivileged Containers
96 ~~~~~~~~~~~~~~~~~~~~~~~
97
98 This kind of container uses a new kernel feature called user
99 namespaces. The root UID 0 inside the container is mapped to an
100 unprivileged user outside the container. This means that most security
101 issues (container escape, resource abuse, ...) in those containers
102 will affect a random unprivileged user, and so would be a generic
103 kernel security bug rather than an LXC issue. The LXC team thinks
104 unprivileged containers are safe by design.
105
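For example, you can create a container as unprivileged right away by passing
the `unprivileged` option to `pct create` (the container ID and template name
below are only placeholders, use an image you have actually downloaded):

 pct create 200 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz -unprivileged 1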
106
107 Configuration
108 -------------
109
110 The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
111 where `<CTID>` is the numeric ID of the given container. Like all
112 other files stored inside `/etc/pve/`, it gets automatically
113 replicated to all other cluster nodes.
114
115 NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
116 unique cluster-wide.
117
118 .Example Container Configuration
119 ----
120 ostype: debian
121 arch: amd64
122 hostname: www
123 memory: 512
124 swap: 512
125 net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
126 rootfs: local:107/vm-107-disk-1.raw,size=7G
127 ----
128
129 Those configuration files are simple text files, and you can edit them
130 using a normal text editor (`vi`, `nano`, ...). This is sometimes
131 useful to do small corrections, but keep in mind that you need to
132 restart the container to apply such changes.
133
134 For that reason, it is usually better to use the `pct` command to
135 generate and modify those files, or do the whole thing using the GUI.
136 Our toolkit is smart enough to instantaneously apply most changes to
137 running containers. This feature is called "hot plug", and there is no
138 need to restart the container in that case.
139
140
141 File Format
142 ~~~~~~~~~~~
143
144 Container configuration files use a simple colon separated key/value
145 format. Each line has the following format:
146
147 -----
148 # this is a comment
149 OPTION: value
150 -----
151
152 Blank lines in those files are ignored, and lines starting with a `#`
153 character are treated as comments and are also ignored.
154
155 It is possible to add low-level, LXC style configuration directly, for
156 example:
157
158 lxc.init_cmd: /sbin/my_own_init
159
160 or
161
162 lxc.init_cmd = /sbin/my_own_init
163
164 Those settings are directly passed to the LXC low-level tools.
165
166
167 Snapshots
168 ~~~~~~~~~
169
170 When you create a snapshot, `pct` stores the configuration at snapshot
171 time into a separate snapshot section within the same configuration
172 file. For example, after creating a snapshot called ``testsnapshot'',
173 your configuration file will look like this:
174
175 .Container configuration with snapshot
176 ----
177 memory: 512
178 swap: 512
179 parent: testsnapshot
180 ...
181
182 [testsnapshot]
183 memory: 512
184 swap: 512
185 snaptime: 1457170803
186 ...
187 ----
188
189 There are a few snapshot-related properties like `parent` and
190 `snaptime`. The `parent` property is used to store the parent/child
191 relationship between snapshots. `snaptime` is the snapshot creation
192 time stamp (Unix epoch).
193
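In practice you rarely edit the snapshot sections by hand; snapshots are
normally created and managed with `pct` (or the GUI). A short sketch, assuming
a container with ID `100` (see the `pct` manual page for all snapshot related
subcommands and options):

----
# pct snapshot 100 testsnapshot
# pct listsnapshot 100
# pct rollback 100 testsnapshot
----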
194
195 Guest Operating System Configuration
196 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
197
198 We normally try to detect the operating system type inside the
199 container, and then modify some files inside the container to make
200 them work as expected. Here is a short list of things we do at
201 container startup:
202
203 set /etc/hostname:: to set the container name
204
205 modify /etc/hosts:: to allow lookup of the local hostname
206
207 network setup:: pass the complete network setup to the container
208
209 configure DNS:: pass information about DNS servers
210
211 adapt the init system:: for example, fix the number of spawned getty processes
212
213 set the root password:: when creating a new container
214
215 rewrite ssh_host_keys:: so that each container has unique keys
216
217 randomize crontab:: so that cron does not start at the same time on all containers
218
219 Changes made by {PVE} are enclosed by comment markers:
220
221 ----
222 # --- BEGIN PVE ---
223 <data>
224 # --- END PVE ---
225 ----
226
227 Those markers will be inserted at a reasonable location in the
228 file. If such a section already exists, it will be updated in place
229 and will not be moved.
230
231 Modification of a file can be prevented by adding a `.pve-ignore.`
232 file for it. For instance, if the file `/etc/.pve-ignore.hosts`
233 exists then the `/etc/hosts` file will not be touched. This can be a
234 simple empty file created via:
235
236 # touch /etc/.pve-ignore.hosts
237
238 Most modifications are OS dependent, so they differ between different
239 distributions and versions. You can completely disable modifications
240 by manually setting the `ostype` to `unmanaged`.
241
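For example, the following disables all automatic modifications for container
`100`:

 pct set 100 -ostype unmanaged
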
242 OS type detection is done by testing for certain files inside the
243 container:
244
245 Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)
246
247 Debian:: test /etc/debian_version
248
249 Fedora:: test /etc/fedora-release
250
251 RedHat or CentOS:: test /etc/redhat-release
252
253 ArchLinux:: test /etc/arch-release
254
255 Alpine:: test /etc/alpine-release
256
257 Gentoo:: test /etc/gentoo-release
258
259 NOTE: Container start fails if the configured `ostype` differs from the
260 auto-detected type.
261
262
263 Options
264 ~~~~~~~
265
266 include::pct.conf.5-opts.adoc[]
267
268
269 Container Images
270 ----------------
271
272 Container images, sometimes also referred to as ``templates'' or
273 ``appliances'', are `tar` archives which contain everything needed to run a
274 container. You can think of them as tidy container backups. Like most
275 modern container toolkits, `pct` uses those images when you create a
276 new container, for example:
277
278 pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz
279
280 {pve} itself ships a set of basic templates for most common
281 operating systems, and you can download them using the `pveam` (short
282 for {pve} Appliance Manager) command line utility. You can also
283 download https://www.turnkeylinux.org/[TurnKey Linux] containers using
284 that tool (or the graphical user interface).
285
286 Our image repositories contain a list of available images, and a cron
287 job runs each day to download that list. You can trigger that
288 update manually with:
289
290 pveam update
291
292 After that you can view the list of available images using:
293
294 pveam available
295
296 You can restrict this large list by specifying the `section` you are
297 interested in, for example basic `system` images:
298
299 .List available system images
300 ----
301 # pveam available --section system
302 system archlinux-base_2015-24-29-1_x86_64.tar.gz
303 system centos-7-default_20160205_amd64.tar.xz
304 system debian-6.0-standard_6.0-7_amd64.tar.gz
305 system debian-7.0-standard_7.0-3_amd64.tar.gz
306 system debian-8.0-standard_8.0-1_amd64.tar.gz
307 system ubuntu-12.04-standard_12.04-1_amd64.tar.gz
308 system ubuntu-14.04-standard_14.04-1_amd64.tar.gz
309 system ubuntu-15.04-standard_15.04-1_amd64.tar.gz
310 system ubuntu-15.10-standard_15.10-1_amd64.tar.gz
311 ----
312
313 Before you can use such a template, you need to download it into one
314 of your storages. You can simply use storage `local` for that
315 purpose. For clustered installations, it is preferable to use a shared
316 storage so that all nodes can access those images.
317
318 pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz
319
320 You are now ready to create containers using that image, and you can
321 list all downloaded images on storage `local` with:
322
323 ----
324 # pveam list local
325 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz 190.20MB
326 ----
327
328 The above command shows you the full {pve} volume identifiers. They include
329 the storage name, and most other {pve} commands can use them. For
330 example, you can delete that image later with:
331
332 pveam remove local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz
333
334
335 Container Storage
336 -----------------
337
338 Traditional containers use a very simple storage model, only allowing
339 a single mount point, the root file system. This was further
340 restricted to specific file system types like `ext4` and `nfs`.
341 Additional mounts were often done by user-provided scripts. This turned
342 out to be complex and error-prone, so we try to avoid that now.
343
344 Our new LXC based container model is more flexible regarding
345 storage. First, you can have more than a single mount point. This
346 allows you to choose a suitable storage for each application. For
347 example, you can use a relatively slow (and thus cheap) storage for
348 the container root file system. Then you can use a second mount point
349 to mount a very fast, distributed storage for your database
350 application.
351
352 The second big improvement is that you can use any storage type
353 supported by the {pve} storage library. That means that you can store
354 your containers on local `lvmthin` or `zfs`, shared `iSCSI` storage,
355 or even on distributed storage systems like `ceph`. It also enables us
356 to use advanced storage features like snapshots and clones. `vzdump`
357 can also use the snapshot feature to provide consistent container
358 backups.
359
360 Last but not least, you can also mount local devices directly, or
361 mount local directories using bind mounts. That way you can access
362 local storage inside containers with zero overhead. Such bind mounts
363 also provide an easy way to share data between different containers.
364
365
366 Mount Points
367 ~~~~~~~~~~~~
368
369 The root mount point is configured with the `rootfs` property, and you can
370 configure up to 10 additional mount points. The corresponding options
371 are called `mp0` to `mp9`, and they can contain the following settings:
372
373 include::pct-mountpoint-opts.adoc[]
374
375 Currently there are basically three types of mount points: storage backed
376 mount points, bind mounts and device mounts.
377
378 .Typical container `rootfs` configuration
379 ----
380 rootfs: thin1:base-100-disk-1,size=8G
381 ----
382
383
384 Storage Backed Mount Points
385 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
386
387 Storage backed mount points are managed by the {pve} storage subsystem and come
388 in three different flavors:
389
390 - Image based: these are raw images containing a single ext4 formatted file
391 system.
392 - ZFS subvolumes: these are technically bind mounts, but with managed storage,
393 and thus allow resizing and snapshotting.
394 - Directories: passing `size=0` triggers a special case where instead of a raw
395 image a directory is created.
396
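As a sketch, the following allocates a new 8 GB image based volume on a
storage named `local-lvm` (storage name and size are only examples, adjust
them to your setup) and mounts it at `/mnt/data` inside container `100`:

 pct set 100 -mp0 local-lvm:8,mp=/mnt/data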
397
398 Bind Mount Points
399 ^^^^^^^^^^^^^^^^^
400
401 Bind mounts allow you to access arbitrary directories from your Proxmox VE host
402 inside a container. Some potential use cases are:
403
404 - Accessing your home directory in the guest
405 - Accessing a USB device directory in the guest
406 - Accessing an NFS mount from the host in the guest
407
408 Bind mounts are not managed by the storage subsystem, so you
409 cannot make snapshots or deal with quotas from inside the container. With
410 unprivileged containers you might run into permission problems caused by the
411 user mapping, and you cannot use ACLs.
412
413 NOTE: The contents of bind mount points are not backed up when using `vzdump`.
414
415 WARNING: For security reasons, bind mounts should only be established
416 using source directories especially reserved for this purpose, e.g., a
417 directory hierarchy under `/mnt/bindmounts`. Never bind mount system
418 directories like `/`, `/var` or `/etc` into a container - this poses a
419 great security risk.
420
421 NOTE: The bind mount source path must not contain any symlinks.
422
423 For example, to make the directory `/mnt/bindmounts/shared` accessible in the
424 container with ID `100` under the path `/shared`, use a configuration line like
425 `mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
426 Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
427 achieve the same result.
428
429
430 Device Mount Points
431 ^^^^^^^^^^^^^^^^^^^
432
433 Device mount points allow you to mount block devices of the host directly into
434 the container. Similar to bind mounts, device mounts are not managed by {PVE}'s
435 storage subsystem, but the `quota` and `acl` options will be honored.
436
437 NOTE: Device mount points should only be used under special circumstances. In
438 most cases a storage backed mount point offers the same performance and a lot
439 more features.
440
441 NOTE: The contents of device mount points are not backed up when using `vzdump`.
442
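As an illustration, assuming the host has a spare block device `/dev/sdb1`,
the following would mount it at `/mnt/device-data` inside container `100`:

 pct set 100 -mp0 /dev/sdb1,mp=/mnt/device-data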
443
444 FUSE Mounts
445 ~~~~~~~~~~~
446
447 WARNING: Because of existing issues in the Linux kernel's freezer
448 subsystem, the use of FUSE mounts inside a container is strongly
449 discouraged, as containers need to be frozen for suspend or
450 snapshot mode backups.
451
452 If FUSE mounts cannot be replaced by other mounting mechanisms or storage
453 technologies, it is possible to establish the FUSE mount on the Proxmox host
454 and use a bind mount point to make it accessible inside the container.
455
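A minimal sketch, assuming an SSHFS share and the `sshfs` FUSE helper
installed on the host (host name, export and paths are only placeholders):

----
# sshfs user@fileserver:/export /mnt/bindmounts/fuse-share
# pct set 100 -mp0 /mnt/bindmounts/fuse-share,mp=/mnt/share
----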
456
457 Using Quotas Inside Containers
458 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
459
460 Quotas allow you to set limits inside a container on the amount of disk
461 space that each user can use. This only works on ext4 image based
462 storage types and currently does not work with unprivileged
463 containers.
464
465 Activating the `quota` option causes the following mount options to be
466 used for a mount point:
467 `usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`
468
469 This allows quotas to be used like you would on any other system. You
470 can initialize the `/aquota.user` and `/aquota.group` files by running
471
472 ----
473 quotacheck -cmug /
474 quotaon /
475 ----
476
477 and edit the quotas via the `edquota` command. Refer to the documentation
478 of the distribution running inside the container for details.
479
480 NOTE: You need to run the above commands for every mount point by passing
481 the mount point's path instead of just `/`.
482
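For example, for an additional mount point mounted at `/mnt/data` inside the
container, and a hypothetical user `alice`, the commands would be:

----
# quotacheck -cmug /mnt/data
# quotaon /mnt/data
# edquota -u alice
----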
483
484 Using ACLs Inside Containers
485 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
486
487 The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside containers.
488 ACLs allow you to set more fine-grained file permissions than the traditional
489 user/group/others model.
490
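For example, to give an additional user read/write access to a directory
inside the container without changing its owner or group (user name and path
are just examples):

----
# setfacl -m u:www-data:rwX /srv/www
# getfacl /srv/www
----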
491
492 Container Network
493 -----------------
494
495 You can configure up to 10 network interfaces for a single
496 container. The corresponding options are called `net0` to `net9`, and
497 they can contain the following settings:
498
499 include::pct-network-opts.adoc[]
500
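For example, to add a DHCP configured interface `eth0`, bridged to the host
bridge `vmbr0` and protected by the {pve} firewall, to container `100`:

 pct set 100 -net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=1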
501
502 Backup and Restore
503 ------------------
504
505
506 Container Backup
507 ~~~~~~~~~~~~~~~~
508
509 It is possible to use the `vzdump` tool for container backup. Please
510 refer to the `vzdump` manual page for details.
511
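A typical invocation, which creates a snapshot mode backup of container `100`
on storage `local` (storage and compression choice are just examples), looks
like:

 vzdump 100 -mode snapshot -storage local -compress lzo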
512
513 Restoring Container Backups
514 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
515
516 Restoring container backups made with `vzdump` is possible using the
517 `pct restore` command. By default, `pct restore` will attempt to restore as much
518 of the backed up container configuration as possible. It is possible to override
519 the backed up configuration by manually setting container options on the command
520 line (see the `pct` manual page for details).
521
522 NOTE: `pvesm extractconfig` can be used to view the backed up configuration
523 contained in a vzdump archive.
524
525 There are two basic restore modes, only differing by their handling of mount
526 points:
527
528
529 ``Simple'' Restore Mode
530 ^^^^^^^^^^^^^^^^^^^^^^^
531
532 If neither the `rootfs` parameter nor any of the optional `mpX` parameters
533 are explicitly set, the mount point configuration from the backed up
534 configuration file is restored using the following steps:
535
536 . Extract mount points and their options from backup
537 . Create volumes for storage backed mount points (on storage provided with the
538 `storage` parameter, or default local storage if unset)
539 . Extract files from backup archive
540 . Add bind and device mount points to restored configuration (limited to root user)
541
542 NOTE: Since bind and device mount points are never backed up, no files are
543 restored in the last step, but only the configuration options. The assumption
544 is that such mount points are either backed up with another mechanism (e.g.,
545 NFS space that is bind mounted into many containers), or not intended to be
546 backed up at all.
547
548 This simple mode is also used by the container restore operations in the web
549 interface.
550
551
552 ``Advanced'' Restore Mode
553 ^^^^^^^^^^^^^^^^^^^^^^^^^
554
555 By setting the `rootfs` parameter (and optionally, any combination of `mpX`
556 parameters), the `pct restore` command is automatically switched into an
557 advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
558 configuration options contained in the backup archive, and instead only
559 uses the options explicitly provided as parameters.
560
561 This mode allows flexible configuration of mount point settings at restore time,
562 for example:
563
564 * Set target storages, volume sizes and other options for each mount point
565 individually
566 * Redistribute backed up files according to new mount point scheme
567 * Restore to device and/or bind mount points (limited to root user)
568
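As a sketch, the following restores a backup archive into a new container
`601`, placing the root file system on a 10 GB volume and the database data on
a separate 4 GB volume on storage `local-lvm` (archive name, storage and sizes
are only placeholders):

----
# pct restore 601 local:backup/vzdump-lxc-100-2016_01_01-12_00_00.tar.gz \
    -rootfs local-lvm:10 \
    -mp0 local-lvm:4,mp=/var/lib/mysql
----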
569
570 Managing Containers with `pct`
571 ------------------------------
572
573 `pct` is the tool to manage Linux Containers on {pve}. You can create
574 and destroy containers, and control their execution (start, stop, migrate,
575 ...). You can use `pct` to set parameters in the associated config file,
576 such as network configuration or memory limits.
577
578
579 CLI Usage Examples
580 ~~~~~~~~~~~~~~~~~~
581
582 Create a container based on a Debian template (provided you have
583 already downloaded the template via the web interface)
584
585 pct create 100 /var/lib/vz/template/cache/debian-8.0-standard_8.0-1_amd64.tar.gz
586
587 Start container 100
588
589 pct start 100
590
591 Start a login session via getty
592
593 pct console 100
594
595 Enter the LXC namespace and run a shell as root user
596
597 pct enter 100
598
599 Display the configuration
600
601 pct config 100
602
603 Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
604 set the address and gateway, while it's running
605
606 pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
607
608 Reduce the memory of the container to 512 MB
609
610 pct set 100 -memory 512
611
612
613 Obtaining Debugging Logs
614 ~~~~~~~~~~~~~~~~~~~~~~~~
615
616 In case `pct start` is unable to start a specific container, it might be
617 helpful to collect debugging output by running `lxc-start` (replace `ID` with
618 the container's ID):
619
620 lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
621
622 This command will attempt to start the container in foreground mode. To stop the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.
623
624 The collected debug log is written to `/tmp/lxc-ID.log`.
625
626 NOTE: If you have changed the container's configuration since the last start
627 attempt with `pct start`, you need to run `pct start` at least once to also
628 update the configuration used by `lxc-start`.
629
630
631 Files
632 ------
633
634 `/etc/pve/lxc/<CTID>.conf`::
635
636 Configuration file for the container '<CTID>'.
637
638
639 Container Advantages
640 --------------------
641
642 * Simple, and fully integrated into {pve}. Setup looks similar to a normal
643 VM setup.
644
645 ** Storage (ZFS, LVM, NFS, Ceph, ...)
646
647 ** Network
648
649 ** Authentication
650
651 ** Cluster
652
653 * Fast: minimal overhead, as fast as bare metal
654
655 * High density (perfect for idle workloads)
656
657 * REST API
658
659 * Direct hardware access
660
661
662 Technology Overview
663 -------------------
664
665 * Integrated into {pve} graphical user interface (GUI)
666
667 * LXC (https://linuxcontainers.org/)
668
669 * lxcfs to provide containerized /proc file system
670
671 * AppArmor
672
673 * CRIU: for live migration (planned)
674
675 * We use the latest available kernels (4.4.X)
676
677 * Image based deployment (templates)
678
679 * Container setup from host (network, DNS, storage, ...)
680
681
682 ifdef::manvolnum[]
683 include::pve-copyright.adoc[]
684 endif::manvolnum[]
685
686
687
688
689
690
691