ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE


SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
include::attributes.txt[]
endif::manvolnum[]


Containers are a lightweight alternative to fully virtualized
VMs. Instead of emulating a complete Operating System (OS), containers
simply use the OS of the host they run on. This implies that all
containers use the same kernel, and that they can access resources
from the host directly.

This is great because containers waste neither CPU power nor memory on
kernel emulation. Container run-time costs are close to zero and
usually negligible. But there are also some drawbacks you need to
consider:

* You can only run Linux-based operating systems inside containers,
i.e. it is not possible to run FreeBSD or MS Windows inside.

* For security reasons, access to host resources needs to be
restricted. This is done with AppArmor, SecComp filters and other
kernel features. Be prepared that some syscalls are not allowed
inside containers.

{pve} uses https://linuxcontainers.org/[LXC] as its underlying container
technology. We consider LXC a low-level library which provides
countless options. It would be too difficult to use those tools
directly. Instead, we provide a small wrapper called `pct`, the
"Proxmox Container Toolkit".

The toolkit is tightly coupled with {pve}. That means that it is aware
of the cluster setup, and it can use the same network and storage
resources as fully virtualized VMs. You can even use the {pve}
firewall, or manage containers using the HA framework.

Our primary goal is to offer an environment as one would get from a
VM, but without the additional overhead. We call this "System
Containers".

NOTE: If you want to run micro-containers (with docker, rkt, ...), it
is best to run them inside a VM.


Security Considerations
-----------------------

Containers use the same kernel as the host, so they present a large
attack surface for malicious users. You should consider this fact if
you provide containers to completely untrusted users. In general, fully
virtualized VMs provide better isolation.

The good news is that LXC uses many kernel security features like
AppArmor, CGroups and PID and user namespaces, which makes container
usage quite secure. We distinguish two types of containers:


Privileged Containers
~~~~~~~~~~~~~~~~~~~~~

Security is achieved by dropping capabilities, using mandatory access
control (AppArmor), SecComp filters and namespaces. The LXC team
considers this kind of container unsafe, and they will not consider
new container escape exploits to be security issues worthy of a CVE
and quick fix. So you should use this kind of container only inside a
trusted environment, or when no untrusted task is running as root in
the container.


Unprivileged Containers
~~~~~~~~~~~~~~~~~~~~~~~

This kind of container uses a new kernel feature called user
namespaces. The root UID 0 inside the container is mapped to an
unprivileged user outside the container. This means that most security
issues (container escape, resource abuse, ...) in those containers
will affect a random unprivileged user, and so would be a generic
kernel security bug rather than an LXC issue. The LXC team thinks
unprivileged containers are safe by design.


Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
where `<CTID>` is the numeric ID of the given container. Like all
other files stored inside `/etc/pve/`, it gets automatically
replicated to all other cluster nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful for making small corrections, but keep in mind that you need to
restart the container to apply such changes.

For that reason, it is usually better to use the `pct` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running containers. This feature is called "hot plug", and there is no
need to restart the container in that case.
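
For example, the following sketch raises the memory limit of the
container shown above (assuming its CTID is `107`); if the container is
running, the change is hot plugged and takes effect immediately:

 pct set 107 -memory 1024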


File Format
~~~~~~~~~~~

Container configuration files use a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for
example:

 lxc.init_cmd: /sbin/my_own_init

or

 lxc.init_cmd = /sbin/my_own_init

Those settings are directly passed to the LXC low-level tools.


Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.Container configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).
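
Snapshots are usually created and managed with `pct` rather than by
editing the configuration file. A minimal sketch, assuming a container
with CTID `107` and the snapshot subcommands available in this `pct`
version:

----
pct snapshot 107 testsnapshot    # take a snapshot
pct listsnapshot 107             # list all snapshots of the container
pct rollback 107 testsnapshot    # roll back to the snapshot later
----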


Guest Operating System Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We normally try to detect the operating system type inside the
container, and then modify some files inside the container to make
them work as expected. Here is a short list of things we do at
container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

Changes made by {PVE} are enclosed by comment markers:

----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the
file. If such a section already exists, it will be updated in place
and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.`
file for it. For instance, if the file `/etc/.pve-ignore.hosts`
exists then the `/etc/hosts` file will not be touched. This can be a
simple empty file created via:

 # touch /etc/.pve-ignore.hosts

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications
by manually setting the `ostype` to `unmanaged`.
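
For example, a quick sketch of switching a container to unmanaged mode
(`107` is an illustrative CTID):

 pct set 107 -ostype unmanaged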

OS type detection is done by testing for certain files inside the
container:

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the auto
detected type.


Options
~~~~~~~

include::pct.conf.5-opts.adoc[]


Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything to run a
container. You can think of them as tidy container backups. Like most
modern container toolkits, `pct` uses those images when you create a
new container, for example:

 pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz

{pve} itself ships a set of basic templates for the most common
operating systems, and you can download them using the `pveam` (short
for {pve} Appliance Manager) command line utility. You can also
download https://www.turnkeylinux.org/[TurnKey Linux] containers using
that tool (or the graphical user interface).

Our image repositories contain a list of available images, and a cron
job downloads that list once a day. You can trigger that update
manually with:

 pveam update

After that you can view the list of available images using:

 pveam available

You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system          archlinux-base_2015-24-29-1_x86_64.tar.gz
system          centos-7-default_20160205_amd64.tar.xz
system          debian-6.0-standard_6.0-7_amd64.tar.gz
system          debian-7.0-standard_7.0-3_amd64.tar.gz
system          debian-8.0-standard_8.0-1_amd64.tar.gz
system          ubuntu-12.04-standard_12.04-1_amd64.tar.gz
system          ubuntu-14.04-standard_14.04-1_amd64.tar.gz
system          ubuntu-15.04-standard_15.04-1_amd64.tar.gz
system          ubuntu-15.10-standard_15.10-1_amd64.tar.gz
----

Before you can use such a template, you need to download it into one
of your storages. You can simply use storage `local` for that
purpose. For clustered installations, it is preferred to use a shared
storage so that all nodes can access those images.

 pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz

You are now ready to create containers using that image, and you can
list all downloaded images on storage `local` with:

----
# pveam list local
local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz  190.20MB
----

The above command shows you the full {pve} volume identifiers. They include
the storage name, and most other {pve} commands can use them. For
example you can delete that image later with:

 pveam remove local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz


Container Storage
-----------------

Traditional containers use a very simple storage model, only allowing
a single mount point, the root file system. This was further
restricted to specific file system types like `ext4` and `nfs`.
Additional mounts were often added by user provided scripts. This turned
out to be complex and error prone, so we try to avoid that now.

Our new LXC based container model is more flexible regarding
storage. First, you can have more than a single mount point. This
allows you to choose a suitable storage for each application. For
example, you can use a relatively slow (and thus cheap) storage for
the container root file system. Then you can use a second mount point
to mount a very fast, distributed storage for your database
application.

The second big improvement is that you can use any storage type
supported by the {pve} storage library. That means that you can store
your containers on local `lvmthin` or `zfs`, shared `iSCSI` storage,
or even on distributed storage systems like `ceph`. It also enables us
to use advanced storage features like snapshots and clones. `vzdump`
can also use the snapshot feature to provide consistent container
backups.

Last but not least, you can also mount local devices directly, or
mount local directories using bind mounts. That way you can access
local storage inside containers with zero overhead. Such bind mounts
also provide an easy way to share data between different containers.


Mount Points
~~~~~~~~~~~~

The root mount point is configured with the `rootfs` property, and you can
configure up to 10 additional mount points. The corresponding options
are called `mp0` to `mp9`, and they can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are basically three types of mount points: storage backed
mount points, bind mounts and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----


Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
  system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
  and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
  image a directory is created.
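
For example, the following sketch adds a new 4 GB volume, allocated on
a hypothetical storage named `local-lvm` and mounted at `/srv/data`
inside container `107`:

 pct set 107 -mp0 local-lvm:4,mp=/srv/data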


Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are not managed by the storage subsystem, so you cannot make
snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping, and you cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established
using source directories especially reserved for this purpose, e.g., a
directory hierarchy under `/mnt/bindmounts`. Never bind mount system
directories like `/`, `/var` or `/etc` into a container - this poses a
great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.


Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow you to mount block devices of the host directly into
the container. Similar to bind mounts, device mounts are not managed by
{PVE}'s storage subsystem, but the `quota` and `acl` options will be honored.

NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using `vzdump`.
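
For example, a sketch mounting the host block device `/dev/sdb1` (an
illustrative device name) at `/mnt/data` inside container `107`:

 pct set 107 -mp0 /dev/sdb1,mp=/mnt/data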


FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer
subsystem the usage of FUSE mounts inside a container is strongly
advised against, as containers need to be frozen for suspend or
snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.
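
A minimal sketch of this workaround, assuming an SSHFS mount as the
FUSE file system (host, export path and CTID are illustrative):

----
# on the Proxmox host: establish the FUSE mount
mkdir -p /mnt/bindmounts/remote
sshfs user@fileserver:/export /mnt/bindmounts/remote
# then expose it to container 107 via a bind mount point
pct set 107 -mp1 /mnt/bindmounts/remote,mp=/mnt/remote
----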


Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk
space that each user can use. This only works on ext4 image based
storage types and currently does not work with unprivileged
containers.

Activating the `quota` option causes the following mount options to be
used for a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like you would on any other system. You
can initialize the `/aquota.user` and `/aquota.group` files by running

----
quotacheck -cmug /
quotaon /
----

and edit the quotas via the `edquota` command. Refer to the documentation
of the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing
the mount point's path instead of just `/`.
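
For example, a sketch setting a 100 MB soft and a 120 MB hard block
limit for a hypothetical user `www-data` non-interactively with
`setquota` (limits are given in 1 KiB blocks, inode limits left at 0):

 setquota -u www-data 102400 122880 0 0 /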


Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more fine-grained file permissions than the
traditional user/group/others model.
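
For example, a sketch granting the hypothetical user `backup` read and
execute access to a directory inside the container:

 setfacl -m u:backup:rx /srv/data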


Container Network
-----------------

You can configure up to 10 network interfaces for a single
container. The corresponding options are called `net0` to `net9`, and
they can contain the following settings:

include::pct-network-opts.adoc[]
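
For example, a typical `net0` entry in a container configuration might
look like this (addresses and bridge name are illustrative):

----
net0: name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1,type=veth
----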


Backup and Restore
------------------


Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please
refer to the `vzdump` manual page for details.
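
For example, a sketch backing up container `107` in snapshot mode to
storage `local`:

 vzdump 107 --storage local --mode snapshot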


Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the
`pct restore` command. By default, `pct restore` will attempt to restore as much
of the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the command
line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:


``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters
are explicitly set, the mount point configuration from the backed up
configuration file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
  `storage` parameter, or default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.
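
For example, a sketch of a simple restore (the archive name is
illustrative, following the usual `vzdump` naming scheme):

 pct restore 107 local:backup/vzdump-lxc-107-2016_03_02-02_31_03.tar.gz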


``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only
uses the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore
time, for example (see the sketch after this list):

* Set target storages, volume sizes and other options for each mount point
  individually
* Redistribute backed up files according to new mount point scheme
* Restore to device and/or bind mount points (limited to root user)
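
A hedged sketch, assuming a hypothetical archive and a storage named
`local-lvm`: restore container `107`, but create a new 8 GB root volume
and a 4 GB `mp0` volume instead of the mount point layout from the
backup:

----
pct restore 107 local:backup/vzdump-lxc-107-2016_03_02-02_31_03.tar.gz \
    -rootfs local-lvm:8 -mp0 local-lvm:4,mp=/srv/data
----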


Managing Containers with `pct`
------------------------------

`pct` is the tool to manage Linux Containers on {pve}. You can create
and destroy containers, and control execution (start, stop, migrate,
...). You can use `pct` to set parameters in the associated config file,
like network configuration or memory limits.


CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have
already downloaded the template via the web interface)

 pct create 100 /var/lib/vz/template/cache/debian-8.0-standard_8.0-1_amd64.tar.gz

Start container 100

 pct start 100

Start a login session via getty

 pct console 100

Enter the LXC namespace and run a shell as root user

 pct enter 100

Display the configuration

 pct config 100

Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
and set the address and gateway, while the container is running

 pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1

Reduce the memory of the container to 512 MB

 pct set 100 -memory 512


Files
-----

`/etc/pve/lxc/<CTID>.conf`::

Configuration file for the container '<CTID>'.


Container Advantages
--------------------

* Simple, and fully integrated into {pve}. Setup looks similar to that of a
  normal VM.

** Storage (ZFS, LVM, NFS, Ceph, ...)

** Network

** Authentication

** Cluster

* Fast: minimal overhead, as fast as bare metal

* High density (perfect for idle workloads)

* REST API

* Direct hardware access


Technology Overview
-------------------

* Integrated into the {pve} graphical user interface (GUI)

* LXC (https://linuxcontainers.org/)

* lxcfs to provide a containerized /proc file system

* AppArmor

* CRIU: for live migration (planned)

* We use the latest available kernels (4.4.X)

* Image based deployment (templates)

* Container setup from host (network, DNS, storage, ...)


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]