ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE


SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
include::attributes.txt[]
endif::manvolnum[]


Containers are a lightweight alternative to fully virtualized
VMs. Instead of emulating a complete Operating System (OS), containers
simply use the OS of the host they run on. This implies that all
containers use the same kernel, and that they can access resources
from the host directly.

This is great because containers do not waste CPU power or memory on
kernel emulation. Container run-time costs are close to zero and
usually negligible. But there are also some drawbacks you need to
consider:

* You can only run Linux-based operating systems inside containers, i.e. it
is not possible to run FreeBSD or MS Windows inside.

* For security reasons, access to host resources needs to be
restricted. This is done with AppArmor, SecComp filters and other
kernel features. Be aware that some syscalls are not allowed
inside containers.

{pve} uses https://linuxcontainers.org/[LXC] as its underlying container
technology. We consider LXC a low-level library, which provides
countless options. Using it directly would be too complex, so we
provide a small wrapper called `pct`, the
"Proxmox Container Toolkit".

The toolkit is tightly coupled with {pve}. That means that it is aware
of the cluster setup, and it can use the same network and storage
resources as fully virtualized VMs. You can even use the {pve}
firewall, or manage containers using the HA framework.

Our primary goal is to offer an environment as one would get from a
VM, but without the additional overhead. We call this "System
Containers".

NOTE: If you want to run micro-containers (with docker, rkt, ...), it
is best to run them inside a VM.


Security Considerations
-----------------------

Containers use the same kernel as the host, so there is a large attack
surface for malicious users. You should consider this fact if you
provide containers to untrusted users. In general, fully
virtualized VMs provide better isolation.

The good news is that LXC uses many kernel security features like
AppArmor, CGroups and PID and user namespaces, which makes container
usage quite secure. We distinguish two types of containers:

Privileged containers
~~~~~~~~~~~~~~~~~~~~~

Security is done by dropping capabilities, using mandatory access
control (AppArmor), SecComp filters and namespaces. The LXC team
considers this kind of container unsafe, and they will not consider
new container escape exploits to be security issues worthy of a CVE
and quick fix. So you should only use this kind of container inside a
trusted environment, or when no untrusted task is running as root in
the container.

Unprivileged containers
~~~~~~~~~~~~~~~~~~~~~~~

This kind of container uses a new kernel feature called user
namespaces. The root uid 0 inside the container is mapped to an
unprivileged user outside the container. This means that most security
issues (container escape, resource abuse, ...) in those containers
will affect a random unprivileged user, and so would be a generic
kernel security bug rather than an LXC issue. The LXC team thinks
unprivileged containers are safe by design.


Configuration
-------------

The '/etc/pve/lxc/<CTID>.conf' file stores container configuration,
where '<CTID>' is the numeric ID of the given container. Like all
other files stored inside '/etc/pve/', they get automatically
replicated to all other cluster nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor ('vi', 'nano', ...). This is sometimes
useful to do small corrections, but keep in mind that you need to
restart the container to apply such changes.

For that reason, it is usually better to use the 'pct' command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running containers. This feature is called "hot plug", and there is no
need to restart the container in that case.

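For example, the memory and swap limits of a running container can usually be
changed on the fly with 'pct set' (a sketch, assuming a container with ID 100
exists):

----
pct set 100 -memory 1024 -swap 1024
----
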
File Format
~~~~~~~~~~~

Container configuration files use a simple colon separated key/value
format. Each line has the following format:

 # this is a comment
 OPTION: value

Blank lines in those files are ignored, and lines starting with a '#'
character are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for
example:

 lxc.init_cmd: /sbin/my_own_init

or

 lxc.init_cmd = /sbin/my_own_init

Those settings are directly passed to the LXC low-level tools.

Snapshots
~~~~~~~~~

When you create a snapshot, 'pct' stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called 'testsnapshot',
your configuration file will look like this:

.Container Configuration with Snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like 'parent' and
'snaptime'. The 'parent' property is used to store the parent/child
relationship between snapshots. 'snaptime' is the snapshot creation
time stamp (unix epoch).

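Snapshots are normally created and managed with 'pct' rather than by editing
the configuration file by hand. A brief sketch, assuming a container with
ID 100:

----
# create a snapshot called 'testsnapshot'
pct snapshot 100 testsnapshot

# list existing snapshots
pct listsnapshot 100

# roll back to the snapshot, or remove it again
pct rollback 100 testsnapshot
pct delsnapshot 100 testsnapshot
----
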
Guest Operating System Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We normally try to detect the operating system type inside the
container, and then modify some files inside the container to make
them work as expected. Here is a short list of things we do at
container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

Changes made by {PVE} are enclosed by comment markers:

----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the
file. If such a section already exists, it will be updated in place
and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.`
file for it. For instance, if the file `/etc/.pve-ignore.hosts`
exists then the `/etc/hosts` file will not be touched. This can be a
simple empty file created via:

 # touch /etc/.pve-ignore.hosts

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications
by manually setting the 'ostype' to 'unmanaged'.

OS type detection is done by testing for certain files inside the
container:

Ubuntu:: inspect /etc/lsb-release ('DISTRIB_ID=Ubuntu')

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured 'ostype' differs from the
auto-detected type.

Options
~~~~~~~

include::pct.conf.5-opts.adoc[]


Container Images
----------------

Container Images, sometimes also referred to as "templates" or
"appliances", are 'tar' archives which contain everything to run a
container. You can think of them as tidy container backups. Like most
modern container toolkits, 'pct' uses those images when you create a
new container, for example:

 pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz

Proxmox itself ships a set of basic templates for most common
operating systems, and you can download them using the 'pveam' (short
for {pve} Appliance Manager) command line utility. You can also
download https://www.turnkeylinux.org/[TurnKey Linux] containers using
that tool (or the graphical user interface).

Our image repositories contain a list of available images, and a cron
job runs daily to download that list. You can trigger that
update manually with:

 pveam update

After that you can view the list of available images using:

 pveam available

You can restrict this large list by specifying the 'section' you are
interested in, for example basic 'system' images:

.List available system images
----
# pveam available --section system
system archlinux-base_2015-24-29-1_x86_64.tar.gz
system centos-7-default_20160205_amd64.tar.xz
system debian-6.0-standard_6.0-7_amd64.tar.gz
system debian-7.0-standard_7.0-3_amd64.tar.gz
system debian-8.0-standard_8.0-1_amd64.tar.gz
system ubuntu-12.04-standard_12.04-1_amd64.tar.gz
system ubuntu-14.04-standard_14.04-1_amd64.tar.gz
system ubuntu-15.04-standard_15.04-1_amd64.tar.gz
system ubuntu-15.10-standard_15.10-1_amd64.tar.gz
----

Before you can use such a template, you need to download it to one
of your storages. You can simply use storage 'local' for that
purpose. For clustered installations, it is preferred to use a shared
storage so that all nodes can access those images.

 pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz

You are now ready to create containers using that image, and you can
list all downloaded images on storage 'local' with:

----
# pveam list local
local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz 190.20MB
----

The above command shows you the full {pve} volume identifier. It includes
the storage name, and most other {pve} commands can use it. For
example, you can delete that image later with:

 pveam remove local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz


Container Storage
-----------------

Traditional containers use a very simple storage model, only allowing
a single mount point, the root file system. This was further
restricted to specific file system types like 'ext4' and 'nfs'.
Additional mounts were often done by user-provided scripts. This turned
out to be complex and error-prone, so we try to avoid that now.

Our new LXC based container model is more flexible regarding
storage. First, you can have more than a single mount point. This
allows you to choose a suitable storage for each application. For
example, you can use a relatively slow (and thus cheap) storage for
the container root file system. Then you can use a second mount point
to mount a very fast, distributed storage for your database
application.

The second big improvement is that you can use any storage type
supported by the {pve} storage library. That means that you can store
your containers on local 'lvmthin' or 'zfs', shared 'iSCSI' storage,
or even on distributed storage systems like 'ceph'. It also enables us
to use advanced storage features like snapshots and clones. 'vzdump'
can also use the snapshot feature to provide consistent container
backups.

Last but not least, you can also mount local devices directly, or
mount local directories using bind mounts. That way you can access
local storage inside containers with zero overhead. Such bind mounts
also provide an easy way to share data between different containers.


Mount Points
~~~~~~~~~~~~

The root mount point is configured with the `rootfs` property, and you can
configure up to 10 additional mount points. The corresponding options
are called `mp0` to `mp9`, and they can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are basically three types of mount points: storage backed
mount points, bind mounts and device mounts.

.Typical Container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----


Storage backed mount points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: These are raw images containing a single ext4 formatted file
system.
- ZFS Subvolumes: These are technically bind mounts, but with managed storage,
and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
image a directory is created.

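As a sketch, a new storage backed mount point can be allocated and attached to
a container with 'pct set' (assuming container 100 and a storage named
'local-lvm' exist):

----
# allocate a new 8GB volume on 'local-lvm' and mount it at /mnt/data
pct set 100 -mp0 local-lvm:8,mp=/mnt/data
----
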

Bind mount points
^^^^^^^^^^^^^^^^^

Bind mounts are not managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might also run into permission problems caused by
the user mapping, and you cannot use ACLs from inside an unprivileged container.

WARNING: For security reasons, bind mounts should only be established
using source directories especially reserved for this purpose, e.g., a
directory hierarchy under `/mnt/bindmounts`. Never bind mount system
directories like `/`, `/var` or `/etc` into a container - this poses a
great security risk. The bind mount source path must not contain any symlinks.

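A sketch of adding such a bind mount, assuming container 100 and a reserved
host directory `/mnt/bindmounts/shared`:

----
pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared
----
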

Device mount points
^^^^^^^^^^^^^^^^^^^

Similar to bind mounts, device mounts are not managed by the storage subsystem,
but for these the `quota` and `acl` options will be honored.

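A sketch of passing a host block device into a container this way (assuming
container 100 and that `/dev/sdb1` exists on the host):

----
pct set 100 -mp0 /dev/sdb1,mp=/mnt/device-data
----
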

FUSE mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer
subsystem, the usage of FUSE mounts inside a container is strongly
advised against, as containers need to be frozen for suspend or
snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.


Using quotas inside containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk
space that each user can use. This only works on ext4 image based
storage types and currently does not work with unprivileged
containers.

Activating the `quota` option causes the following mount options to be
used for a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like you would on any other system. You
can initialize the `/aquota.user` and `/aquota.group` files by running

----
quotacheck -cmug /
quotaon /
----

and edit the quotas via the `edquota` command. Refer to the documentation
of the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing
the mount point's path instead of just `/`.

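The `quota` flag itself is part of the mount point configuration. As a sketch,
it could be enabled for the root file system when creating a privileged
container (assuming the template below has been downloaded and the 'local'
storage holds raw ext4 images):

----
pct create 200 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz -rootfs local:8,quota=1
----
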

Using ACLs inside containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard POSIX Access Control Lists are also available inside containers.
ACLs allow you to set more detailed file ownership than the traditional user/
group/others model.

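A minimal sketch of working with ACLs from inside a container (assuming a
directory `/shared/uploads` and a user `www-data` exist there):

----
setfacl -m u:www-data:rwx /shared/uploads
getfacl /shared/uploads
----
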

Container Network
-----------------

You can configure up to 10 network interfaces for a single
container. The corresponding options are called 'net0' to 'net9', and
they can contain the following settings:

include::pct-network-opts.adoc[]

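For example, a first interface using DHCP could be added to a container like
this (a sketch, assuming container 100 and a bridge named 'vmbr0'):

----
pct set 100 -net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=1
----
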

Backup and Restore
------------------

Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the 'vzdump' tool for container backup. Please
refer to the 'vzdump' manual page for details.

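A minimal sketch of backing up container 100 with 'vzdump' (assuming a
backup-enabled storage named 'local'):

----
vzdump 100 -mode snapshot -storage local
----
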
Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with 'vzdump' is possible using the
'pct restore' command. By default, 'pct restore' will attempt to restore as much
of the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the command
line (see the 'pct' manual page for details).

NOTE: 'pvesm extractconfig' can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:


"Simple" restore mode
^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters
are explicitly set, the mount point configuration from the backed up
configuration file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
`storage` parameter, or default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.

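As a sketch, a simple mode restore could look like this (the archive name is
only an example; 'local-lvm' is an assumed target storage):

----
pct restore 100 local:backup/vzdump-lxc-100-2016_03_01-12_00_00.tar -storage local-lvm
----
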

"Advanced" restore mode
^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the 'pct restore' command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only
uses the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore time,
for example:

* Set target storages, volume sizes and other options for each mount point
individually
* Redistribute backed up files according to the new mount point scheme
* Restore to device and/or bind mount points (limited to root user)

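As a sketch, an advanced mode restore that overrides the mount point layout
might look like this (the archive name and storage names are placeholders):

----
pct restore 123 local:backup/vzdump-lxc-100-2016_03_01-12_00_00.tar \
    -rootfs local-lvm:8 \
    -mp0 local-lvm:16,mp=/srv/data
----
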

Managing Containers with 'pct'
------------------------------

'pct' is the tool to manage Linux Containers on {pve}. You can create
and destroy containers, and control execution (start, stop, migrate,
...). You can use 'pct' to set parameters in the associated config file,
like network configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have
already downloaded the template via the web GUI)

 pct create 100 /var/lib/vz/template/cache/debian-8.0-standard_8.0-1_amd64.tar.gz

Start container 100

 pct start 100

Start a login session via getty

 pct console 100

Enter the LXC namespace and run a shell as root user

 pct enter 100

Display the configuration

 pct config 100

Add a network interface called eth0, bridged to the host bridge vmbr0, and
set the address and gateway, while the container is running

 pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1

Reduce the memory of the container to 512MB

 pct set 100 -memory 512


Files
-----

'/etc/pve/lxc/<CTID>.conf'::

Configuration file for the container '<CTID>'.


Container Advantages
--------------------

- Simple, and fully integrated into {pve}. Setup looks similar to a normal
VM setup.

* Storage (ZFS, LVM, NFS, Ceph, ...)

* Network

* Authentication

* Cluster

- Fast: minimal overhead, as fast as bare metal

- High density (perfect for idle workloads)

- REST API

- Direct hardware access


Technology Overview
-------------------

- Integrated into the {pve} graphical user interface (GUI)

- LXC (https://linuxcontainers.org/)

- cgmanager for cgroup management

- lxcfs to provide a containerized /proc file system

- apparmor

- CRIU: for live migration (planned)

- We use the latest available kernels (4.4.X)

- Image based deployment (templates)

- Container setup from host (Network, DNS, Storage, ...)


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]