[[chapter_pct]]
ifdef::manvolnum[]
pct(1)
======
:pve-toplevel:

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE


SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Linux Container
endif::wiki[]

Containers are a lightweight alternative to fully virtualized machines (VMs).
They use the kernel of the host system that they run on, instead of emulating a
full operating system (OS). This means that containers can access resources on
the host system directly.

The runtime cost for containers is low, usually negligible. However, there are
some drawbacks that need to be considered:

* Only Linux distributions can be run in Proxmox Containers. It is not possible
  to run other operating systems like, for example, FreeBSD or Microsoft
  Windows inside a container.

* For security reasons, access to host resources needs to be restricted.
  Therefore, containers run in their own separate namespaces. Additionally,
  some syscalls (user space requests to the Linux kernel) are not allowed
  within containers.

{pve} uses https://linuxcontainers.org/lxc/introduction/[Linux Containers (LXC)] as its underlying
container technology. The ``Proxmox Container Toolkit'' (`pct`) simplifies the
usage and management of LXC by providing an interface that abstracts
complex tasks.

Containers are tightly integrated with {pve}. This means that they are aware of
the cluster setup, and they can use the same network and storage resources as
virtual machines. You can also use the {pve} firewall, or manage containers
using the HA framework.

Our primary goal is to offer an environment that provides the benefits of using
a VM, but without the additional overhead. This means that Proxmox Containers
can be categorized as ``System Containers'', rather than ``Application
Containers''.

NOTE: If you want to run application containers, for example 'Docker' images,
it is recommended that you run them inside a Proxmox Qemu VM. This will give
you all the advantages of application containerization, while also providing
the benefits that VMs offer, such as strong isolation from the host and the
ability to live-migrate, which otherwise isn't possible with containers.


Technology Overview
-------------------

* LXC (https://linuxcontainers.org/)

* Integrated into {pve} graphical web user interface (GUI)

* Easy to use command line tool `pct`

* Access via {pve} REST API

* 'lxcfs' to provide containerized /proc file system

* Control groups ('cgroups') for resource isolation and limitation

* 'AppArmor' and 'seccomp' to improve security

* Modern Linux kernels

* Image based deployment (xref:pct_supported_distributions[templates])

* Uses {pve} xref:chapter_storage[storage library]

* Container setup from host (network, DNS, storage, etc.)

[[pct_supported_distributions]]
Supported Distributions
-----------------------

A list of officially supported distributions can be found below.

Templates for the following distributions are available through our
repositories. You can use the xref:pct_container_images[pveam] tool or the
Graphical User Interface to download them.

Alpine Linux
~~~~~~~~~~~~

[quote, 'https://alpinelinux.org']
____
Alpine Linux is a security-oriented, lightweight Linux distribution based on
musl libc and busybox.
____

https://alpinelinux.org/releases/

Arch Linux
~~~~~~~~~~

[quote, 'https://archlinux.org/']
____
Arch Linux, a lightweight and flexible Linux® distribution that tries to Keep It Simple.
____


CentOS, AlmaLinux, Rocky Linux
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CentOS / CentOS Stream
^^^^^^^^^^^^^^^^^^^^^^

[quote, 'https://centos.org']
____
The CentOS Linux distribution is a stable, predictable, manageable and
reproducible platform derived from the sources of Red Hat Enterprise Linux
(RHEL)
____

https://wiki.centos.org/About/Product

AlmaLinux
^^^^^^^^^

[quote, 'https://almalinux.org']
____
An Open Source, community owned and governed, forever-free enterprise Linux
distribution, focused on long-term stability, providing a robust
production-grade platform. AlmaLinux OS is 1:1 binary compatible with RHEL® and
pre-Stream CentOS.
____


https://en.wikipedia.org/wiki/AlmaLinux#Releases

Rocky Linux
^^^^^^^^^^^

[quote, 'https://rockylinux.org']
____
Rocky Linux is a community enterprise operating system designed to be 100%
bug-for-bug compatible with America's top enterprise Linux distribution now
that its downstream partner has shifted direction.
____

https://en.wikipedia.org/wiki/Rocky_Linux#Releases

Debian
~~~~~~

[quote, 'https://www.debian.org/intro/index#software']
____
Debian is a free operating system, developed and maintained by the Debian
project. A free Linux distribution with thousands of applications to meet our
users' needs.
____

https://www.debian.org/releases/stable/releasenotes

Devuan
~~~~~~

[quote, 'https://www.devuan.org']
____
Devuan GNU+Linux is a fork of Debian without systemd that allows users to
reclaim control over their system by avoiding unnecessary entanglements and
ensuring Init Freedom.
____


Fedora
~~~~~~

[quote, 'https://getfedora.org']
____
Fedora creates an innovative, free, and open source platform for hardware,
clouds, and containers that enables software developers and community members
to build tailored solutions for their users.
____

https://fedoraproject.org/wiki/Releases

Gentoo
~~~~~~

[quote, 'https://www.gentoo.org']
____
a highly flexible, source-based Linux distribution.
____

OpenSUSE
~~~~~~~~

[quote, 'https://www.opensuse.org']
____
The makers' choice for sysadmins, developers and desktop users.
____

https://get.opensuse.org/leap/

Ubuntu
~~~~~~

[quote, 'https://ubuntu.com/']
____
Ubuntu is the modern, open source operating system on Linux for the enterprise
server, desktop, cloud, and IoT.
____

https://wiki.ubuntu.com/Releases

[[pct_container_images]]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything to run a container.

{pve} itself provides a variety of basic templates for the
xref:pct_supported_distributions[most common Linux distributions]. They can be
downloaded using the GUI or the `pveam` (short for {pve} Appliance Manager)
command line utility. Additionally, https://www.turnkeylinux.org/[TurnKey
Linux] container templates are also available to download.

The list of available templates is updated daily through the 'pve-daily-update'
timer. You can also trigger an update manually by executing:

----
# pveam update
----

To view the list of available images run:

----
# pveam available
----

You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system alpine-3.12-default_20200823_amd64.tar.xz
system alpine-3.13-default_20210419_amd64.tar.xz
system alpine-3.14-default_20210623_amd64.tar.xz
system archlinux-base_20210420-1_amd64.tar.gz
system centos-7-default_20190926_amd64.tar.xz
system centos-8-default_20201210_amd64.tar.xz
system debian-9.0-standard_9.7-1_amd64.tar.gz
system debian-10-standard_10.7-1_amd64.tar.gz
system devuan-3.0-standard_3.0_amd64.tar.gz
system fedora-33-default_20201115_amd64.tar.xz
system fedora-34-default_20210427_amd64.tar.xz
system gentoo-current-default_20200310_amd64.tar.xz
system opensuse-15.2-default_20200824_amd64.tar.xz
system ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
system ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
system ubuntu-20.04-standard_20.04-1_amd64.tar.gz
system ubuntu-20.10-standard_20.10-1_amd64.tar.gz
system ubuntu-21.04-standard_21.04-1_amd64.tar.gz
----

Before you can use such a template, you need to download it into one of your
storages. If you're unsure which one to use, you can simply use the `local`
named storage for that purpose. For clustered installations, it is preferred to
use a shared storage so that all nodes can access those images.

----
# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
----

You are now ready to create containers using that image, and you can list all
downloaded images on storage `local` with:

----
# pveam list local
local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz 219.95MB
----

TIP: You can also use the {pve} web interface GUI to download, list and delete
container templates.

`pct` uses them to create a new container, for example:

----
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

The above command shows you the full {pve} volume identifiers. They include the
storage name, and most other {pve} commands can use them. For example, you can
delete that image later with:

----
# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----


[[pct_settings]]
Container Settings
------------------

[[pct_general]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-general.png"]

General settings of a container include

* the *Node*: the physical server on which the container will run
* the *CT ID*: a unique number in this {pve} installation used to identify your
  container
* *Hostname*: the hostname of the container
* *Resource Pool*: a logical group of containers and VMs
* *Password*: the root password of the container
* *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
  whether you want to create a privileged or unprivileged container.

Unprivileged Containers
^^^^^^^^^^^^^^^^^^^^^^^

Unprivileged containers use a new kernel feature called user namespaces.
The root UID 0 inside the container is mapped to an unprivileged user outside
the container. This means that most security issues (container escape, resource
abuse, etc.) in these containers will affect a random unprivileged user, and
would be a generic kernel security bug rather than an LXC issue. The LXC team
thinks unprivileged containers are safe by design.

This is the default option when creating a new container.
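
If you create a container on the command line, a minimal sketch to explicitly
request an unprivileged container (using the template downloaded earlier) would
be:

----
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz --unprivileged 1
----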

NOTE: If the container uses systemd as an init system, please be aware that the
systemd version running inside the container should be equal to or greater than
220.


Privileged Containers
^^^^^^^^^^^^^^^^^^^^^

Security in containers is achieved by using mandatory access control
('AppArmor') restrictions, 'seccomp' filters and Linux kernel namespaces. The
LXC team considers this kind of container as unsafe, and they will not consider
new container escape exploits to be security issues worthy of a CVE and quick
fix. That's why privileged containers should only be used in trusted
environments.


[[pct_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-ct-cpu.png"]

You can restrict the number of visible CPUs inside the container using the
`cores` option. This is implemented using the Linux 'cpuset' cgroup
(**c**ontrol *group*).
A special task inside `pvestatd` tries to distribute running containers among
available CPUs periodically.
To view the assigned CPUs run the following command:

----
# pct cpusets
 ---------------------
 102: 6 7
 105: 2 3 4 5
 108: 0 1
 ---------------------
----

Containers use the host kernel directly. All tasks inside a container are
handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
**F**air **S**cheduler) scheduler by default, which has additional bandwidth
control options.

[horizontal]

`cpulimit`: :: You can use this option to further limit assigned CPU time.
Please note that this is a floating point number, so it is perfectly valid to
assign two cores to a container, but restrict overall CPU consumption to half a
core.
+
----
cores: 2
cpulimit: 0.5
----

`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
larger the number is, the more CPU time this container gets. The number is
relative to the weights of all the other running containers. The default is
1024. You can use this setting to prioritize some containers.
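+
As an illustrative sketch, the following would give a container twice the
default weight:
+
----
cores: 4
cpuunits: 2048
----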


[[pct_memory]]
Memory
~~~~~~

[thumbnail="screenshot/gui-create-ct-memory.png"]

Container memory is controlled using the cgroup memory controller.

[horizontal]

`memory`: :: Limit overall memory usage. This corresponds to the
`memory.limit_in_bytes` cgroup setting.

`swap`: :: Allows the container to use additional swap memory from the host
swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
setting, which is set to the sum of both values (`memory + swap`).
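+
For example, the following illustrative settings allow up to 1024 MiB of RAM
plus an additional 512 MiB of host swap:
+
----
memory: 1024
swap: 512
----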


[[pct_mount_points]]
Mount Points
~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-root-disk.png"]

The root mount point is configured with the `rootfs` property. You can
configure up to 256 additional mount points. The corresponding options are
called `mp0` to `mp255`. They can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are three types of mount points: storage backed mount points,
bind mounts, and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----


Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
  system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
  and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
  image a directory is created.

NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
mount point volumes will automatically allocate a volume of the specified size
on the specified storage. For example, calling

----
pct set 100 -mp0 thin1:10,mp=/path/in/container
----

will allocate a 10GB volume on the storage `thin1`, replace the volume ID
placeholder `10` with the allocated volume ID, and set up the mount point in
the container at `/path/in/container`.


Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are considered to not be managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping, and cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established using
source directories especially reserved for this purpose, e.g., a directory
hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
`/`, `/var` or `/etc` into a container - this poses a great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.


Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow mounting block devices of the host directly into the
container. Similar to bind mounts, device mounts are not managed by {PVE}'s
storage subsystem, but the `quota` and `acl` options will be honored.
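
For example, assuming a host block device `/dev/sdb1` (an illustrative name), a
configuration line such as the following would make it available at
`/mnt/device` inside container `100`:

----
mp0: /dev/sdb1,mp=/mnt/device
----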

NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using
`vzdump`.


[[pct_container_network]]
Network
~~~~~~~

[thumbnail="screenshot/gui-create-ct-network.png"]

You can configure up to 10 network interfaces for a single container.
The corresponding options are called `net0` to `net9`, and they can contain the
following settings:

include::pct-network-opts.adoc[]
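
For example, a minimal sketch that attaches a DHCP-configured interface to the
default bridge (interface and bridge names are illustrative):

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=dhcp
----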


[[pct_startup_and_shutdown]]
Automatic Start and Shutdown of Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To automatically start a container when the host system boots, select the
option 'Start at boot' in the 'Options' panel of the container in the web
interface or run the following command:

----
# pct set CTID -onboot 1
----

.Start and Shutdown Order
// use the screenshot from qemu - it's the same
[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

If you want to fine tune the boot order of your containers, you can use the
following parameters:

* *Start/Shutdown order*: Defines the start order priority. For example, set it
  to 1 if you want the CT to be the first to be started. (We use the reverse
  startup order for shutdown, so a container with a start order of 1 would be
  the last to be shut down.)
* *Startup delay*: Defines the interval between this container start and
  subsequent containers starts. For example, set it to 240 if you want to wait
  240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
  for the container to be offline after issuing a shutdown command.
  By default this value is set to 60, which means that {pve} will issue a
  shutdown request, wait 60s for the machine to be offline, and if after 60s
  the machine is still online, it will notify that the shutdown action failed.

Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set. Furthermore, this
parameter only makes sense between machines running locally on a host, not
cluster-wide.
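
These parameters map to the `startup` config property. As a sketch with
illustrative values, the following makes container 100 start first, waits 240
seconds before subsequent guests are started, and allows 60 seconds for
shutdown:

----
# pct set 100 -startup order=1,up=240,down=60
----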

If you require a delay between the host boot and the booting of the first
container, see the section on
xref:first_guest_boot_delay[Proxmox VE Node Management].


Hookscripts
~~~~~~~~~~~

You can add a hook script to CTs with the config property `hookscript`.

----
# pct set 100 -hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime. For an example
and documentation, see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.

Security Considerations
-----------------------

Containers use the kernel of the host system. This exposes an attack surface
for malicious users. In general, full virtual machines provide better
isolation. This should be considered if containers are provided to unknown or
untrusted people.

To reduce the attack surface, LXC uses many security features like AppArmor,
CGroups and kernel namespaces.

AppArmor
~~~~~~~~

AppArmor profiles are used to restrict access to possibly dangerous actions.
Some system calls, for example `mount`, are prohibited from execution.

To trace AppArmor activity, use:

----
# dmesg | grep apparmor
----

Although it is not recommended, AppArmor can be disabled for a container. This
brings security risks with it. Some syscalls can lead to privilege escalation
when executed within a container if the system is misconfigured or if an LXC or
Linux kernel vulnerability exists.

To disable AppArmor for a container, add the following line to the container
configuration file located at `/etc/pve/lxc/CTID.conf`:

----
lxc.apparmor.profile = unconfined
----

WARNING: Please note that this is not recommended for production use.


[[pct_cgroup]]
Control Groups ('cgroup')
~~~~~~~~~~~~~~~~~~~~~~~~~

'cgroup' is a kernel
mechanism used to hierarchically organize processes and distribute system
resources.

The main resources controlled via 'cgroups' are CPU time, memory and swap
limits, and access to device nodes. 'cgroups' are also used to "freeze" a
container before taking snapshots.

There are two versions of 'cgroups' currently available,
https://www.kernel.org/doc/html/v5.11/admin-guide/cgroup-v1/index.html[legacy]
and
https://www.kernel.org/doc/html/v5.11/admin-guide/cgroup-v2.html['cgroupv2'].

Since {pve} 7.0, the default is a pure 'cgroupv2' environment. Previously a
"hybrid" setup was used, where resource control was mainly done in 'cgroupv1'
with an additional 'cgroupv2' controller which could take over some subsystems
via the 'cgroup_no_v1' kernel command line parameter. (See the
https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html[kernel
parameter documentation] for details.)

[[pct_cgroup_compat]]
CGroup Version Compatibility
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The main difference between pure 'cgroupv2' and the old hybrid environments
regarding {pve} is that with 'cgroupv2' memory and swap are now controlled
independently. The memory and swap settings for containers can map directly to
these values, whereas previously only the memory limit and the limit of the
*sum* of memory and swap could be set.

Another important difference is that the 'devices' controller is configured in a
completely different way. Because of this, file system quotas are currently not
supported in a pure 'cgroupv2' environment.

'cgroupv2' support by the container's OS is needed to run in a pure 'cgroupv2'
environment. Containers running 'systemd' version 231 or newer support
'cgroupv2' footnote:[this includes all newest major versions of container
templates shipped by {pve}], as do containers not using 'systemd' as init
system footnote:[for example Alpine Linux].

[NOTE]
====
CentOS 7 and Ubuntu 16.10 are two prominent Linux distribution releases whose
'systemd' version is too old to run in a 'cgroupv2' environment. You can
either:

* Upgrade the whole distribution to a newer release. For the examples above,
  that could be Ubuntu 18.04 or 20.04, and CentOS 8 (or RHEL/CentOS derivatives
  like AlmaLinux or Rocky Linux). This has the benefit of getting the newest
  bug and security fixes, often also new features, and of moving the EOL date
  into the future.

* Upgrade the container's systemd version. If the distribution provides a
  backports repository, this can be an easy and quick stop-gap measure.

* Move the container, or its services, to a virtual machine. Virtual machines
  interact much less with the host, which is why decades-old OS versions can be
  installed there just fine.

* Switch back to the legacy 'cgroup' controller. Note that while it can be a
  valid solution, it's not a permanent one. There's a high likelihood that a
  future {pve} major release, for example 8.0, cannot support the legacy
  controller anymore.
====

[[pct_cgroup_change_version]]
Changing CGroup Version
^^^^^^^^^^^^^^^^^^^^^^^

TIP: If file system quotas are not required and all containers support 'cgroupv2',
it is recommended to stick to the new default.

To switch back to the previous version the following kernel command line
parameter can be used:

----
systemd.unified_cgroup_hierarchy=0
----

See xref:sysboot_edit_kernel_cmdline[this section] on editing the kernel boot
command line on where to add the parameter.

// TODO: seccomp a bit more.
// TODO: pve-lxc-syscalld


Guest Operating System Configuration
------------------------------------

{pve} tries to detect the Linux distribution in the container, and modifies
some files. Here is a short list of things done at container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

Changes made by {PVE} are enclosed by comment markers:

----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the file. If such a
section already exists, it will be updated in place and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.` file for it.
For instance, if the file `/etc/.pve-ignore.hosts` exists then the `/etc/hosts`
file will not be touched. This can be a simple empty file created via:

----
# touch /etc/.pve-ignore.hosts
----

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications by
manually setting the `ostype` to `unmanaged`.
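
For example, a sketch using the container ID from the earlier examples:

----
# pct set 100 -ostype unmanaged
----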

OS type detection is done by testing for certain files inside the
container. {pve} first checks the `/etc/os-release` file
footnote:[/etc/os-release replaces the multitude of per-distribution
release files https://manpages.debian.org/stable/systemd/os-release.5.en.html].
If that file is not present, or if it does not contain a clearly recognizable
distribution identifier, the following distribution specific release files are
checked.

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the auto
detected type.


[[pct_container_storage]]
Container Storage
-----------------

The {pve} LXC container storage model is more flexible than traditional
container storage models. A container can have multiple mount points. This
makes it possible to use the best suited storage for each application.

For example the root file system of the container can be on slow and cheap
storage while the database can be on fast and distributed storage via a second
mount point. See section <<pct_mount_points, Mount Points>> for further
details.

Any storage type supported by the {pve} storage library can be used. This means
that containers can be stored on local (for example `lvm`, `zfs` or directory),
shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
Ceph. Advanced storage features like snapshots or clones can be used if the
underlying storage supports them. The `vzdump` backup tool can use snapshots to
provide consistent container backups.

Furthermore, local devices or local directories can be mounted directly using
'bind mounts'. This gives access to local resources inside a container with
practically zero overhead. Bind mounts can be used as an easy way to share data
between containers.


FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer subsystem the
usage of FUSE mounts inside a container is strongly advised against, as
containers need to be frozen for suspend or snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.


Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk space
that each user can use.

NOTE: This currently requires the use of legacy 'cgroups'.

NOTE: This only works on ext4 image based storage types and currently only
works with privileged containers.

Activating the `quota` option causes the following mount options to be used for
a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like on any other system. You can initialize the
`/aquota.user` and `/aquota.group` files by running:

----
# quotacheck -cmug /
# quotaon /
----

Then edit the quotas using the `edquota` command. Refer to the documentation of
the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing the
mount point's path instead of just `/`.


Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.
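
For example, with the standard `acl` tools installed inside the container, you
could grant an additional user read/write access to a file (user and path are
illustrative):

----
# setfacl -m u:myuser:rw /srv/data/file.txt
# getfacl /srv/data/file.txt
----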


Backup of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To include a mount point in backups, enable the `backup` option for it in the
container configuration. For an existing mount point `mp0`

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
----

add `backup=1` to enable it.

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----

NOTE: When creating a new mount point in the GUI, this option is enabled by
default.

To disable backups for a mount point, add `backup=0` in the way described
above, or uncheck the *Backup* checkbox on the GUI.

Replication of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, additional mount points are replicated when the Root Disk is
replicated. If you want the {pve} storage replication mechanism to skip a mount
point, you can set the *Skip replication* option for that mount point.
As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
mount point to a different type of storage when the container has replication
configured requires *Skip replication* to be enabled for that mount point.
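
On the configuration level, *Skip replication* corresponds to the mount point's
`replicate` option; a sketch with illustrative storage and volume names:

----
mp0: local-zfs:subvol-100-disk-1,mp=/srv/data,replicate=0
----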


Backup and Restore
------------------


Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please refer to
the `vzdump` manual page for details.


Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the `pct
restore` command. By default, `pct restore` will attempt to restore as much of
the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the
command line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:


``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters are
explicitly set, the mount point configuration from the backed up configuration
file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points on the storage provided with
  the `storage` parameter (default: `local`).
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root
  user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.


``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only uses
the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore
time, for example:

* Set target storages, volume sizes and other options for each mount point
  individually
* Redistribute backed up files according to new mount point scheme
* Restore to device and/or bind mount points (limited to root user)
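
For instance, a sketch of an advanced restore with illustrative archive,
storage and size values:

----
# pct restore 601 local:backup/vzdump-lxc-100.tar.gz -rootfs local-lvm:8 -mp0 thin1:16,mp=/srv/data
----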


Managing Containers with `pct`
------------------------------

The ``Proxmox Container Toolkit'' (`pct`) is the command line tool to manage
{pve} containers. It enables you to create or destroy containers, as well as
control the container execution (start, stop, reboot, migrate, etc.). It can be
used to set parameters in the config file of a container, for example the
network configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have already
downloaded the template via the web interface)

----
# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
----

Start container 100

----
# pct start 100
----

Start a login session via getty

----
# pct console 100
----

Enter the LXC namespace and run a shell as root user

----
# pct enter 100
----

Display the configuration

----
# pct config 100
----

Add a network interface called `eth0`, bridged to the host bridge `vmbr0`, set
the address and gateway, while it's running

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
----

Reduce the memory of the container to 512MB

----
# pct set 100 -memory 512
----

Destroying a container always removes it from Access Control Lists and it
always removes the firewall configuration of the container. You have to
activate '--purge' if you want to additionally remove the container from
replication jobs, backup jobs and HA resource configurations.

----
# pct destroy 100 --purge
----



Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by passing the `--debug` flag (replace
`CTID` with the container's CTID):

----
# pct start CTID --debug
----

Alternatively, you can use the following `lxc-start` command, which will save
the debug log to the file specified by the `-o` output option:

----
# lxc-start -n CTID -F -l DEBUG -o /tmp/lxc-CTID.log
----

This command will attempt to start the container in foreground mode. To stop
the container, run `pct shutdown CTID` or `pct stop CTID` in a second terminal.

The collected debug log is written to `/tmp/lxc-CTID.log`.

NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.

1058[[pct_migration]]
1059Migration
1060---------
1061
1062If you have a cluster, you can migrate your Containers with
1063
14e97811
OB
1064----
1065# pct migrate <ctid> <target>
1066----
33f50e04
DC
1067
1068This works as long as your Container is offline. If it has local volumes or
14e97811 1069mount points defined, the migration will copy the content over the network to
ba021358 1070the target host if the same storage is defined there.
33f50e04 1071
656d8b21 1072Running containers cannot live-migrated due to technical limitations. You can
4c82550d
TL
1073do a restart migration, which shuts down, moves and then starts a container
1074again on the target node. As containers are very lightweight, this results
1075normally only in a downtime of some hundreds of milliseconds.
1076
1077A restart migration can be done through the web interface or by using the
1078`--restart` flag with the `pct migrate` command.
33f50e04 1079
69ab602f
TL
1080A restart migration will shut down the Container and kill it after the
1081specified timeout (the default is 180 seconds). Then it will migrate the
1082Container like an offline migration and when finished, it starts the Container
1083on the target node.

[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration, where
`<CTID>` is the numeric ID of the given container. Like all other files stored
inside `/etc/pve/`, they get automatically replicated to all other cluster
nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

The configuration files are simple text files. You can edit them using a normal
text editor, for example, `vi` or `nano`.
This is sometimes useful to do small corrections, but keep in mind that you
need to restart the container to apply such changes.

For that reason, it is usually better to use the `pct` command to generate and
modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to running
containers. This feature is called ``hot plug'', and there is no need to
restart the container in that case.

In cases where a change cannot be hot-plugged, it will be registered as a
pending change (shown in red in the GUI). Pending changes are only applied
after rebooting the container.


File Format
~~~~~~~~~~~

The container configuration file uses a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#` character
are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for example:

----
lxc.init_cmd: /sbin/my_own_init
----

or

----
lxc.init_cmd = /sbin/my_own_init
----

The settings are passed directly to the LXC low-level tools.


[[pct_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot time
into a separate snapshot section within the same configuration file. For
example, after creating a snapshot called ``testsnapshot'', your configuration
file will look like this:

.Container configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and `snaptime`. The
`parent` property is used to store the parent/child relationship between
snapshots. `snaptime` is the snapshot creation time stamp (Unix epoch).

[[pct_options]]
Options
~~~~~~~

include::pct.conf.5-opts.adoc[]


Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to prevent
incompatible concurrent actions on the affected container. Sometimes you need
to remove such a lock manually (e.g., after a power failure).

----
# pct unlock <CTID>
----

CAUTION: Only do this if you are sure the action which set the lock is no
longer running.


ifdef::manvolnum[]

Files
-----

`/etc/pve/lxc/<CTID>.conf`::

Configuration file for the container '<CTID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]