[[chapter_pct]]
ifdef::manvolnum[]
pct(1)
======
:pve-toplevel:

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE

SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Linux Container
endif::wiki[]

Containers are a lightweight alternative to fully virtualized machines (VMs).
They use the kernel of the host system that they run on, instead of emulating a
full operating system (OS). This means that containers can access resources on
the host system directly.

The runtime costs for containers are low, usually negligible. However, there
are some drawbacks that need to be considered:

* Only Linux distributions can be run in containers. It is not possible to run
  other operating systems like, for example, FreeBSD or Microsoft Windows
  inside a container.

* For security reasons, access to host resources needs to be restricted.
  Containers run in their own separate namespaces. Additionally some syscalls
  are not allowed within containers.

{pve} uses https://linuxcontainers.org/[Linux Containers (LXC)] as underlying
container technology. The ``Proxmox Container Toolkit'' (`pct`) simplifies the
usage and management of LXC containers.

Containers are tightly integrated with {pve}. This means that they are aware of
the cluster setup, and they can use the same network and storage resources as
virtual machines. You can also use the {pve} firewall, or manage containers
using the HA framework.

Our primary goal is to offer an environment as one would get from a VM, but
without the additional overhead. We call this ``System Containers''.

NOTE: If you want to run micro-containers, for example, 'Docker' or 'rkt', it
is best to run them inside a VM.


Technology Overview
-------------------

* LXC (https://linuxcontainers.org/)

* Integrated into {pve} graphical web user interface (GUI)

* Easy to use command line tool `pct`

* Access via {pve} REST API

* 'lxcfs' to provide containerized /proc file system

* Control groups ('cgroups') for resource isolation and limitation

* 'AppArmor' and 'seccomp' to improve security

* Modern Linux kernels

* Image based deployment (templates)

* Uses {pve} xref:chapter_storage[storage library]

* Container setup from host (network, DNS, storage, etc.)

[[pct_container_images]]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything to run a container.
`pct` uses them to create a new container, for example:

----
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

{pve} itself provides a variety of basic templates for the most common Linux
distributions. They can be downloaded using the GUI or the `pveam` (short for
{pve} Appliance Manager) command line utility.
Additionally, https://www.turnkeylinux.org/[TurnKey Linux] container templates
are also available to download.

The list of available templates is updated daily via cron. To trigger it
manually:

----
# pveam update
----

To view the list of available images run:

----
# pveam available
----

You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system          alpine-3.10-default_20190626_amd64.tar.xz
system          alpine-3.9-default_20190224_amd64.tar.xz
system          archlinux-base_20190924-1_amd64.tar.gz
system          centos-6-default_20191016_amd64.tar.xz
system          centos-7-default_20190926_amd64.tar.xz
system          centos-8-default_20191016_amd64.tar.xz
system          debian-10.0-standard_10.0-1_amd64.tar.gz
system          debian-8.0-standard_8.11-1_amd64.tar.gz
system          debian-9.0-standard_9.7-1_amd64.tar.gz
system          fedora-30-default_20190718_amd64.tar.xz
system          fedora-31-default_20191029_amd64.tar.xz
system          gentoo-current-default_20190718_amd64.tar.xz
system          opensuse-15.0-default_20180907_amd64.tar.xz
system          opensuse-15.1-default_20190719_amd64.tar.xz
system          ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
system          ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
system          ubuntu-19.04-standard_19.04-1_amd64.tar.gz
system          ubuntu-19.10-standard_19.10-1_amd64.tar.gz
----

Before you can use such a template, you need to download it into one of your
storages. You can simply use storage `local` for that purpose. For clustered
installations, it is preferred to use a shared storage so that all nodes can
access those images.

----
# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
----

You are now ready to create containers using that image, and you can list all
downloaded images on storage `local` with:

----
# pveam list local
local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz  219.95MB
----

The above command shows you the full {pve} volume identifiers. They include the
storage name, and most other {pve} commands can use them. For example you can
delete that image later with:

----
# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----


[[pct_settings]]
Container Settings
------------------

[[pct_general]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-general.png"]

General settings of a container include

* the *Node*: the physical server on which the container will run
* the *CT ID*: a unique number in this {pve} installation used to identify your
  container
* *Hostname*: the hostname of the container
* *Resource Pool*: a logical group of containers and VMs
* *Password*: the root password of the container
* *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
  if you want to create a privileged or unprivileged container.

Unprivileged Containers
^^^^^^^^^^^^^^^^^^^^^^^

Unprivileged containers use a new kernel feature called user namespaces.
The root UID 0 inside the container is mapped to an unprivileged user outside
the container. This means that most security issues (container escape, resource
abuse, etc.) in these containers will affect a random unprivileged user, and
would be a generic kernel security bug rather than an LXC issue. The LXC team
thinks unprivileged containers are safe by design.

This is the default option when creating a new container.

NOTE: If the container uses systemd as an init system, please be aware that the
systemd version running inside the container should be equal to or greater than
220.
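
For illustration, on a default installation the UID range used for this mapping
is defined by entries like the following in `/etc/subuid` (and analogously in
`/etc/subgid`) on the host; the exact range may differ on your system:

----
# cat /etc/subuid
root:100000:65536
----

With such a mapping, a process running as root (UID 0) inside the container
shows up as the unprivileged UID 100000 when viewed from the host, for example
in the output of `ps` on the host.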


Privileged Containers
^^^^^^^^^^^^^^^^^^^^^

Security in containers is achieved by using mandatory access control
('AppArmor') restrictions, 'seccomp' filters and Linux kernel namespaces. The
LXC team considers this kind of container as unsafe, and they will not consider
new container escape exploits to be security issues worthy of a CVE and quick
fix. That's why privileged containers should only be used in trusted
environments.


[[pct_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-ct-cpu.png"]

You can restrict the number of visible CPUs inside the container using the
`cores` option. This is implemented using the Linux 'cpuset' cgroup
(**c**ontrol *group*).
A special task inside `pvestatd` tries to distribute running containers among
available CPUs periodically.
To view the assigned CPUs run the following command:

----
# pct cpusets
 ---------------------
 102:              6 7
 105:      2 3 4 5
 108:  0 1
 ---------------------
----

Containers use the host kernel directly. All tasks inside a container are
handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
**F**air **S**cheduler) scheduler by default, which has additional bandwidth
control options.

[horizontal]

`cpulimit`: :: You can use this option to further limit assigned CPU time.
Please note that this is a floating point number, so it is perfectly valid to
assign two cores to a container, but restrict overall CPU consumption to half a
core.
+
----
cores: 2
cpulimit: 0.5
----

`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
larger the number is, the more CPU time this container gets. The number is
relative to the weights of all the other running containers. The default is
1024. You can use this setting to prioritize some containers.
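
For example, to give container 100 twice the default scheduler weight while
also assigning two cores capped at half a core of total CPU time (the values
here are purely illustrative):

----
# pct set 100 -cores 2 -cpulimit 0.5 -cpuunits 2048
----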


[[pct_memory]]
Memory
~~~~~~

[thumbnail="screenshot/gui-create-ct-memory.png"]

Container memory is controlled using the cgroup memory controller.

[horizontal]

`memory`: :: Limit overall memory usage. This corresponds to the
`memory.limit_in_bytes` cgroup setting.

`swap`: :: Allows the container to use additional swap memory from the host
swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
setting, which is set to the sum of both values (`memory + swap`).
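
For instance, to allow a container 1024 MiB of RAM plus 512 MiB of swap (the
sizes are given in MiB and chosen here only as an example):

----
# pct set 100 -memory 1024 -swap 512
----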


[[pct_mount_points]]
Mount Points
~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-root-disk.png"]

The root mount point is configured with the `rootfs` property. You can
configure up to 256 additional mount points. The corresponding options are
called `mp0` to `mp255`. They can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are three types of mount points: storage backed mount points,
bind mounts, and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----


Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
  system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
  and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
  image a directory is created.

NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
mount point volumes will automatically allocate a volume of the specified size
on the specified storage. For example, calling

----
pct set 100 -mp0 thin1:10,mp=/path/in/container
----

will allocate a 10GB volume on the storage `thin1`, replace the volume ID
placeholder `10` with the allocated volume ID, and set up the mount point in
the container at `/path/in/container`.


Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are considered to not be managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping, and cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established using
source directories especially reserved for this purpose, e.g., a directory
hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
`/`, `/var` or `/etc` into a container - this poses a great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.


Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow you to mount block devices of the host directly into
the container. Similar to bind mounts, device mounts are not managed by {pve}'s
storage subsystem, but the `quota` and `acl` options will be honored.

NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using
`vzdump`.
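
As a hypothetical example, a block device on the host could be mounted into
container 100 in the same way as a bind mount, using the device path as the
volume (device name and target path are placeholders chosen for illustration):

----
# pct set 100 -mp0 /dev/sdb1,mp=/mnt/data
----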


[[pct_container_network]]
Network
~~~~~~~

[thumbnail="screenshot/gui-create-ct-network.png"]

You can configure up to 10 network interfaces for a single container.
The corresponding options are called `net0` to `net9`, and they can contain the
following settings:

include::pct-network-opts.adoc[]
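
As a minimal sketch, the following adds a DHCP-configured interface to
container 100, bridged to `vmbr0` (the bridge name used here is the common
default, yours may differ):

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=dhcp
----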


[[pct_startup_and_shutdown]]
Automatic Start and Shutdown of Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To automatically start a container when the host system boots, select the
option 'Start at boot' in the 'Options' panel of the container in the web
interface or run the following command:

----
# pct set CTID -onboot 1
----

.Start and Shutdown Order
// use the screenshot from qemu - its the same
[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

If you want to fine-tune the boot order of your containers, you can use the
following parameters:

* *Start/Shutdown order*: Defines the start order priority. For example, set it
  to 1 if you want the CT to be the first to be started. (We use the reverse
  startup order for shutdown, so a container with a start order of 1 would be
  the last to be shut down.)
* *Startup delay*: Defines the interval between this container start and
  subsequent containers starts. For example, set it to 240 if you want to wait
  240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
  for the container to be offline after issuing a shutdown command.
  By default this value is set to 60, which means that {pve} will issue a
  shutdown request, wait 60 seconds for the machine to be offline, and report
  that the shutdown action failed if the machine is still online after that
  time.

Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set, and this parameter only
makes sense between the machines running locally on a host, and not
cluster-wide.
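
These parameters map to the `startup` config property. As a sketch, the
following sets container 100 to start first, waits 30 seconds before the next
guest starts, and allows 60 seconds for shutdown (all values are examples
only):

----
# pct set 100 -startup order=1,up=30,down=60
----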

Hookscripts
~~~~~~~~~~~

You can add a hook script to CTs with the config property `hookscript`.

----
# pct set 100 -hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime. For an example
and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.

Security Considerations
-----------------------

Containers use the kernel of the host system. This exposes an attack surface
for malicious users. In general, full virtual machines provide better
isolation. This should be considered if containers are provided to unknown or
untrusted people.

To reduce the attack surface, LXC uses many security features like AppArmor,
CGroups and kernel namespaces.

AppArmor
~~~~~~~~

AppArmor profiles are used to restrict access to possibly dangerous actions.
Some system calls, e.g. `mount`, are prohibited from execution.

To trace AppArmor activity, use:

----
# dmesg | grep apparmor
----

Although it is not recommended, AppArmor can be disabled for a container. This
brings security risks with it. Some syscalls can lead to privilege escalation
when executed within a container if the system is misconfigured or if an LXC
or Linux kernel vulnerability exists.

To disable AppArmor for a container, add the following line to the container
configuration file located at `/etc/pve/lxc/CTID.conf`:

----
lxc.apparmor_profile = unconfined
----

WARNING: Please note that this is not recommended for production use.


// TODO: describe cgroups + seccomp a bit more.
// TODO: pve-lxc-syscalld


Guest Operating System Configuration
------------------------------------

{pve} tries to detect the Linux distribution in the container, and modifies
some files. Here is a short list of things done at container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

Changes made by {PVE} are enclosed by comment markers:

----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the file. If such a
section already exists, it will be updated in place and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.` file for it.
For instance, if the file `/etc/.pve-ignore.hosts` exists then the `/etc/hosts`
file will not be touched. This can be a simple empty file created via:

----
# touch /etc/.pve-ignore.hosts
----

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications by
manually setting the `ostype` to `unmanaged`.
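
For example, to turn off all of these automatic modifications for container
100:

----
# pct set 100 -ostype unmanaged
----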

OS type detection is done by testing for certain files inside the
container. {pve} first checks the `/etc/os-release` file
footnote:[/etc/os-release replaces the multitude of per-distribution
release files https://manpages.debian.org/stable/systemd/os-release.5.en.html].
If that file is not present, or it does not contain a clearly recognizable
distribution identifier, the following distribution specific release files are
checked.

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the auto
detected type.

[[pct_container_storage]]
Container Storage
-----------------

The {pve} LXC container storage model is more flexible than traditional
container storage models. A container can have multiple mount points. This
makes it possible to use the best suited storage for each application.

For example the root file system of the container can be on slow and cheap
storage while the database can be on fast and distributed storage via a second
mount point. See section <<pct_mount_points, Mount Points>> for further
details.

Any storage type supported by the {pve} storage library can be used. This means
that containers can be stored on local (for example `lvm`, `zfs` or directory),
shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
Ceph. Advanced storage features like snapshots or clones can be used if the
underlying storage supports them. The `vzdump` backup tool can use snapshots to
provide consistent container backups.

Furthermore, local devices or local directories can be mounted directly using
'bind mounts'. This gives access to local resources inside a container with
practically zero overhead. Bind mounts can be used as an easy way to share data
between containers.

FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer subsystem the
usage of FUSE mounts inside a container is strongly advised against, as
containers need to be frozen for suspend or snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.


Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk space
that each user can use.

NOTE: This only works on ext4 image based storage types and currently only
works with privileged containers.

Activating the `quota` option causes the following mount options to be used for
a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like on any other system. You can initialize the
`/aquota.user` and `/aquota.group` files by running:

----
# quotacheck -cmug /
# quotaon /
----

Then edit the quotas using the `edquota` command. Refer to the documentation of
the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing the
mount point's path instead of just `/`.


Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard Posix **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.


Backup of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To include a mount point in backups, enable the `backup` option for it in the
container configuration. For an existing mount point `mp0`

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
----

add `backup=1` to enable it.

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----

NOTE: When creating a new mount point in the GUI, this option is enabled by
default.

To disable backups for a mount point, add `backup=0` in the way described
above, or uncheck the *Backup* checkbox on the GUI.

Replication of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, additional mount points are replicated when the Root Disk is
replicated. If you want the {pve} storage replication mechanism to skip a mount
point, you can set the *Skip replication* option for that mount point.
As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
mount point to a different type of storage when the container has replication
configured requires *Skip replication* to be enabled for that mount point.
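
In the configuration, skipping replication corresponds to the `replicate=0`
flag on the mount point. A hypothetical example (the storage and volume names
are placeholders):

----
mp0: other-storage:subvol-100-disk-1,mp=/data,replicate=0
----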


Backup and Restore
------------------


Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please refer to
the `vzdump` manual page for details.


Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the `pct
restore` command. By default, `pct restore` will attempt to restore as much of
the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the
command line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:


``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters are
explicitly set, the mount point configuration from the backed up configuration
file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
  `storage` parameter, or default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root
  user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.

``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only uses
the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore
time, for example:

* Set target storages, volume sizes and other options for each mount point
  individually
* Redistribute backed up files according to new mount point scheme
* Restore to device and/or bind mount points (limited to root user)

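As an illustrative sketch, a restore that overrides the root file system's
target storage and size could look like this (the archive and storage names
are placeholders chosen for the example):

----
# pct restore 100 local:backup/vzdump-lxc-100.tar.gz -rootfs local-lvm:16
----
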
Managing Containers with `pct`
------------------------------

The ``Proxmox Container Toolkit'' (`pct`) is the command line tool to manage
{pve} containers. It enables you to create or destroy containers, as well as
control the container execution (start, stop, reboot, migrate, etc.). It can be
used to set parameters in the config file of a container, for example the
network configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have already
downloaded the template via the web interface)

----
# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
----

Start container 100

----
# pct start 100
----

Start a login session via getty

----
# pct console 100
----

Enter the LXC namespace and run a shell as root user

----
# pct enter 100
----

Display the configuration

----
# pct config 100
----

Add a network interface called `eth0`, bridged to the host bridge `vmbr0`, set
the address and gateway, while it's running

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
----

Reduce the memory of the container to 512MB

----
# pct set 100 -memory 512
----


Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by running `lxc-start` (replace `ID` with
the container's ID):

----
# lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
----

This command will attempt to start the container in foreground mode. To stop
the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.

The collected debug log is written to `/tmp/lxc-ID.log`.

NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.

33f50e04
DC
806[[pct_migration]]
807Migration
808---------
809
810If you have a cluster, you can migrate your Containers with
811
14e97811
OB
812----
813# pct migrate <ctid> <target>
814----
33f50e04
DC
815
816This works as long as your Container is offline. If it has local volumes or
14e97811 817mount points defined, the migration will copy the content over the network to
ba021358 818the target host if the same storage is defined there.
33f50e04 819
Running containers cannot be live-migrated due to technical limitations. You
can do a restart migration, which shuts down, moves and then starts a
container again on the target node. As containers are very lightweight, this
normally results in a downtime of only a few hundred milliseconds.

A restart migration can be done through the web interface or by using the
`--restart` flag with the `pct migrate` command.

A restart migration will shut down the Container and kill it after the
specified timeout (the default is 180 seconds). Then it will migrate the
Container like an offline migration and when finished, it starts the Container
on the target node.

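For example, a restart migration with a shortened shutdown timeout could be
triggered on the command line like this (the container ID `100`, the node name
`targetnode` and the timeout value are placeholder values):

----
# pct migrate 100 targetnode --restart --timeout 120
----
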
[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration, where
`<CTID>` is the numeric ID of the given container. Like all other files stored
inside `/etc/pve/`, they get automatically replicated to all other cluster
nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

The configuration files are simple text files. You can edit them using a normal
text editor, for example, `vi` or `nano`.
This is sometimes useful for small corrections, but keep in mind that you
need to restart the container to apply such changes.

For that reason, it is usually better to use the `pct` command to generate and
modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to running
containers. This feature is called ``hot plug'', and there is no need to
restart the container in that case.

In cases where a change cannot be hot-plugged, it will be registered as a
pending change (shown in red color in the GUI). Such changes will only be
applied after the container is rebooted.

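For example, instead of editing `/etc/pve/lxc/100.conf` by hand, the hostname
option could be changed through `pct` (the container ID `100` and the new
hostname are placeholder values):

----
# pct set 100 -hostname www2
----
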

File Format
~~~~~~~~~~~

The container configuration file uses a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#` character
are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for example:

----
lxc.init_cmd: /sbin/my_own_init
----

or

----
lxc.init_cmd = /sbin/my_own_init
----

The settings are passed directly to the LXC low-level tools.


[[pct_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot time
into a separate snapshot section within the same configuration file. For
example, after creating a snapshot called ``testsnapshot'', your configuration
file will look like this:

.Container configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and `snaptime`. The
`parent` property is used to store the parent/child relationship between
snapshots. `snaptime` is the snapshot creation time stamp (Unix epoch).

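Snapshots are managed with the matching `pct` subcommands, for example (the
container ID `100` is a placeholder value):

----
# pct snapshot 100 testsnapshot
# pct listsnapshot 100
# pct rollback 100 testsnapshot
----
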

[[pct_options]]
Options
~~~~~~~

include::pct.conf.5-opts.adoc[]


Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to prevent
incompatible concurrent actions on the affected container. Sometimes you need
to remove such a lock manually (e.g., after a power failure).

----
# pct unlock <CTID>
----

CAUTION: Only do this if you are sure the action which set the lock is no
longer running.


0c6b782f 951ifdef::manvolnum[]
3bd9d0cf
DM
952
953Files
954------
955
956`/etc/pve/lxc/<CTID>.conf`::
957
958Configuration file for the container '<CTID>'.
959
960
0c6b782f
DM
961include::pve-copyright.adoc[]
962endif::manvolnum[]