[[chapter_pct]]
ifdef::manvolnum[]
pct(1)
======
:pve-toplevel:

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE


SYNOPSIS
--------

include::pct.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Linux Container
endif::wiki[]

Containers are a lightweight alternative to fully virtualized machines (VMs).
They use the kernel of the host system that they run on, instead of emulating a
full operating system (OS). This means that containers can access resources on
the host system directly.

The runtime costs for containers are low, usually negligible. However, there
are some drawbacks that need to be considered:

* Only Linux distributions can be run in containers. (It is not
  possible to run FreeBSD or MS Windows inside a container.)

* For security reasons, access to host resources needs to be restricted.
  Containers run in their own separate namespaces. Additionally some
  syscalls are not allowed within containers.

{pve} uses https://linuxcontainers.org/[LXC] as its underlying container
technology. The ``Proxmox Container Toolkit'' (`pct`) simplifies the usage of
LXC containers.

Containers are tightly integrated with {pve}. This means that they are aware of
the cluster setup, and they can use the same network and storage resources as
virtual machines. You can also use the {pve} firewall, or manage containers
using the HA framework.

Our primary goal is to offer an environment as one would get from a
VM, but without the additional overhead. We call this ``System
Containers''.

NOTE: If you want to run micro-containers (with docker, rkt, etc.), it
is best to run them inside a VM.


Technology Overview
-------------------

* LXC (https://linuxcontainers.org/)

* Integrated into {pve} graphical user interface (GUI)

* Easy to use command line tool `pct`

* Access via {pve} REST API

* lxcfs to provide containerized /proc file system

* CGroups (control groups) for resource allocation

* AppArmor/Seccomp to improve security

* Modern Linux kernels

* Image based deployment (templates)

* Uses {pve} storage library

* Container setup from host (network, DNS, storage, etc.)

Security Considerations
-----------------------

Containers use the kernel of the host system. This creates a big attack
surface for malicious users. This should be considered if containers
are provided to untrustworthy people. In general, full
virtual machines provide better isolation.

However, LXC uses many security features like AppArmor, CGroups and kernel
namespaces to reduce the attack surface.

AppArmor profiles are used to restrict access to possibly dangerous actions.
Some system calls, e.g. `mount`, are prohibited from execution.

To trace AppArmor activity, use:

----
# dmesg | grep apparmor
----

Guest Operating System Configuration
------------------------------------

{pve} tries to detect the Linux distribution in the container, and modifies some
files. Here is a short list of things done at container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

Changes made by {PVE} are enclosed by comment markers:

----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the
file. If such a section already exists, it will be updated in place
and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.`
file for it. For instance, if the file `/etc/.pve-ignore.hosts`
exists then the `/etc/hosts` file will not be touched. This can be a
simple empty file created via:

----
# touch /etc/.pve-ignore.hosts
----

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications
by manually setting the `ostype` to `unmanaged`.

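For example, assuming an existing container with ID `100` (the ID is
illustrative), all of these modifications could be switched off with:

----
# pct set 100 -ostype unmanaged
----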
OS type detection is done by testing for certain files inside the
container:

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the auto
detected type.


[[pct_container_images]]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything to run a
container. `pct` uses them to create a new container, for example:

----
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

{pve} itself provides a variety of basic templates for the most common
Linux distributions. They can be downloaded using the GUI or the
`pveam` (short for {pve} Appliance Manager) command line utility.
Additionally, https://www.turnkeylinux.org/[TurnKey Linux]
container templates are also available to download.

The list of available templates is updated daily via cron. To trigger it manually:

----
# pveam update
----

To view the list of available images run:

----
# pveam available
----

You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system  alpine-3.10-default_20190626_amd64.tar.xz
system  alpine-3.9-default_20190224_amd64.tar.xz
system  archlinux-base_20190924-1_amd64.tar.gz
system  centos-6-default_20191016_amd64.tar.xz
system  centos-7-default_20190926_amd64.tar.xz
system  centos-8-default_20191016_amd64.tar.xz
system  debian-10.0-standard_10.0-1_amd64.tar.gz
system  debian-8.0-standard_8.11-1_amd64.tar.gz
system  debian-9.0-standard_9.7-1_amd64.tar.gz
system  fedora-30-default_20190718_amd64.tar.xz
system  fedora-31-default_20191029_amd64.tar.xz
system  gentoo-current-default_20190718_amd64.tar.xz
system  opensuse-15.0-default_20180907_amd64.tar.xz
system  opensuse-15.1-default_20190719_amd64.tar.xz
system  ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
system  ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
system  ubuntu-19.04-standard_19.04-1_amd64.tar.gz
system  ubuntu-19.10-standard_19.10-1_amd64.tar.gz
----

Before you can use such a template, you need to download it into one
of your storages. You can simply use storage `local` for that
purpose. For clustered installations, it is preferred to use a shared
storage so that all nodes can access those images.

----
# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
----

You are now ready to create containers using that image, and you can
list all downloaded images on storage `local` with:

----
# pveam list local
local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz  219.95MB
----

The above command shows you the full {pve} volume identifiers. They include
the storage name, and most other {pve} commands can use them. For
example you can delete that image later with:

----
# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

[[pct_container_storage]]
Container Storage
-----------------

The {pve} LXC container storage model is more flexible than traditional
container storage models. A container can have multiple mount points. This makes
it possible to use the best suited storage for each application.

For example the root file system of the container can be on slow and cheap
storage while the database can be on fast and distributed storage via a second
mount point. See section <<pct_mount_points, Mount Points>> for further details.

Any storage type supported by the {pve} storage library can be used. This means
that containers can be stored on local (for example `lvm`, `zfs` or directory),
shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
Ceph. Advanced storage features like snapshots or clones can be used if the
underlying storage supports them. The `vzdump` backup tool can use snapshots to
provide consistent container backups.

Furthermore, local devices or local directories can be mounted directly using
'bind mounts'. This gives access to local resources inside a container with
practically zero overhead. Bind mounts can be used as an easy way to share data
between containers.


FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer
subsystem the usage of FUSE mounts inside a container is strongly
advised against, as containers need to be frozen for suspend or
snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.

Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk
space that each user can use.

NOTE: This only works on ext4 image based storage types and currently only works
with privileged containers.

Activating the `quota` option causes the following mount options to be
used for a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like on any other system. You
can initialize the `/aquota.user` and `/aquota.group` files by running:

----
# quotacheck -cmug /
# quotaon /
----

Then edit the quotas using the `edquota` command. Refer to the documentation
of the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing
the mount point's path instead of just `/`.


Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard Posix **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.

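As a brief sketch (the path and user here are illustrative), ACLs inside a
container are managed with the standard tools, for example:

----
# setfacl -m u:www-data:rwx /srv/shared
# getfacl /srv/shared
----
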
Backup of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To include a mount point in backups, enable the `backup` option for it in the
container configuration. For an existing mount point `mp0`

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
----

add `backup=1` to enable it.

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----

NOTE: When creating a new mount point in the GUI, this option is enabled by
default.

To disable backups for a mount point, add `backup=0` in the way described above,
or uncheck the *Backup* checkbox on the GUI.

Replication of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, additional mount points are replicated when the Root Disk is
replicated. If you want the {pve} storage replication mechanism to skip a mount
point, you can set the *Skip replication* option for that mount point. +
As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
mount point to a different type of storage when the container has replication
configured requires *Skip replication* to be enabled for that mount point.

[[pct_settings]]
Container Settings
------------------

[[pct_general]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-general.png"]

General settings of a container include

* the *Node*: the physical server on which the container will run
* the *CT ID*: a unique number in this {pve} installation used to identify your container
* *Hostname*: the hostname of the container
* *Resource Pool*: a logical group of containers and VMs
* *Password*: the root password of the container
* *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
if you want to create a privileged or unprivileged container.

Unprivileged Containers
^^^^^^^^^^^^^^^^^^^^^^^

Unprivileged containers use a new kernel feature called user namespaces. The
root UID 0 inside the container is mapped to an unprivileged user outside the
container. This means that most security issues (container escape, resource
abuse, etc.) in these containers will affect a random unprivileged user, and
would be a generic kernel security bug rather than an LXC issue. The LXC team
thinks unprivileged containers are safe by design.

This is the default option when creating a new container.

NOTE: If the container uses systemd as an init system, please be
aware the systemd version running inside the container should be equal to
or greater than 220.


Privileged Containers
^^^^^^^^^^^^^^^^^^^^^

Security in containers is achieved by using mandatory access control
(AppArmor), seccomp filters and namespaces. The LXC team considers this kind of
container as unsafe, and they will not consider new container escape exploits
to be security issues worthy of a CVE and quick fix. That's why privileged
containers should only be used in trusted environments.

WARNING: Although it is not recommended, AppArmor can be disabled for a
container. This brings security risks with it. Some syscalls can lead to
privilege escalation when executed within a container if the system is
misconfigured or if an LXC or Linux Kernel vulnerability exists.

To disable AppArmor for a container, add the following line to the container
configuration file located at `/etc/pve/lxc/CTID.conf`:

----
lxc.apparmor_profile = unconfined
----

Please note that this is not recommended for production use.


[[pct_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-ct-cpu.png"]

You can restrict the number of visible CPUs inside the container using the
`cores` option. This is implemented using the Linux 'cpuset' cgroup
(**c**ontrol *group*). A special task inside `pvestatd` tries to distribute
running containers among available CPUs. To view the assigned CPUs run
the following command:

----
# pct cpusets
 ---------------------
 102: 6 7
 105: 2 3 4 5
 108: 0 1
 ---------------------
----

Containers use the host kernel directly. All tasks inside a container are
handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
**F**air **S**cheduler) scheduler by default, which has additional bandwidth
control options.

[horizontal]

`cpulimit`: :: You can use this option to further limit assigned CPU
time. Please note that this is a floating point number, so it is
perfectly valid to assign two cores to a container, but restrict
overall CPU consumption to half a core.
+
----
cores: 2
cpulimit: 0.5
----

`cpuunits`: :: This is a relative weight passed to the kernel
scheduler. The larger the number is, the more CPU time this container
gets. The number is relative to the weights of all the other running
containers. The default is 1024. You can use this setting to
prioritize some containers.


[[pct_memory]]
Memory
~~~~~~

[thumbnail="screenshot/gui-create-ct-memory.png"]

Container memory is controlled using the cgroup memory controller.

[horizontal]

`memory`: :: Limit overall memory usage. This corresponds
to the `memory.limit_in_bytes` cgroup setting.

`swap`: :: Allows the container to use additional swap memory from the
host swap space. This corresponds to the `memory.memsw.limit_in_bytes`
cgroup setting, which is set to the sum of both values (`memory +
swap`).

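For example, a container limited to 1 GiB of RAM plus 512 MiB of swap (the
values are illustrative) would have the following lines in its configuration:

----
memory: 1024
swap: 512
----
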
[[pct_mount_points]]
Mount Points
~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-root-disk.png"]

The root mount point is configured with the `rootfs` property. You can
configure up to 256 additional mount points. The corresponding options
are called `mp0` to `mp255`. They can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are three types of mount points: storage backed
mount points, bind mounts, and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----


Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
  system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
  and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
  image a directory is created.

NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
mount point volumes will automatically allocate a volume of the specified size
on the specified storage. For example, calling
`pct set 100 -mp0 thin1:10,mp=/path/in/container` will allocate a 10GB volume
on the storage `thin1` and replace the volume ID placeholder `10` with the
allocated volume ID.


Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are considered to not be managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping, and cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established
using source directories especially reserved for this purpose, e.g., a
directory hierarchy under `/mnt/bindmounts`. Never bind mount system
directories like `/`, `/var` or `/etc` into a container - this poses a
great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.


Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow mounting block devices of the host directly into the
container. Similar to bind mounts, device mounts are not managed by {PVE}'s
storage subsystem, but the `quota` and `acl` options will be honored.

NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using `vzdump`.

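As a sketch, assuming a host block device `/dev/sdb1` (hypothetical) should be
available at `/mnt/device-data` inside container `100`, the device path is
simply passed as the volume:

----
# pct set 100 -mp0 /dev/sdb1,mp=/mnt/device-data
----
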
[[pct_container_network]]
Network
~~~~~~~

[thumbnail="screenshot/gui-create-ct-network.png"]

You can configure up to 10 network interfaces for a single
container. The corresponding options are called `net0` to `net9`, and
they can contain the following settings:

include::pct-network-opts.adoc[]


[[pct_startup_and_shutdown]]
Automatic Start and Shutdown of Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To automatically start a container when the host system boots, select the
option 'Start at boot' in the 'Options' panel of the container in the web
interface or run the following command:

----
# pct set CTID -onboot 1
----

.Start and Shutdown Order
// use the screenshot from qemu - its the same
[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

If you want to fine tune the boot order of your containers, you can use the
following parameters:

* *Start/Shutdown order*: Defines the start order priority. For example, set it
to 1 if you want the CT to be the first to be started. (We use the reverse
startup order for shutdown, so a container with a start order of 1 would be the
last to be shut down.)
* *Startup delay*: Defines the interval between this container start and
subsequent containers starts. For example, set it to 240 if you want to wait
240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the container to be offline after issuing a shutdown command.
By default this value is set to 60, which means that {pve} will issue a
shutdown request, wait 60s for the machine to be offline, and if after 60s
the machine is still online will notify that the shutdown action failed.

Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set, and this parameter only
makes sense between the machines running locally on a host, and not
cluster-wide.

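As an illustrative sketch (the container ID and values are hypothetical), these
parameters map to the `startup` option and can be set together:

----
# pct set 101 -startup order=1,up=30,down=60
----
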
Hookscripts
~~~~~~~~~~~

You can add a hook script to CTs with the config property `hookscript`.

----
# pct set 100 -hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime.
For an example and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.

Backup and Restore
------------------


Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please
refer to the `vzdump` manual page for details.

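As a sketch (the container ID, mode and storage name are illustrative), a
snapshot-mode backup of container `100` to storage `local` could look like:

----
# vzdump 100 --mode snapshot --storage local
----
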

Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the
`pct restore` command. By default, `pct restore` will attempt to restore as much
of the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the command
line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:


``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters
are explicitly set, the mount point configuration from the backed up
configuration file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
`storage` parameter, or default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.


``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only
uses the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore time,
for example:

* Set target storages, volume sizes and other options for each mount point
individually
* Redistribute backed up files according to new mount point scheme
* Restore to device and/or bind mount points (limited to root user)

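As an illustrative sketch (the new container ID, archive name, storages and
sizes are all hypothetical), an advanced-mode restore could place the root file
system and one mount point on different storages:

----
# pct restore 123 local:backup/vzdump-lxc-100.tar.gz \
    -rootfs local-lvm:8 \
    -mp0 thin1:16,mp=/var/lib/data
----
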
Managing Containers with `pct`
------------------------------

The ``Proxmox Container Toolkit'' (`pct`) is the command line tool to manage {pve}
containers. It enables you to create or destroy containers, as well as control the
container execution (start, stop, reboot, migrate, etc.). It can be used to set
parameters in the config file of a container, for example the network
configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have
already downloaded the template via the web interface)

----
# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
----

Start container 100

----
# pct start 100
----

Start a login session via getty

----
# pct console 100
----

Enter the LXC namespace and run a shell as root user

----
# pct enter 100
----

Display the configuration

----
# pct config 100
----

Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
set the address and gateway, while it's running

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
----

Reduce the memory of the container to 512MB

----
# pct set 100 -memory 512
----


Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by running `lxc-start` (replace `ID` with
the container's ID):

----
# lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
----

This command will attempt to start the container in foreground mode. To stop
the container run `pct shutdown ID` or `pct stop ID` in a second terminal.

The collected debug log is written to `/tmp/lxc-ID.log`.

NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.

[[pct_migration]]
Migration
---------

If you have a cluster, you can migrate your Containers with

----
# pct migrate <ctid> <target>
----

This works as long as your Container is offline. If it has local volumes or
mount points defined, the migration will copy the content over the network to
the target host if the same storage is defined there.

If you want to migrate online Containers, the only way is to use
restart migration. This can be initiated with the `-restart` flag and the
optional `-timeout` parameter.

A restart migration will shut down the Container and kill it after the specified
timeout (the default is 180 seconds). Then it will migrate the Container
like an offline migration, and when finished, it starts the Container on the
target node.
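
For example (the target node name and timeout are illustrative), a restart
migration of container `100` with a 120 second timeout could look like:

----
# pct migrate 100 node2 -restart -timeout 120
----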

[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
where `<CTID>` is the numeric ID of the given container. Like all
other files stored inside `/etc/pve/`, they get automatically
replicated to all other cluster nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

The configuration files are simple text files. You can edit them
using a normal text editor (`vi`, `nano`, etc.). This is sometimes
useful to do small corrections, but keep in mind that you need to
restart the container to apply such changes.

For that reason, it is usually better to use the `pct` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running containers. This feature is called ``hot plug'', and there is no
need to restart the container in that case.

In cases where a change cannot be hot plugged, it will be registered
as a pending change (shown in red color in the GUI). It will only
be applied after rebooting the container.

File Format
~~~~~~~~~~~

The container configuration file uses a simple colon separated
key/value format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for
example:

----
lxc.init_cmd: /sbin/my_own_init
----

or

----
lxc.init_cmd = /sbin/my_own_init
----

The settings are passed directly to the LXC low-level tools.


[[pct_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.Container configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).

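Snapshots are created and managed via `pct` as well. As a sketch (the container
ID and snapshot name are illustrative):

----
# pct snapshot 100 testsnapshot
# pct listsnapshot 100
# pct rollback 100 testsnapshot
----
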
[[pct_options]]
Options
~~~~~~~

include::pct.conf.5-opts.adoc[]


Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to
prevent incompatible concurrent actions on the affected container. Sometimes
you need to remove such a lock manually (e.g., after a power failure).

----
# pct unlock <CTID>
----

CAUTION: Only do this if you are sure the action which set the lock is
no longer running.

ifdef::manvolnum[]

Files
-----

`/etc/pve/lxc/<CTID>.conf`::

Configuration file for the container '<CTID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]