[[chapter_pct]]
ifdef::manvolnum[]
pct(1)
======
:pve-toplevel:

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE


SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Linux Container
endif::wiki[]

Containers are a lightweight alternative to fully virtualized machines (VMs).
They use the kernel of the host system that they run on, instead of emulating a
full operating system (OS). This means that containers can access resources on
the host system directly.

The runtime costs for containers are low, usually negligible. However, there
are some drawbacks that need to be considered:

* Only Linux distributions can be run in containers. It is not possible to run
  other operating systems, such as FreeBSD or Microsoft Windows, inside a
  container.

* For security reasons, access to host resources needs to be restricted.
  Containers run in their own separate namespaces. Additionally, some syscalls
  are not allowed within containers.

{pve} uses https://linuxcontainers.org/[Linux Containers (LXC)] as its
underlying container technology. The ``Proxmox Container Toolkit'' (`pct`)
simplifies the usage and management of LXC containers.

Containers are tightly integrated with {pve}. This means that they are aware of
the cluster setup, and they can use the same network and storage resources as
virtual machines. You can also use the {pve} firewall, or manage containers
using the HA framework.

Our primary goal is to offer an environment as one would get from a VM, but
without the additional overhead. We call this ``System Containers''.

NOTE: If you want to run micro-containers, for example, 'Docker' or 'rkt', it
is best to run them inside a VM.


Technology Overview
-------------------

* LXC (https://linuxcontainers.org/)

* Integrated into {pve} graphical web user interface (GUI)

* Easy to use command line tool `pct`

* Access via {pve} REST API

* 'lxcfs' to provide containerized /proc file system

* Control groups ('cgroups') for resource isolation and limitation

* 'AppArmor' and 'seccomp' to improve security

* Modern Linux kernels

* Image based deployment (templates)

* Uses {pve} xref:chapter_storage[storage library]

* Container setup from host (network, DNS, storage, etc.)


Security Considerations
-----------------------

Containers use the kernel of the host system. This creates a big attack surface
for malicious users. This should be considered if containers are provided to
untrustworthy people. In general, full virtual machines provide better
isolation.

However, LXC uses many security features like AppArmor, CGroups and kernel
namespaces to reduce the attack surface.

AppArmor profiles are used to restrict access to possibly dangerous actions.
Some system calls, e.g. `mount`, are prohibited from execution.

To trace AppArmor activity, use:

----
# dmesg | grep apparmor
----

Guest Operating System Configuration
------------------------------------

{pve} tries to detect the Linux distribution in the container, and modifies
some files. Here is a short list of things done at container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all
containers

Changes made by {PVE} are enclosed by comment markers:

----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the file. If such a
section already exists, it will be updated in place and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.` file for it.
For instance, if the file `/etc/.pve-ignore.hosts` exists then the `/etc/hosts`
file will not be touched. This can be a simple empty file created via:

----
# touch /etc/.pve-ignore.hosts
----

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications by
manually setting the `ostype` to `unmanaged`.
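
For example, assuming a container with the hypothetical CTID 100, all of these
modifications could be turned off with:

----
# pct set 100 -ostype unmanaged
----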

OS type detection is done by testing for certain files inside the
container:

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the
auto-detected type.


[[pct_container_images]]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything needed to run a
container. `pct` uses them to create a new container, for example:

----
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

{pve} itself provides a variety of basic templates for the most common Linux
distributions. They can be downloaded using the GUI or the `pveam` (short for
{pve} Appliance Manager) command line utility.
Additionally, https://www.turnkeylinux.org/[TurnKey Linux] container templates
are also available to download.

The list of available templates is updated daily via cron. To trigger it
manually:

----
# pveam update
----

To view the list of available images run:

----
# pveam available
----

You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system alpine-3.10-default_20190626_amd64.tar.xz
system alpine-3.9-default_20190224_amd64.tar.xz
system archlinux-base_20190924-1_amd64.tar.gz
system centos-6-default_20191016_amd64.tar.xz
system centos-7-default_20190926_amd64.tar.xz
system centos-8-default_20191016_amd64.tar.xz
system debian-10.0-standard_10.0-1_amd64.tar.gz
system debian-8.0-standard_8.11-1_amd64.tar.gz
system debian-9.0-standard_9.7-1_amd64.tar.gz
system fedora-30-default_20190718_amd64.tar.xz
system fedora-31-default_20191029_amd64.tar.xz
system gentoo-current-default_20190718_amd64.tar.xz
system opensuse-15.0-default_20180907_amd64.tar.xz
system opensuse-15.1-default_20190719_amd64.tar.xz
system ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
system ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
system ubuntu-19.04-standard_19.04-1_amd64.tar.gz
system ubuntu-19.10-standard_19.10-1_amd64.tar.gz
----

Before you can use such a template, you need to download it into one of your
storages. You can simply use storage `local` for that purpose. For clustered
installations, it is preferred to use a shared storage so that all nodes can
access those images.

----
# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
----

You are now ready to create containers using that image, and you can list all
downloaded images on storage `local` with:

----
# pveam list local
local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz  219.95MB
----

The above command shows you the full {pve} volume identifiers. They include the
storage name, and most other {pve} commands can use them. For example you can
delete that image later with:

----
# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

[[pct_container_storage]]
Container Storage
-----------------

The {pve} LXC container storage model is more flexible than traditional
container storage models. A container can have multiple mount points. This
makes it possible to use the best suited storage for each application.

For example the root file system of the container can be on slow and cheap
storage while the database can be on fast and distributed storage via a second
mount point. See section <<pct_mount_points, Mount Points>> for further
details.

Any storage type supported by the {pve} storage library can be used. This means
that containers can be stored on local (for example `lvm`, `zfs` or directory),
shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
Ceph. Advanced storage features like snapshots or clones can be used if the
underlying storage supports them. The `vzdump` backup tool can use snapshots to
provide consistent container backups.

Furthermore, local devices or local directories can be mounted directly using
'bind mounts'. This gives access to local resources inside a container with
practically zero overhead. Bind mounts can be used as an easy way to share data
between containers.


FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer subsystem,
the usage of FUSE mounts inside a container is strongly advised against, as
containers need to be frozen for suspend or snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.


Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk space
that each user can use.

NOTE: This only works on ext4 image based storage types and currently only
works with privileged containers.

Activating the `quota` option causes the following mount options to be used for
a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like on any other system. You can initialize the
`/aquota.user` and `/aquota.group` files by running:

----
# quotacheck -cmug /
# quotaon /
----

Then edit the quotas using the `edquota` command. Refer to the documentation of
the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing the
mount point's path instead of just `/`.


Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.
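
As a sketch, assuming the ACL utilities (`setfacl`, `getfacl`) are installed
inside the container and the mount point has the `acl` option enabled, an extra
user (here the hypothetical user `backup` and an example file path) can be
granted read access to a single file without changing its group:

----
# touch /srv/report.txt
# setfacl -m u:backup:r /srv/report.txt
# getfacl /srv/report.txt
----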


Backup of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To include a mount point in backups, enable the `backup` option for it in the
container configuration. For an existing mount point `mp0`

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
----

add `backup=1` to enable it.

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----

NOTE: When creating a new mount point in the GUI, this option is enabled by
default.

To disable backups for a mount point, add `backup=0` in the way described
above, or uncheck the *Backup* checkbox on the GUI.

Replication of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, additional mount points are replicated when the Root Disk is
replicated. If you want the {pve} storage replication mechanism to skip a mount
point, you can set the *Skip replication* option for that mount point.
As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
mount point to a different type of storage when the container has replication
configured requires *Skip replication* to be enabled for that mount point.
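
For example, a hypothetical mount point entry in `/etc/pve/lxc/<CTID>.conf`
with replication skipped would carry `replicate=0` (storage and path names
below are examples):

----
mp0: local-zfs:subvol-100-disk-1,mp=/mnt/data,replicate=0
----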

[[pct_settings]]
Container Settings
------------------

[[pct_general]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-general.png"]

General settings of a container include

* the *Node*: the physical server on which the container will run
* the *CT ID*: a unique number in this {pve} installation used to identify your
  container
* *Hostname*: the hostname of the container
* *Resource Pool*: a logical group of containers and VMs
* *Password*: the root password of the container
* *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
  whether to create a privileged or unprivileged container.

Unprivileged Containers
^^^^^^^^^^^^^^^^^^^^^^^

Unprivileged containers use a new kernel feature called user namespaces.
The root UID 0 inside the container is mapped to an unprivileged user outside
the container. This means that most security issues (container escape, resource
abuse, etc.) in these containers will affect a random unprivileged user, and
would be a generic kernel security bug rather than an LXC issue. The LXC team
thinks unprivileged containers are safe by design.

This is the default option when creating a new container.

NOTE: If the container uses systemd as an init system, please be aware the
systemd version running inside the container should be equal to or greater than
220.


Privileged Containers
^^^^^^^^^^^^^^^^^^^^^

Security in containers is achieved by using mandatory access control
('AppArmor'), 'seccomp' filters and namespaces. The LXC team considers this
kind of container as unsafe, and they will not consider new container escape
exploits to be security issues worthy of a CVE and quick fix. That's why
privileged containers should only be used in trusted environments.

Although it is not recommended, AppArmor can be disabled for a container. This
brings security risks with it. Some syscalls can lead to privilege escalation
when executed within a container if the system is misconfigured or if an LXC or
Linux Kernel vulnerability exists.

To disable AppArmor for a container, add the following line to the container
configuration file located at `/etc/pve/lxc/CTID.conf`:

----
lxc.apparmor_profile = unconfined
----

WARNING: Please note that this is not recommended for production use.


[[pct_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-ct-cpu.png"]

You can restrict the number of visible CPUs inside the container using the
`cores` option. This is implemented using the Linux 'cpuset' cgroup
(**c**ontrol *group*).
A special task inside `pvestatd` tries to distribute running containers among
available CPUs periodically.
To view the assigned CPUs run the following command:

----
# pct cpusets
 ---------------------
 102:              6 7
 105:      2 3 4 5
 108:  0 1
 ---------------------
----

Containers use the host kernel directly. All tasks inside a container are
handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
**F**air **S**cheduler) scheduler by default, which has additional bandwidth
control options.

[horizontal]

`cpulimit`: :: You can use this option to further limit assigned CPU time.
Please note that this is a floating point number, so it is perfectly valid to
assign two cores to a container, but restrict overall CPU consumption to half a
core.
+
----
cores: 2
cpulimit: 0.5
----

`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
larger the number is, the more CPU time this container gets. The number is
relative to the weights of all the other running containers. The default is
1024. You can use this setting to prioritize some containers.


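For example, assuming a hypothetical container 100 that should be prioritized
over its neighbors, doubling its weight relative to the default of 1024:

----
# pct set 100 -cpuunits 2048
----
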
[[pct_memory]]
Memory
~~~~~~

[thumbnail="screenshot/gui-create-ct-memory.png"]

Container memory is controlled using the cgroup memory controller.

[horizontal]

`memory`: :: Limit overall memory usage. This corresponds to the
`memory.limit_in_bytes` cgroup setting.

`swap`: :: Allows the container to use additional swap memory from the host
swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
setting, which is set to the sum of both values (`memory + swap`).
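
For example, with the following hypothetical values the container may use up to
1024 MiB of RAM, while `memory.memsw.limit_in_bytes` is set to the sum of both,
1536 MiB:

----
memory: 1024
swap: 512
----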


[[pct_mount_points]]
Mount Points
~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-root-disk.png"]

The root mount point is configured with the `rootfs` property. You can
configure up to 256 additional mount points. The corresponding options are
called `mp0` to `mp255`. They can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are three types of mount points: storage backed mount points,
bind mounts, and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----


Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
  system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
  and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
  image a directory is created.

NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
mount point volumes will automatically allocate a volume of the specified size
on the specified storage. For example, calling

----
pct set 100 -mp0 thin1:10,mp=/path/in/container
----

will allocate a 10GB volume on the storage `thin1`, replace the volume ID
placeholder `10` with the allocated volume ID, and set up the mount point in
the container at `/path/in/container`.


Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are not managed by the storage subsystem, so you cannot make
snapshots or deal with quotas from inside the container. With unprivileged
containers you might run into permission problems caused by the user mapping,
and cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established using
source directories especially reserved for this purpose, e.g., a directory
hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
`/`, `/var` or `/etc` into a container - this poses a great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.


Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow mounting block devices of the host directly into the
container. Similar to bind mounts, device mounts are not managed by {PVE}'s
storage subsystem, but the `quota` and `acl` options will be honored.

NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using
`vzdump`.

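As a sketch, assuming the hypothetical host block device `/dev/sdb1` should
appear at `/mnt/device-data` in container 100, a device mount point uses the
device path in place of a storage backed volume:

----
# pct set 100 -mp0 /dev/sdb1,mp=/mnt/device-data
----
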
[[pct_container_network]]
Network
~~~~~~~

[thumbnail="screenshot/gui-create-ct-network.png"]

You can configure up to 10 network interfaces for a single container.
The corresponding options are called `net0` to `net9`, and they can contain the
following settings:

include::pct-network-opts.adoc[]


[[pct_startup_and_shutdown]]
Automatic Start and Shutdown of Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To automatically start a container when the host system boots, select the
option 'Start at boot' in the 'Options' panel of the container in the web
interface or run the following command:

----
# pct set CTID -onboot 1
----

.Start and Shutdown Order
// use the screenshot from qemu - its the same
[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

If you want to fine tune the boot order of your containers, you can use the
following parameters:

* *Start/Shutdown order*: Defines the start order priority. For example, set it
  to 1 if you want the CT to be the first to be started. (We use the reverse
  startup order for shutdown, so a container with a start order of 1 would be
  the last to be shut down.)
* *Startup delay*: Defines the interval between this container start and
  subsequent containers starts. For example, set it to 240 if you want to wait
  240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
  for the container to be offline after issuing a shutdown command.
  By default this value is set to 60, which means that {pve} will issue a
  shutdown request, wait 60s for the machine to be offline, and if the machine
  is still online after 60s, it will report that the shutdown action failed.
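
These parameters are stored together in the `startup` property of the
container. For example, assuming container 100 should start first, delay
subsequent guests by 30 seconds, and get 60 seconds to shut down (all values
are examples):

----
# pct set 100 -startup order=1,up=30,down=60
----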

Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set. Furthermore, this
parameter only applies to machines running locally on a host, and not
cluster-wide.


Hookscripts
~~~~~~~~~~~

You can add a hook script to CTs with the config property `hookscript`.

----
# pct set 100 -hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime. For an example
and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.

Backup and Restore
------------------


Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please refer to
the `vzdump` manual page for details.
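
A minimal sketch, assuming a storage named `local` that is configured to hold
backups:

----
# vzdump 100 --mode snapshot --storage local
----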


Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the `pct
restore` command. By default, `pct restore` will attempt to restore as much of
the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the
command line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:


``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters are
explicitly set, the mount point configuration from the backed up configuration
file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
  `storage` parameter, or default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root
  user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.


``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only uses
the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore
time, for example:

* Set target storages, volume sizes and other options for each mount point
  individually
* Redistribute backed up files according to new mount point scheme
* Restore to device and/or bind mount points (limited to root user)

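For example, the following hypothetical invocation (the archive path and CTID
are examples) restores a backup to the new CTID 200, placing the root file
system as a newly allocated 16GB volume on storage `local-lvm` while ignoring
the mount point configuration in the archive:

----
# pct restore 200 /var/lib/vz/dump/vzdump-lxc-100.tar --rootfs local-lvm:16
----
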

Managing Containers with `pct`
------------------------------

The ``Proxmox Container Toolkit'' (`pct`) is the command line tool to manage
{pve} containers. It enables you to create or destroy containers, as well as
control the container execution (start, stop, reboot, migrate, etc.). It can be
used to set parameters in the config file of a container, for example the
network configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have already
downloaded the template via the web interface)

----
# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
----

Start container 100

----
# pct start 100
----

Start a login session via getty

----
# pct console 100
----

Enter the LXC namespace and run a shell as root user

----
# pct enter 100
----

Display the configuration

----
# pct config 100
----

Add a network interface called `eth0`, bridged to the host bridge `vmbr0`, set
the address and gateway, while it's running

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
----

Reduce the memory of the container to 512MB

----
# pct set 100 -memory 512
----


Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by running `lxc-start` (replace `ID` with
the container's ID):

----
# lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
----

This command will attempt to start the container in foreground mode. To stop
the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.

The collected debug log is written to `/tmp/lxc-ID.log`.

NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.

33f50e04
DC
791[[pct_migration]]
792Migration
793---------
794
795If you have a cluster, you can migrate your Containers with
796
14e97811
OB
797----
798# pct migrate <ctid> <target>
799----
33f50e04
DC
800
801This works as long as your Container is offline. If it has local volumes or
14e97811 802mount points defined, the migration will copy the content over the network to
ba021358 803the target host if the same storage is defined there.
33f50e04 804
69ab602f
TL
805If you want to migrate online Containers, the only way is to use restart
806migration. This can be initiated with the -restart flag and the optional
33f50e04
DC
807-timeout parameter.
808
69ab602f
TL
809A restart migration will shut down the Container and kill it after the
810specified timeout (the default is 180 seconds). Then it will migrate the
811Container like an offline migration and when finished, it starts the Container
812on the target node.
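
For example, to restart-migrate a hypothetical container 100 to a node named
`targetnode`, allowing up to 120 seconds for the shutdown:

----
# pct migrate 100 targetnode -restart -timeout 120
----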

[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores the container configuration, where
`<CTID>` is the numeric ID of the given container. Like all other files stored
inside `/etc/pve/`, they get automatically replicated to all other cluster
nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster-wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

The configuration files are simple text files. You can edit them using a
normal text editor (`vi`, `nano`, etc.). This is sometimes useful to make
small corrections, but keep in mind that you need to restart the container to
apply such changes.

For that reason, it is usually better to use the `pct` command to generate and
modify those files, or do the whole thing using the GUI. Our toolkit is smart
enough to instantaneously apply most changes to running containers. This
feature is called "hot plug", and there is no need to restart the container in
that case.

In cases where a change cannot be hot plugged, it will be registered as a
pending change (shown in red in the GUI). Pending changes are only applied
after rebooting the container.


File Format
~~~~~~~~~~~

The container configuration file uses a simple, colon-separated key/value
format. Each line has the following format:

----
# this is a comment
OPTION: value
----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.
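
As an illustration of these parsing rules, the following short shell sketch
(hypothetical file name and values, not part of `pct`) builds a sample file in
this format and filters out exactly the ignored lines:

----
# Build a sample config in the documented format (hypothetical values)
cat > /tmp/sample-ct.conf <<'EOF'
# this is a comment
memory: 512

swap: 512
EOF

# Drop blank lines and comments, keeping only effective OPTION: value pairs
grep -Ev '^[[:space:]]*(#|$)' /tmp/sample-ct.conf
----

Here only the `memory` and `swap` lines survive the filter.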

It is possible to add low-level, LXC-style configuration directly, for
example:

----
lxc.init_cmd: /sbin/my_own_init
----

or

----
lxc.init_cmd = /sbin/my_own_init
----

The settings are passed directly to the LXC low-level tools.

[[pct_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot time
into a separate snapshot section within the same configuration file. For
example, after creating a snapshot called ``testsnapshot'', your configuration
file will look like this:

.Container configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot-related properties like `parent` and `snaptime`. The
`parent` property is used to store the parent/child relationship between
snapshots. `snaptime` is the snapshot creation time stamp (Unix epoch).
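
Snapshots are normally created and managed through the matching `pct`
subcommands rather than by editing this section manually. For a hypothetical
container 100:

----
# pct snapshot 100 testsnapshot
# pct listsnapshot 100
# pct rollback 100 testsnapshot
----

`pct snapshot` creates the snapshot, `pct listsnapshot` shows the snapshot
tree, and `pct rollback` returns the container to the saved state. A snapshot
can be removed again with `pct delsnapshot`.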


[[pct_options]]
Options
~~~~~~~

include::pct.conf.5-opts.adoc[]


Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to prevent
incompatible concurrent actions on the affected container. Sometimes you need
to remove such a lock manually (e.g., after a power failure).

----
# pct unlock <CTID>
----

CAUTION: Only do this if you are sure the action which set the lock is no
longer running.
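
Before unlocking, you can check which operation set the lock: the `lock`
property is part of the container configuration, so it shows up in the `pct
config` output, for example:

----
# pct config <CTID> | grep '^lock'
----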
2a11aa70 930
fe57a420 931
0c6b782f 932ifdef::manvolnum[]
3bd9d0cf
DM
933
934Files
935------
936
937`/etc/pve/lxc/<CTID>.conf`::
938
939Configuration file for the container '<CTID>'.
940
941
0c6b782f
DM
942include::pve-copyright.adoc[]
943endif::manvolnum[]