[[chapter_pct]]
ifdef::manvolnum[]
pct(1)
======
:pve-toplevel:

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE


SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Linux Container
endif::wiki[]

Containers are a lightweight alternative to fully virtualized machines (VMs).
They use the kernel of the host system that they run on, instead of emulating a
full operating system (OS). This means that containers can access resources on
the host system directly.

The runtime costs for containers are low, usually negligible. However, there
are some drawbacks that need to be considered:

* Only Linux distributions can be run in Proxmox Containers. It is not possible
  to run other operating systems, such as FreeBSD or Microsoft Windows, inside
  a container.

* For security reasons, access to host resources needs to be restricted.
  Therefore, containers run in their own separate namespaces. Additionally,
  some syscalls (user space requests to the Linux kernel) are not allowed
  within containers.

{pve} uses https://linuxcontainers.org/lxc/introduction/[Linux Containers
(LXC)] as its underlying container technology. The ``Proxmox Container
Toolkit'' (`pct`) simplifies the usage and management of LXC by providing an
interface that abstracts complex tasks.

Containers are tightly integrated with {pve}. This means that they are aware of
the cluster setup, and they can use the same network and storage resources as
virtual machines. You can also use the {pve} firewall, or manage containers
using the HA framework.

Our primary goal is to offer an environment that provides the benefits of using
a VM, but without the additional overhead. This means that Proxmox Containers
can be categorized as ``System Containers'', rather than ``Application
Containers''.

NOTE: If you want to run application containers, for example 'Docker' images,
it is recommended that you run them inside a Proxmox QEMU VM. This will give
you all the advantages of application containerization, while also providing
the benefits that VMs offer, such as strong isolation from the host and the
ability to live-migrate, which otherwise isn't possible with containers.


Technology Overview
-------------------

* LXC (https://linuxcontainers.org/)

* Integrated into {pve} graphical web user interface (GUI)

* Easy to use command line tool `pct`

* Access via {pve} REST API

* 'lxcfs' to provide containerized /proc file system

* Control groups ('cgroups') for resource isolation and limitation

* 'AppArmor' and 'seccomp' to improve security

* Modern Linux kernels

* Image based deployment (xref:pct_supported_distributions[templates])

* Uses {pve} xref:chapter_storage[storage library]

* Container setup from host (network, DNS, storage, etc.)

[[pct_supported_distributions]]
Supported Distributions
-----------------------

A list of officially supported distributions can be found below.

Templates for the following distributions are available through our
repositories. You can use the xref:pct_container_images[pveam] tool or the
graphical user interface to download them.

Alpine Linux
~~~~~~~~~~~~

[quote, 'https://alpinelinux.org']
____
Alpine Linux is a security-oriented, lightweight Linux distribution based on
musl libc and busybox.
____

For currently supported releases see:

https://alpinelinux.org/releases/

Arch Linux
~~~~~~~~~~

[quote, 'https://archlinux.org/']
____
Arch Linux, a lightweight and flexible Linux® distribution that tries to Keep
It Simple.
____

Arch Linux uses a rolling-release model; see its wiki for more details:

https://wiki.archlinux.org/title/Arch_Linux

CentOS, AlmaLinux, Rocky Linux
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

CentOS / CentOS Stream
^^^^^^^^^^^^^^^^^^^^^^

[quote, 'https://centos.org']
____
The CentOS Linux distribution is a stable, predictable, manageable and
reproducible platform derived from the sources of Red Hat Enterprise Linux
(RHEL)
____

For currently supported releases see:

https://wiki.centos.org/About/Product

AlmaLinux
^^^^^^^^^

[quote, 'https://almalinux.org']
____
An Open Source, community owned and governed, forever-free enterprise Linux
distribution, focused on long-term stability, providing a robust
production-grade platform. AlmaLinux OS is 1:1 binary compatible with RHEL® and
pre-Stream CentOS.
____

For currently supported releases see:

https://en.wikipedia.org/wiki/AlmaLinux#Releases

Rocky Linux
^^^^^^^^^^^

[quote, 'https://rockylinux.org']
____
Rocky Linux is a community enterprise operating system designed to be 100%
bug-for-bug compatible with America's top enterprise Linux distribution now
that its downstream partner has shifted direction.
____

For currently supported releases see:

https://en.wikipedia.org/wiki/Rocky_Linux#Releases

Debian
~~~~~~

[quote, 'https://www.debian.org/intro/index#software']
____
Debian is a free operating system, developed and maintained by the Debian
project. A free Linux distribution with thousands of applications to meet our
users' needs.
____

For currently supported releases see:

https://www.debian.org/releases/stable/releasenotes

Devuan
~~~~~~

[quote, 'https://www.devuan.org']
____
Devuan GNU+Linux is a fork of Debian without systemd that allows users to
reclaim control over their system by avoiding unnecessary entanglements and
ensuring Init Freedom.
____

For currently supported releases see:

https://www.devuan.org/os/releases

Fedora
~~~~~~

[quote, 'https://getfedora.org']
____
Fedora creates an innovative, free, and open source platform for hardware,
clouds, and containers that enables software developers and community members
to build tailored solutions for their users.
____

For currently supported releases see:

https://fedoraproject.org/wiki/Releases

Gentoo
~~~~~~

[quote, 'https://www.gentoo.org']
____
a highly flexible, source-based Linux distribution.
____

Gentoo uses a rolling-release model.

OpenSUSE
~~~~~~~~

[quote, 'https://www.opensuse.org']
____
The makers' choice for sysadmins, developers and desktop users.
____

For currently supported releases see:

https://get.opensuse.org/leap/

Ubuntu
~~~~~~

[quote, 'https://ubuntu.com/']
____
Ubuntu is the modern, open source operating system on Linux for the enterprise
server, desktop, cloud, and IoT.
____

For currently supported releases see:

https://wiki.ubuntu.com/Releases

[[pct_container_images]]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything needed to run a
container.

{pve} itself provides a variety of basic templates for the
xref:pct_supported_distributions[most common Linux distributions]. They can be
downloaded using the GUI or the `pveam` (short for {pve} Appliance Manager)
command line utility. Additionally, https://www.turnkeylinux.org/[TurnKey
Linux] container templates are also available to download.

The list of available templates is updated daily through the 'pve-daily-update'
timer. You can also trigger an update manually by executing:

----
# pveam update
----

To view the list of available images run:

----
# pveam available
----

You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system          alpine-3.12-default_20200823_amd64.tar.xz
system          alpine-3.13-default_20210419_amd64.tar.xz
system          alpine-3.14-default_20210623_amd64.tar.xz
system          archlinux-base_20210420-1_amd64.tar.gz
system          centos-7-default_20190926_amd64.tar.xz
system          centos-8-default_20201210_amd64.tar.xz
system          debian-9.0-standard_9.7-1_amd64.tar.gz
system          debian-10-standard_10.7-1_amd64.tar.gz
system          devuan-3.0-standard_3.0_amd64.tar.gz
system          fedora-33-default_20201115_amd64.tar.xz
system          fedora-34-default_20210427_amd64.tar.xz
system          gentoo-current-default_20200310_amd64.tar.xz
system          opensuse-15.2-default_20200824_amd64.tar.xz
system          ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
system          ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
system          ubuntu-20.04-standard_20.04-1_amd64.tar.gz
system          ubuntu-20.10-standard_20.10-1_amd64.tar.gz
system          ubuntu-21.04-standard_21.04-1_amd64.tar.gz
----

Before you can use such a template, you need to download it into one of your
storages. If you are unsure which one to use, you can simply use the storage
named `local` for that purpose. For clustered installations, it is preferable
to use a shared storage, so that all nodes can access those images.

----
# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
----

You are now ready to create containers using that image, and you can list all
downloaded images on storage `local` with:

----
# pveam list local
local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz  219.95MB
----

TIP: You can also use the {pve} web interface to download, list and delete
container templates.

`pct` uses them to create a new container, for example:

----
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

The above command shows you the full {pve} volume identifier. It includes the
storage name, and most other {pve} commands can use it. For example, you can
delete that image later with:

----
# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----


[[pct_settings]]
Container Settings
------------------

[[pct_general]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-general.png"]

General settings of a container include

* the *Node*: the physical server on which the container will run
* the *CT ID*: a unique number in this {pve} installation used to identify your
  container
* *Hostname*: the hostname of the container
* *Resource Pool*: a logical group of containers and VMs
* *Password*: the root password of the container
* *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
  if you want to create a privileged or unprivileged container.

Unprivileged Containers
^^^^^^^^^^^^^^^^^^^^^^^

Unprivileged containers use a new kernel feature called user namespaces. The
root UID 0 inside the container is mapped to an unprivileged user outside the
container. This means that most security issues (container escape, resource
abuse, etc.) in these containers will affect a random unprivileged user, and
would be a generic kernel security bug rather than an LXC issue. The LXC team
thinks unprivileged containers are safe by design.

This is the default option when creating a new container.

NOTE: If the container uses systemd as an init system, please be aware the
systemd version running inside the container should be equal to or greater
than 220.

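Expressed as explicit 'LXC' id map entries, such a mapping could look like the
following sketch. The offset `100000` and range `65536` are shown for
illustration only; the default mapping is applied automatically and does not
need to be configured. Explicit `lxc.idmap` entries are only required for
custom mappings:

----
# illustrative id mapping, as it could appear in /etc/pve/lxc/CTID.conf
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536
----

Here, UID 0 inside the container corresponds to the unprivileged UID 100000 on
the host, so a process escaping the container holds no host privileges.
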

Privileged Containers
^^^^^^^^^^^^^^^^^^^^^

Security in containers is achieved by using mandatory access control
('AppArmor') restrictions, 'seccomp' filters and Linux kernel namespaces. The
LXC team considers this kind of container as unsafe, and they will not consider
new container escape exploits to be security issues worthy of a CVE and quick
fix. That's why privileged containers should only be used in trusted
environments.


[[pct_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-ct-cpu.png"]

You can restrict the number of visible CPUs inside the container using the
`cores` option. This is implemented using the Linux 'cpuset' cgroup
(**c**ontrol *group*). A special task inside `pvestatd` periodically tries to
distribute running containers among the available CPUs. To view the assigned
CPUs run the following command:

----
# pct cpusets
---------------------
102: 6 7
105: 2 3 4 5
108: 0 1
---------------------
----

Containers use the host kernel directly. All tasks inside a container are
handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
**F**air **S**cheduler) scheduler by default, which has additional bandwidth
control options.

[horizontal]

`cpulimit`: :: You can use this option to further limit assigned CPU time.
Please note that this is a floating point number, so it is perfectly valid to
assign two cores to a container, but restrict overall CPU consumption to half
a core.
+
----
cores: 2
cpulimit: 0.5
----

`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
larger the number is, the more CPU time this container gets. The number is
relative to the weights of all the other running containers. The default is
`100` (or `1024` if the host uses legacy cgroup v1). You can use this setting
to prioritize some containers.
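
For example, the following configuration sketch (values illustrative) would
give a container roughly twice the CPU time of other running containers left
at the cgroup v2 default weight of `100`:

----
cores: 2
cpuunits: 200
----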


[[pct_memory]]
Memory
~~~~~~

[thumbnail="screenshot/gui-create-ct-memory.png"]

Container memory is controlled using the cgroup memory controller.

[horizontal]

`memory`: :: Limit overall memory usage. This corresponds to the
`memory.limit_in_bytes` cgroup setting.

`swap`: :: Allows the container to use additional swap memory from the host
swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
setting, which is set to the sum of both values (`memory + swap`).

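As a sketch, the following settings (values illustrative) would limit a
container to 2048 MiB of RAM plus 1024 MiB of additional swap; the resulting
`memory.memsw.limit_in_bytes` would then correspond to their sum, 3072 MiB:

----
memory: 2048
swap: 1024
----
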

[[pct_mount_points]]
Mount Points
~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-root-disk.png"]

The root mount point is configured with the `rootfs` property. You can
configure up to 256 additional mount points. The corresponding options are
called `mp0` to `mp255`. They can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are three types of mount points: storage backed mount points,
bind mounts, and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----


Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
  system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
  and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
  image a directory is created.

NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
mount point volumes will automatically allocate a volume of the specified size
on the specified storage. For example, calling

----
pct set 100 -mp0 thin1:10,mp=/path/in/container
----

will allocate a 10GB volume on the storage `thin1`, replace the volume ID
placeholder `10` with the allocated volume ID, and set up the mount point in
the container at `/path/in/container`.


Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE
host inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are considered to not be managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping, and cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established using
source directories especially reserved for this purpose, e.g., a directory
hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
`/`, `/var` or `/etc` into a container - this poses a great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, add a configuration line
such as:

----
mp0: /mnt/bindmounts/shared,mp=/shared
----

into `/etc/pve/lxc/100.conf`.

Alternatively, use the `pct` tool:

----
pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared
----

to achieve the same result.


Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow you to mount block devices of the host directly into
the container. Similar to bind mounts, device mounts are not managed by
{PVE}'s storage subsystem, but the `quota` and `acl` options will be honored.

NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using
`vzdump`.

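For example, a device mount point passing a host block device into the
container could look like the following sketch (device path and mount path are
illustrative):

----
mp0: /dev/sdb1,mp=/mnt/device-data
----
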

[[pct_container_network]]
Network
~~~~~~~

[thumbnail="screenshot/gui-create-ct-network.png"]

You can configure up to 10 network interfaces for a single container. The
corresponding options are called `net0` to `net9`, and they can contain the
following settings:

include::pct-network-opts.adoc[]
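
For example, a basic bridged interface using DHCP could be configured like the
following sketch (interface and bridge names are illustrative):

----
net0: name=eth0,bridge=vmbr0,ip=dhcp,firewall=1
----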


[[pct_startup_and_shutdown]]
Automatic Start and Shutdown of Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To automatically start a container when the host system boots, select the
option 'Start at boot' in the 'Options' panel of the container in the web
interface, or run the following command:

----
# pct set CTID -onboot 1
----

.Start and Shutdown Order
// use the screenshot from qemu - its the same
[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

If you want to fine tune the boot order of your containers, you can use the
following parameters:

* *Start/Shutdown order*: Defines the start order priority. For example, set
  it to 1 if you want the CT to be the first to be started. (We use the
  reverse startup order for shutdown, so a container with a start order of 1
  would be the last to be shut down.)
* *Startup delay*: Defines the interval between this container start and
  subsequent container starts. For example, set it to 240 if you want to wait
  240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait for
  the container to be offline after issuing a shutdown command. By default
  this value is set to 60, which means that {pve} will issue a shutdown
  request, wait 60s for the machine to be offline, and if after 60s the
  machine is still online, will notify that the shutdown action failed.

Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set. Furthermore, this
parameter only makes sense between the machines running locally on a host,
and not cluster-wide.

If you require a delay between the host boot and the booting of the first
container, see the section on
xref:first_guest_boot_delay[Proxmox VE Node Management].


Hookscripts
~~~~~~~~~~~

You can add a hook script to CTs with the config property `hookscript`.

----
# pct set 100 -hookscript local:snippets/hookscript.pl
----

It will be called during various phases of the guest's lifetime. For an
example and documentation see the example script under
`/usr/share/pve-docs/examples/guest-example-hookscript.pl`.

Security Considerations
-----------------------

Containers use the kernel of the host system. This exposes an attack surface
for malicious users. In general, full virtual machines provide better
isolation. This should be considered if containers are provided to unknown or
untrusted people.

To reduce the attack surface, LXC uses many security features like AppArmor,
CGroups and kernel namespaces.

AppArmor
~~~~~~~~

AppArmor profiles are used to restrict access to possibly dangerous actions.
Some system calls, e.g. `mount`, are prohibited from execution.

To trace AppArmor activity, use:

----
# dmesg | grep apparmor
----

Although it is not recommended, AppArmor can be disabled for a container. This
brings security risks with it. Some syscalls can lead to privilege escalation
when executed within a container if the system is misconfigured or if an LXC
or Linux kernel vulnerability exists.

To disable AppArmor for a container, add the following line to the container
configuration file located at `/etc/pve/lxc/CTID.conf`:

----
lxc.apparmor.profile = unconfined
----

WARNING: Please note that this is not recommended for production use.


[[pct_cgroup]]
Control Groups ('cgroup')
~~~~~~~~~~~~~~~~~~~~~~~~~

'cgroup' is a kernel mechanism used to hierarchically organize processes and
distribute system resources.

The main resources controlled via 'cgroups' are CPU time, memory and swap
limits, and access to device nodes. 'cgroups' are also used to "freeze" a
container before taking snapshots.

There are 2 versions of 'cgroups' currently available,
https://www.kernel.org/doc/html/v5.11/admin-guide/cgroup-v1/index.html[legacy]
and
https://www.kernel.org/doc/html/v5.11/admin-guide/cgroup-v2.html['cgroupv2'].

Since {pve} 7.0, the default is a pure 'cgroupv2' environment. Previously a
"hybrid" setup was used, where resource control was mainly done in 'cgroupv1'
with an additional 'cgroupv2' controller which could take over some subsystems
via the 'cgroup_no_v1' kernel command line parameter. (See the
https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html[kernel
parameter documentation] for details.)

[[pct_cgroup_compat]]
CGroup Version Compatibility
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The main difference between pure 'cgroupv2' and the old hybrid environments
regarding {pve} is that with 'cgroupv2' memory and swap are now controlled
independently. The memory and swap settings for containers can map directly to
these values, whereas previously only the memory limit and the limit of the
*sum* of memory and swap could be limited.

Another important difference is that the 'devices' controller is configured in
a completely different way. Because of this, file system quotas are currently
not supported in a pure 'cgroupv2' environment.

'cgroupv2' support by the container's OS is needed to run in a pure 'cgroupv2'
environment. Containers running 'systemd' version 231 or newer support
'cgroupv2' footnote:[this includes all newest major versions of container
templates shipped by {pve}], as do containers not using 'systemd' as init
system footnote:[for example Alpine Linux].

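Whether a container's 'systemd' is new enough can be checked by parsing the
first line of `systemctl --version` output. The following sketch uses a sample
string for illustration; on a real system you would capture the output from
inside the container (for example via `pct exec`):

----
# Extract the major version number from `systemctl --version` output and
# compare it against the minimum required for cgroupv2 (231).
sample='systemd 247 (247.3-7+deb11u4)'   # sample output used for illustration
ver=$(echo "$sample" | awk 'NR==1 { print $2 }')
[ "$ver" -ge 231 ] && echo "cgroupv2 capable"
----
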
[NOTE]
====
CentOS 7 and Ubuntu 16.10 are two prominent Linux distribution releases that
ship a 'systemd' version too old to run in a 'cgroupv2' environment. You can
either:

* Upgrade the whole distribution to a newer release. For the examples above,
that could be Ubuntu 18.04 or 20.04, or CentOS 8 (or RHEL/CentOS derivatives
like AlmaLinux or Rocky Linux). This has the benefit of getting the newest bug
and security fixes, often new features as well, and of moving the EOL date
into the future.

* Upgrade the container's 'systemd' version. If the distribution provides a
backports repository, this can be an easy and quick stop-gap measure.

* Move the container, or its services, to a virtual machine. Virtual machines
interact much less with the host, which is why decades-old OS versions can be
installed there just fine.

* Switch back to the legacy 'cgroup' controller. Note that while this can be a
valid solution, it is not a permanent one. There is a high likelihood that a
future {pve} major release, for example 8.0, will no longer support the
legacy controller.
====

[[pct_cgroup_change_version]]
Changing CGroup Version
^^^^^^^^^^^^^^^^^^^^^^^

TIP: If file system quotas are not required and all containers support
'cgroupv2', it is recommended to stick to the new default.

To switch back to the previous version, the following kernel command line
parameter can be used:

----
systemd.unified_cgroup_hierarchy=0
----

See xref:sysboot_edit_kernel_cmdline[this section] on editing the kernel boot
command line for where to add the parameter.

// TODO: seccomp a bit more.
// TODO: pve-lxc-syscalld


Guest Operating System Configuration
------------------------------------

{pve} tries to detect the Linux distribution in the container, and modifies
some files. Here is a short list of things done at container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

Changes made by {PVE} are enclosed by comment markers:

----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the file. If such a
section already exists, it will be updated in place and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.` file for it.
For instance, if the file `/etc/.pve-ignore.hosts` exists then the `/etc/hosts`
file will not be touched. This can be a simple empty file created via:

----
# touch /etc/.pve-ignore.hosts
----

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications by
manually setting the `ostype` to `unmanaged`.

OS type detection is done by testing for certain files inside the
container. {pve} first checks the `/etc/os-release` file
footnote:[/etc/os-release replaces the multitude of per-distribution
release files https://manpages.debian.org/stable/systemd/os-release.5.en.html].
If that file is not present, or it does not contain a clearly recognizable
distribution identifier, the following distribution-specific release files are
checked.

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the auto
detected type.

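The first detection step can be sketched in a few lines of shell. This is an
illustration of the lookup order only, not the actual {pve} implementation; the
helper name `detect_ostype` is made up:

----
# /etc/os-release is a list of KEY=value shell assignments; source it and
# print the ID field, falling back to "unknown".
detect_ostype() {
    if [ -r "$1" ]; then
        . "$1" && echo "${ID:-unknown}"
    else
        echo "unknown"   # would continue with per-distribution release files
    fi
}

printf 'ID=debian\nVERSION_ID="11"\n' > /tmp/os-release.sample
detect_ostype /tmp/os-release.sample   # prints: debian
----
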

[[pct_container_storage]]
Container Storage
-----------------

The {pve} LXC container storage model is more flexible than traditional
container storage models. A container can have multiple mount points. This
makes it possible to use the best suited storage for each application.

For example, the root file system of the container can be on slow and cheap
storage while the database can be on fast and distributed storage via a second
mount point. See section <<pct_mount_points, Mount Points>> for further
details.

Any storage type supported by the {pve} storage library can be used. This means
that containers can be stored on local (for example `lvm`, `zfs` or directory),
shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
Ceph. Advanced storage features like snapshots or clones can be used if the
underlying storage supports them. The `vzdump` backup tool can use snapshots to
provide consistent container backups.

Furthermore, local devices or local directories can be mounted directly using
'bind mounts'. This gives access to local resources inside a container with
practically zero overhead. Bind mounts can be used as an easy way to share data
between containers.


FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer subsystem,
the usage of FUSE mounts inside a container is strongly advised against, as
containers need to be frozen for suspend or snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.


Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk space
that each user can use.

NOTE: This currently requires the use of legacy 'cgroups'.

NOTE: This only works on ext4 image based storage types and currently only
works with privileged containers.

Activating the `quota` option causes the following mount options to be used for
a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like on any other system. You can initialize the
`/aquota.user` and `/aquota.group` files by running:

----
# quotacheck -cmug /
# quotaon /
----

Then edit the quotas using the `edquota` command. Refer to the documentation of
the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing the
mount point's path instead of just `/`.


Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.


Backup of Container mount points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To include a mount point in backups, enable the `backup` option for it in the
container configuration. For an existing mount point `mp0`

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
----

add `backup=1` to enable it:

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----

NOTE: When creating a new mount point in the GUI, this option is enabled by
default.

To disable backups for a mount point, add `backup=0` in the way described
above, or uncheck the *Backup* checkbox on the GUI.

Replication of Container mount points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, additional mount points are replicated when the Root Disk is
replicated. If you want the {pve} storage replication mechanism to skip a mount
point, you can set the *Skip replication* option for that mount point.
As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
mount point to a different type of storage when the container has replication
configured requires *Skip replication* to be enabled for that mount point.


Backup and Restore
------------------


Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please refer to
the `vzdump` manual page for details.


Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the `pct
restore` command. By default, `pct restore` will attempt to restore as much of
the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the
command line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:


``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters are
explicitly set, the mount point configuration from the backed up configuration
file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points on the storage provided with
the `storage` parameter (default: `local`).
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root
user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.


``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only uses
the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore
time, for example:

* Set target storages, volume sizes and other options for each mount point
individually
* Redistribute backed up files according to new mount point scheme
* Restore to device and/or bind mount points (limited to root user)

Managing Containers with `pct`
------------------------------

The ``Proxmox Container Toolkit'' (`pct`) is the command line tool to manage
{pve} containers. It enables you to create or destroy containers, as well as
control the container execution (start, stop, reboot, migrate, etc.). It can be
used to set parameters in the config file of a container, for example the
network configuration or memory limits.


CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have already
downloaded the template via the web interface)

----
# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
----

Start container 100

----
# pct start 100
----

Start a login session via getty

----
# pct console 100
----

Enter the LXC namespace and run a shell as root user

----
# pct enter 100
----

Display the configuration

----
# pct config 100
----

Add a network interface called `eth0`, bridged to the host bridge `vmbr0`, set
the address and gateway, while it's running

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
----

Reduce the memory of the container to 512MB

----
# pct set 100 -memory 512
----

Destroying a container always removes it from Access Control Lists and it
always removes the firewall configuration of the container. You have to
activate `--purge` if you want to additionally remove the container from
replication jobs, backup jobs and HA resource configurations.

----
# pct destroy 100 --purge
----

Move a mount point volume to a different storage.

----
# pct move-volume 100 mp0 other-storage
----

Reassign a volume to a different CT. This will remove the volume `mp0` from
the source CT and attach it as `mp1` to the target CT. In the background, the
volume is renamed so that its name matches the new owner.

----
# pct move-volume 100 mp0 --target-vmid 200 --target-volume mp1
----



Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by passing the `--debug` flag (replace
`CTID` with the container's CTID):

----
# pct start CTID --debug
----

Alternatively, you can use the following `lxc-start` command, which will save
the debug log to the file specified by the `-o` output option:

----
# lxc-start -n CTID -F -l DEBUG -o /tmp/lxc-CTID.log
----

This command will attempt to start the container in foreground mode; to stop
the container, run `pct shutdown CTID` or `pct stop CTID` in a second terminal.

The collected debug log is written to `/tmp/lxc-CTID.log`.

NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.

[[pct_migration]]
Migration
---------

If you have a cluster, you can migrate your Containers with

----
# pct migrate <ctid> <target>
----

This works as long as your Container is offline. If it has local volumes or
mount points defined, the migration will copy the content over the network to
the target host if the same storage is defined there.

Running containers cannot be live-migrated due to technical limitations. You
can do a restart migration, which shuts down, moves and then starts a container
again on the target node. As containers are very lightweight, this normally
results in a downtime of only a few hundred milliseconds.

A restart migration can be done through the web interface or by using the
`--restart` flag with the `pct migrate` command.

A restart migration will shut down the Container and kill it after the
specified timeout (the default is 180 seconds). Then it will migrate the
Container like an offline migration and, when finished, it starts the Container
on the target node.

[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration, where
`<CTID>` is the numeric ID of the given container. Like all other files stored
inside `/etc/pve/`, they get automatically replicated to all other cluster
nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

The configuration files are simple text files. You can edit them using a normal
text editor, for example, `vi` or `nano`.
This is sometimes useful for making small corrections, but keep in mind that
you need to restart the container to apply such changes.

For that reason, it is usually better to use the `pct` command to generate and
modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to running
containers. This feature is called ``hot plug'', and there is no need to restart
the container in that case.

In cases where a change cannot be hot-plugged, it will be registered as a
pending change (shown in red color in the GUI). Such changes will only be
applied after rebooting the container.


File Format
~~~~~~~~~~~

The container configuration file uses a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#` character
are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for example:

----
lxc.init_cmd: /sbin/my_own_init
----

or

----
lxc.init_cmd = /sbin/my_own_init
----

The settings are passed directly to the LXC low-level tools.
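
As a sketch, such a file can be read with a few lines of shell. This is for
illustration only; `pct_conf_get` is a made-up helper, not part of {pve}:

----
# Print the value of one option from a pct-style config file, skipping
# blank lines and '#' comments.
pct_conf_get() {
    awk -v key="$2" '
        /^[[:space:]]*$/ { next }   # skip blank lines
        /^#/ { next }               # skip comments
        $1 == key ":" { sub(/^[^:]*:[[:space:]]*/, ""); print; exit }
    ' "$1"
}

printf '# comment\nostype: debian\nmemory: 512\n' > /tmp/ct.conf.sample
pct_conf_get /tmp/ct.conf.sample memory   # prints: 512
----
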


[[pct_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot time
into a separate snapshot section within the same configuration file. For
example, after creating a snapshot called ``testsnapshot'', your configuration
file will look like this:

.Container configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and `snaptime`. The
`parent` property is used to store the parent/child relationship between
snapshots. `snaptime` is the snapshot creation time stamp (Unix epoch).
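
Since `snaptime` is a plain Unix timestamp, it can be turned into a readable
date with standard tools, for example (GNU `date` syntax):

----
date -u -d @1457170803   # prints the snapshot creation time in UTC
----
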


[[pct_options]]
Options
~~~~~~~

include::pct.conf.5-opts.adoc[]


Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to prevent
incompatible concurrent actions on the affected container. Sometimes you need
to remove such a lock manually (e.g., after a power failure).

----
# pct unlock <CTID>
----

CAUTION: Only do this if you are sure the action which set the lock is no
longer running.


ifdef::manvolnum[]

Files
------

`/etc/pve/lxc/<CTID>.conf`::

Configuration file for the container '<CTID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]