[[chapter_pct]]
ifdef::manvolnum[]
pct(1)
======
:pve-toplevel:

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE


SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Linux Container
endif::wiki[]

Containers are a lightweight alternative to fully virtualized machines (VMs).
They use the kernel of the host system that they run on, instead of emulating a
full operating system (OS). This means that containers can access resources on
the host system directly.

The runtime cost for containers is low, usually negligible. However, there are
some drawbacks that need to be considered:

* Only Linux distributions can be run in Proxmox Containers. It is not possible to run
other operating systems like, for example, FreeBSD or Microsoft Windows
inside a container.

* For security reasons, access to host resources needs to be restricted.
Therefore, containers run in their own separate namespaces. Additionally, some
syscalls (user space requests to the Linux kernel) are not allowed within containers.

{pve} uses https://linuxcontainers.org/lxc/introduction/[Linux Containers (LXC)] as its underlying
container technology. The ``Proxmox Container Toolkit'' (`pct`) simplifies the
usage and management of LXC, by providing an interface that abstracts
complex tasks.

Containers are tightly integrated with {pve}. This means that they are aware of
the cluster setup, and they can use the same network and storage resources as
virtual machines. You can also use the {pve} firewall, or manage containers
using the HA framework.

Our primary goal is to offer an environment that provides the benefits of using a
VM, but without the additional overhead. This means that Proxmox Containers can
be categorized as ``System Containers'', rather than ``Application Containers''.

NOTE: If you want to run application containers, for example, 'Docker' images, it
is recommended that you run them inside a Proxmox QEMU VM. This will give you
all the advantages of application containerization, while also providing the
benefits that VMs offer, such as strong isolation from the host and the ability
to live-migrate, which otherwise isn't possible with containers.


Technology Overview
-------------------

* LXC (https://linuxcontainers.org/)

* Integrated into {pve} graphical web user interface (GUI)

* Easy to use command line tool `pct`

* Access via {pve} REST API

* 'lxcfs' to provide containerized /proc file system

* Control groups ('cgroups') for resource isolation and limitation

* 'AppArmor' and 'seccomp' to improve security

* Modern Linux kernels

* Image based deployment (templates)

* Uses {pve} xref:chapter_storage[storage library]

* Container setup from host (network, DNS, storage, etc.)


[[pct_container_images]]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything to run a container.

{pve} itself provides a variety of basic templates for the most common Linux
distributions. They can be downloaded using the GUI or the `pveam` (short for
{pve} Appliance Manager) command line utility.
Additionally, https://www.turnkeylinux.org/[TurnKey Linux] container templates
are also available to download.

The list of available templates is updated daily through the 'pve-daily-update'
timer. You can also trigger an update manually by executing:

----
# pveam update
----

To view the list of available images run:

----
# pveam available
----

You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system          alpine-3.12-default_20200823_amd64.tar.xz
system          alpine-3.13-default_20210419_amd64.tar.xz
system          alpine-3.14-default_20210623_amd64.tar.xz
system          archlinux-base_20210420-1_amd64.tar.gz
system          centos-7-default_20190926_amd64.tar.xz
system          centos-8-default_20201210_amd64.tar.xz
system          debian-9.0-standard_9.7-1_amd64.tar.gz
system          debian-10-standard_10.7-1_amd64.tar.gz
system          devuan-3.0-standard_3.0_amd64.tar.gz
system          fedora-33-default_20201115_amd64.tar.xz
system          fedora-34-default_20210427_amd64.tar.xz
system          gentoo-current-default_20200310_amd64.tar.xz
system          opensuse-15.2-default_20200824_amd64.tar.xz
system          ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
system          ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
system          ubuntu-20.04-standard_20.04-1_amd64.tar.gz
system          ubuntu-20.10-standard_20.10-1_amd64.tar.gz
system          ubuntu-21.04-standard_21.04-1_amd64.tar.gz
----

Before you can use such a template, you need to download it into one of your
storages. If you're unsure which one to use, you can simply use the `local`
named storage for that purpose. For clustered installations, it is preferred
to use a shared storage so that all nodes can access those images.

----
# pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
----

You are now ready to create containers using that image, and you can list all
downloaded images on storage `local` with:

----
# pveam list local
local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz  219.95MB
----

TIP: You can also use the {pve} web interface GUI to download, list and delete
container templates.

`pct` uses them to create a new container, for example:

----
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----

The above command shows you the full {pve} volume identifiers. They include the
storage name, and most other {pve} commands can use them. For example, you can
delete that image later with:

----
# pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
----


[[pct_settings]]
Container Settings
------------------

[[pct_general]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-general.png"]

General settings of a container include

* the *Node*: the physical server on which the container will run
* the *CT ID*: a unique number in this {pve} installation used to identify your
container
* *Hostname*: the hostname of the container
* *Resource Pool*: a logical group of containers and VMs
* *Password*: the root password of the container
* *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
whether you want to create a privileged or unprivileged container.

Unprivileged Containers
^^^^^^^^^^^^^^^^^^^^^^^

Unprivileged containers use a new kernel feature called user namespaces.
The root UID 0 inside the container is mapped to an unprivileged user outside
the container. This means that most security issues (container escape, resource
abuse, etc.) in these containers will affect a random unprivileged user, and
would be a generic kernel security bug rather than an LXC issue. The LXC team
thinks unprivileged containers are safe by design.

This is the default option when creating a new container.

NOTE: If the container uses systemd as an init system, please be aware the
systemd version running inside the container should be equal to or greater than
220.


Privileged Containers
^^^^^^^^^^^^^^^^^^^^^

Security in containers is achieved by using mandatory access control
('AppArmor') restrictions, 'seccomp' filters and Linux kernel namespaces. The
LXC team considers this kind of container unsafe, and they will not consider
new container escape exploits to be security issues worthy of a CVE and quick
fix. That's why privileged containers should only be used in trusted
environments.


[[pct_cpu]]
CPU
~~~

[thumbnail="screenshot/gui-create-ct-cpu.png"]

You can restrict the number of visible CPUs inside the container using the
`cores` option. This is implemented using the Linux 'cpuset' cgroup
(**c**ontrol *group*).
A special task inside `pvestatd` tries to distribute running containers among
available CPUs periodically.
To view the assigned CPUs run the following command:

----
# pct cpusets
 ---------------------
 102:              6 7
 105:      2 3 4 5
 108:  0 1
 ---------------------
----

Containers use the host kernel directly. All tasks inside a container are
handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
**F**air **S**cheduler) scheduler by default, which has additional bandwidth
control options.

[horizontal]

`cpulimit`: :: You can use this option to further limit assigned CPU time.
Please note that this is a floating point number, so it is perfectly valid to
assign two cores to a container, but restrict overall CPU consumption to half a
core.
+
----
cores: 2
cpulimit: 0.5
----

`cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
larger the number is, the more CPU time this container gets. The number is
relative to the weights of all the other running containers. The default is
1024. You can use this setting to prioritize some containers.
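+
For example, a minimal sketch that gives container 100 twice the default
scheduling weight relative to other guests:
+
----
# pct set 100 -cpuunits 2048
----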


[[pct_memory]]
Memory
~~~~~~

[thumbnail="screenshot/gui-create-ct-memory.png"]

Container memory is controlled using the cgroup memory controller.

[horizontal]

`memory`: :: Limit overall memory usage. This corresponds to the
`memory.limit_in_bytes` cgroup setting.

`swap`: :: Allows the container to use additional swap memory from the host
swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
setting, which is set to the sum of both values (`memory + swap`).
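+
As a sketch, the following would cap container 100 at 1024 MB of RAM plus
512 MB of swap (both values are given in megabytes):
+
----
# pct set 100 -memory 1024 -swap 512
----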

[[pct_mount_points]]
Mount Points
~~~~~~~~~~~~

[thumbnail="screenshot/gui-create-ct-root-disk.png"]

The root mount point is configured with the `rootfs` property. You can
configure up to 256 additional mount points. The corresponding options are
called `mp0` to `mp255`. They can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are three types of mount points: storage backed mount points,
bind mounts, and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----


Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
  system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
  and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
  image a directory is created.

NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
mount point volumes will automatically allocate a volume of the specified size
on the specified storage. For example, calling

----
pct set 100 -mp0 thin1:10,mp=/path/in/container
----

will allocate a 10GB volume on the storage `thin1`, replace the volume ID
placeholder `10` with the allocated volume ID, and set up the mount point in
the container at `/path/in/container`.


Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are not managed by the storage subsystem, so you cannot make
snapshots or deal with quotas from inside the container. With unprivileged
containers you might run into permission problems caused by the user mapping,
and you cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established using
source directories especially reserved for this purpose, e.g., a directory
hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
`/`, `/var` or `/etc` into a container - this poses a great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.


Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow you to mount block devices of the host directly into
the container. Similar to bind mounts, device mounts are not managed by
{PVE}'s storage subsystem, but the `quota` and `acl` options will be honored.

NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using
`vzdump`.
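
As an illustrative sketch, assuming the host block device `/dev/sdb1` exists
and carries a file system, it could be mounted into container 100 with:

----
# pct set 100 -mp0 /dev/sdb1,mp=/mnt/device
----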


[[pct_container_network]]
Network
~~~~~~~

[thumbnail="screenshot/gui-create-ct-network.png"]

You can configure up to 10 network interfaces for a single container.
The corresponding options are called `net0` to `net9`, and they can contain the
following settings:

include::pct-network-opts.adoc[]
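
As a sketch, a typical interface definition in the container configuration
could look like the following (names and values are illustrative):

----
net0: name=eth0,bridge=vmbr0,ip=dhcp,firewall=1,type=veth
----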


[[pct_startup_and_shutdown]]
Automatic Start and Shutdown of Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To automatically start a container when the host system boots, select the
option 'Start at boot' in the 'Options' panel of the container in the web
interface or run the following command:

----
# pct set CTID -onboot 1
----

.Start and Shutdown Order
// use the screenshot from qemu - its the same
[thumbnail="screenshot/gui-qemu-edit-start-order.png"]

If you want to fine tune the boot order of your containers, you can use the
following parameters (see the example after this list):

* *Start/Shutdown order*: Defines the start order priority. For example, set it
to 1 if you want the CT to be the first to be started. (We use the reverse
startup order for shutdown, so a container with a start order of 1 would be
the last to be shut down.)
* *Startup delay*: Defines the interval between this container start and
subsequent container starts. For example, set it to 240 if you want to wait
240 seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the container to be offline after issuing a shutdown command.
By default this value is set to 60, which means that {pve} will issue a
shutdown request, wait 60s for the machine to be offline, and if after 60s
the machine is still online it will notify that the shutdown action failed.

Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set, and this parameter only
makes sense between machines running locally on a host, and not
cluster-wide.
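
For instance, a minimal sketch that sets all three parameters for container 100
via the `startup` property (start first, wait 30 seconds before the next guest
starts, allow 60 seconds for shutdown):

----
# pct set 100 -startup order=1,up=30,down=60
----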
435 | ||
c2c8eb89 DC |
436 | Hookscripts |
437 | ~~~~~~~~~~~ | |
438 | ||
439 | You can add a hook script to CTs with the config property `hookscript`. | |
440 | ||
14e97811 OB |
441 | ---- |
442 | # pct set 100 -hookscript local:snippets/hookscript.pl | |
443 | ---- | |
c2c8eb89 | 444 | |
69ab602f TL |
445 | It will be called during various phases of the guests lifetime. For an example |
446 | and documentation see the example script under | |
c2c8eb89 | 447 | `/usr/share/pve-docs/examples/guest-example-hookscript.pl`. |
139a9019 | 448 | |
bf7f598a TL |
449 | Security Considerations |
450 | ----------------------- | |
451 | ||
452 | Containers use the kernel of the host system. This exposes an attack surface | |
453 | for malicious users. In general, full virtual machines provide better | |
656d8b21 | 454 | isolation. This should be considered if containers are provided to unknown or |
bf7f598a TL |
455 | untrusted people. |
456 | ||
457 | To reduce the attack surface, LXC uses many security features like AppArmor, | |
458 | CGroups and kernel namespaces. | |
459 | ||
c02ac25b TL |
460 | AppArmor |
461 | ~~~~~~~~ | |
462 | ||
bf7f598a TL |
463 | AppArmor profiles are used to restrict access to possibly dangerous actions. |
464 | Some system calls, i.e. `mount`, are prohibited from execution. | |
465 | ||
466 | To trace AppArmor activity, use: | |
467 | ||
468 | ---- | |
469 | # dmesg | grep apparmor | |
470 | ---- | |
471 | ||
c02ac25b TL |
472 | Although it is not recommended, AppArmor can be disabled for a container. This |
473 | brings security risks with it. Some syscalls can lead to privilege escalation | |
474 | when executed within a container if the system is misconfigured or if a LXC or | |
475 | Linux Kernel vulnerability exists. | |
476 | ||
477 | To disable AppArmor for a container, add the following line to the container | |
478 | configuration file located at `/etc/pve/lxc/CTID.conf`: | |
479 | ||
480 | ---- | |
76aaaeab | 481 | lxc.apparmor.profile = unconfined |
c02ac25b TL |
482 | ---- |
483 | ||
484 | WARNING: Please note that this is not recommended for production use. | |
485 | ||
486 | ||
[[pct_cgroup]]
Control Groups ('cgroup')
~~~~~~~~~~~~~~~~~~~~~~~~~

'cgroup' is a kernel
mechanism used to hierarchically organize processes and distribute system
resources.

The main resources controlled via 'cgroups' are CPU time, memory and swap
limits, and access to device nodes. 'cgroups' are also used to "freeze" a
container before taking snapshots.

There are two versions of 'cgroups' currently available,
https://www.kernel.org/doc/html/v5.11/admin-guide/cgroup-v1/index.html[legacy]
and
https://www.kernel.org/doc/html/v5.11/admin-guide/cgroup-v2.html['cgroupv2'].

Since {pve} 7.0, the default is a pure 'cgroupv2' environment. Previously a
"hybrid" setup was used, where resource control was mainly done in 'cgroupv1'
with an additional 'cgroupv2' controller which could take over some subsystems
via the 'cgroup_no_v1' kernel command line parameter. (See the
https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html[kernel
parameter documentation] for details.)

[[pct_cgroup_compat]]
CGroup Version Compatibility
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The main difference between pure 'cgroupv2' and the old hybrid environments
regarding {pve} is that with 'cgroupv2' memory and swap are now controlled
independently. The memory and swap settings for containers can map directly to
these values, whereas previously only the memory limit and the limit of the
*sum* of memory and swap could be limited.

Another important difference is that the 'devices' controller is configured in a
completely different way. Because of this, file system quotas are currently not
supported in a pure 'cgroupv2' environment.

'cgroupv2' support by the container's OS is needed to run in a pure 'cgroupv2'
environment. Containers running 'systemd' version 231 or newer support
'cgroupv2' footnote:[this includes all newest major versions of container
templates shipped by {pve}], as do containers not using 'systemd' as init
system footnote:[for example Alpine Linux].

[NOTE]
====
CentOS 7 and Ubuntu 16.10 are two prominent Linux distribution releases which
have a 'systemd' version that is too old to run in a 'cgroupv2' environment.
You can either:

* Upgrade the whole distribution to a newer release. For the examples above,
that could be Ubuntu 18.04 or 20.04, and CentOS 8 (or RHEL/CentOS derivatives
like AlmaLinux or Rocky Linux). This has the benefit of getting the newest bug
and security fixes, often also new features, and of moving the EOL date into
the future.

* Upgrade the container's systemd version. If the distribution provides a
backports repository, this can be an easy and quick stop-gap measure.

* Move the container, or its services, to a virtual machine. Virtual machines
have much less interaction with the host, which is why one can run even
decades-old OS versions there just fine.

* Switch back to the legacy 'cgroup' controller. Note that while it can be a
valid solution, it's not a permanent one. There's a high likelihood that a
future {pve} major release, for example 8.0, cannot support the legacy
controller anymore.
====

[[pct_cgroup_change_version]]
Changing CGroup Version
^^^^^^^^^^^^^^^^^^^^^^^

TIP: If file system quotas are not required and all containers support
'cgroupv2', it is recommended to stick to the new default.

To switch back to the previous version the following kernel command line
parameter can be used:

----
systemd.unified_cgroup_hierarchy=0
----

See xref:sysboot_edit_kernel_cmdline[this section] on editing the kernel boot
command line for information on where to add the parameter.

// TODO: seccomp a bit more.
// TODO: pve-lxc-syscalld


Guest Operating System Configuration
------------------------------------

{pve} tries to detect the Linux distribution in the container, and modifies
some files. Here is a short list of things done at container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

Changes made by {PVE} are enclosed by comment markers:

----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the file. If such a
section already exists, it will be updated in place and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.` file for it.
For instance, if the file `/etc/.pve-ignore.hosts` exists then the `/etc/hosts`
file will not be touched. This can be a simple empty file created via:

----
# touch /etc/.pve-ignore.hosts
----

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications by
manually setting the `ostype` to `unmanaged`.

OS type detection is done by testing for certain files inside the
container. {pve} first checks the `/etc/os-release` file
footnote:[/etc/os-release replaces the multitude of per-distribution
release files https://manpages.debian.org/stable/systemd/os-release.5.en.html].
If that file is not present, or if it does not contain a clearly recognizable
distribution identifier, the following distribution specific release files are
checked:

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the auto
detected type.


[[pct_container_storage]]
Container Storage
-----------------

The {pve} LXC container storage model is more flexible than traditional
container storage models. A container can have multiple mount points. This
makes it possible to use the best suited storage for each application.

For example, the root file system of the container can be on slow and cheap
storage while the database can be on fast and distributed storage via a second
mount point. See section <<pct_mount_points, Mount Points>> for further
details.

Any storage type supported by the {pve} storage library can be used. This means
that containers can be stored on local (for example `lvm`, `zfs` or directory),
shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
Ceph. Advanced storage features like snapshots or clones can be used if the
underlying storage supports them. The `vzdump` backup tool can use snapshots to
provide consistent container backups.

Furthermore, local devices or local directories can be mounted directly using
'bind mounts'. This gives access to local resources inside a container with
practically zero overhead. Bind mounts can be used as an easy way to share data
between containers.


FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer subsystem, the
usage of FUSE mounts inside a container is strongly advised against, as
containers need to be frozen for suspend or snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.


Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk space
that each user can use.

NOTE: This currently requires the use of legacy 'cgroups'.

NOTE: This only works on ext4 image based storage types and currently only
works with privileged containers.

Activating the `quota` option causes the following mount options to be used for
a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like on any other system. You can initialize the
`/aquota.user` and `/aquota.group` files by running:

----
# quotacheck -cmug /
# quotaon /
----

Then edit the quotas using the `edquota` command. Refer to the documentation of
the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing the
mount point's path instead of just `/`.
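
For example, to edit the quota of a hypothetical user `alice` inside the
container:

----
# edquota -u alice
----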


Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard Posix **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.


Backup of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To include a mount point in backups, enable the `backup` option for it in the
container configuration. For an existing mount point `mp0`

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
----

add `backup=1` to enable it:

----
mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----

NOTE: When creating a new mount point in the GUI, this option is enabled by
default.

To disable backups for a mount point, add `backup=0` in the way described
above, or uncheck the *Backup* checkbox in the GUI.

Replication of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, additional mount points are replicated when the Root Disk is
replicated. If you want the {pve} storage replication mechanism to skip a mount
point, you can set the *Skip replication* option for that mount point.
As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
mount point to a different type of storage when the container has replication
configured requires *Skip replication* to be enabled for that mount point.


Backup and Restore
------------------


Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please refer to
the `vzdump` manual page for details.
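
As a brief sketch, a snapshot mode backup of container 100 to the storage
`local` could look like this:

----
# vzdump 100 --mode snapshot --storage local
----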


Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the `pct
restore` command. By default, `pct restore` will attempt to restore as much of
the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the
command line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:


``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters are
explicitly set, the mount point configuration from the backed up configuration
file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on the storage provided with
the `storage` parameter, or the default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root
user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.


``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only uses
the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore
time, for example (see the sketch after this list):

* Set target storages, volume sizes and other options for each mount point
individually
* Redistribute backed up files according to new mount point scheme
* Restore to device and/or bind mount points (limited to root user)
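
For illustration, a sketch of an advanced mode restore (the archive name and
the storage IDs are hypothetical placeholders):

----
# pct restore 200 local:backup/vzdump-lxc-100.tar.gz \
    -rootfs local-lvm:8 -mp0 thin1:4,mp=/var/lib/mysql
----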


Managing Containers with `pct`
------------------------------

The ``Proxmox Container Toolkit'' (`pct`) is the command line tool to manage
{pve} containers. It enables you to create or destroy containers, as well as
control the container execution (start, stop, reboot, migrate, etc.). It can be
used to set parameters in the config file of a container, for example the
network configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have already
downloaded the template via the web interface)

----
# pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
----

Start container 100

----
# pct start 100
----

Start a login session via getty

----
# pct console 100
----

Enter the LXC namespace and run a shell as root user

----
# pct enter 100
----

Display the configuration

----
# pct config 100
----

Add a network interface called `eth0`, bridged to the host bridge `vmbr0`, and
set the address and gateway, while it's running

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
----

Reduce the memory of the container to 512MB

----
# pct set 100 -memory 512
----

Destroying a container always removes it from Access Control Lists and it always
removes the firewall configuration of the container. You have to activate
'--purge' if you want to additionally remove the container from replication jobs,
backup jobs and HA resource configurations.

----
# pct destroy 100 --purge
----



Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by passing the `--debug` flag (replace
`CTID` with the container's CTID):

----
# pct start CTID --debug
----

Alternatively, you can use the following `lxc-start` command, which will save
the debug log to the file specified by the `-o` output option:

----
# lxc-start -n CTID -F -l DEBUG -o /tmp/lxc-CTID.log
----

This command will attempt to start the container in foreground mode. To stop
the container, run `pct shutdown CTID` or `pct stop CTID` in a second terminal.

The collected debug log is written to `/tmp/lxc-CTID.log`.

NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.

[[pct_migration]]
Migration
---------

If you have a cluster, you can migrate your containers with

----
# pct migrate <ctid> <target>
----

This works as long as your container is offline. If it has local volumes or
mount points defined, the migration will copy the content over the network to
the target host if the same storage is defined there.

Running containers cannot be live-migrated due to technical limitations. You
can do a restart migration, which shuts down, moves and then starts a container
again on the target node. As containers are very lightweight, this normally
results in a downtime of only a few hundred milliseconds.

A restart migration can be done through the web interface or by using the
`--restart` flag with the `pct migrate` command.
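
For example, a sketch migrating container 100 to a hypothetical node `node2`
with a 120 second shutdown timeout:

----
# pct migrate 100 node2 --restart --timeout 120
----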

A restart migration will shut down the container and kill it after the
specified timeout (the default is 180 seconds). Then it will migrate the
container like an offline migration, and when finished, it starts the container
on the target node.

[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration, where
`<CTID>` is the numeric ID of the given container. Like all other files stored
inside `/etc/pve/`, it gets automatically replicated to all other cluster
nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

The configuration files are simple text files. You can edit them using a normal
text editor, for example, `vi` or `nano`.
This is sometimes useful for making small corrections, but keep in mind that
you need to restart the container to apply such changes.

For that reason, it is usually better to use the `pct` command to generate and
modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to running
containers. This feature is called ``hot plug'', and there is no need to restart
the container in that case.

In cases where a change cannot be hot-plugged, it will be registered as a
pending change (shown in red color in the GUI).
It will only be applied after rebooting the container.


File Format
~~~~~~~~~~~

The container configuration file uses a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#` character
are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for example:

----
lxc.init_cmd: /sbin/my_own_init
----

or

----
lxc.init_cmd = /sbin/my_own_init
----

The settings are passed directly to the LXC low-level tools.


[[pct_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot time
into a separate snapshot section within the same configuration file. For
example, after creating a snapshot called ``testsnapshot'', your configuration
file will look like this:

.Container configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and `snaptime`. The
`parent` property is used to store the parent/child relationship between
snapshots. `snaptime` is the snapshot creation time stamp (Unix epoch).
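
As a short sketch, snapshots can be created, rolled back to, and deleted with
the corresponding `pct` subcommands:

----
# pct snapshot 100 testsnapshot
# pct rollback 100 testsnapshot
# pct delsnapshot 100 testsnapshot
----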


[[pct_options]]
Options
~~~~~~~

include::pct.conf.5-opts.adoc[]


Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to prevent
incompatible concurrent actions on the affected container. Sometimes you need
to remove such a lock manually (e.g., after a power failure).

----
# pct unlock <CTID>
----

CAUTION: Only do this if you are sure the action which set the lock is no
longer running.


ifdef::manvolnum[]

Files
------

`/etc/pve/lxc/<CTID>.conf`::

Configuration file for the container '<CTID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]