1 [[chapter_pct]]
2 ifdef::manvolnum[]
3 pct(1)
4 ======
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pct - Tool to manage Linux Containers (LXC) on Proxmox VE
11
12
13 SYNOPSIS
14 --------
15
16 include::pct.1-synopsis.adoc[]
17
18 DESCRIPTION
19 -----------
20 endif::manvolnum[]
21
22 ifndef::manvolnum[]
23 Proxmox Container Toolkit
24 =========================
25 :pve-toplevel:
26 endif::manvolnum[]
27 ifdef::wiki[]
28 :title: Linux Container
29 endif::wiki[]
30
31 Containers are a lightweight alternative to fully virtualized machines (VMs).
32 They use the kernel of the host system that they run on, instead of emulating a
33 full operating system (OS). This means that containers can access resources on
34 the host system directly.
35
The runtime cost for containers is low, usually negligible. However, there are
some drawbacks that need to be considered:
38
* Only Linux distributions can be run in containers. It is not possible to run
  other operating systems like, for example, FreeBSD or Microsoft Windows
  inside a container.
42
43 * For security reasons, access to host resources needs to be restricted.
44 Containers run in their own separate namespaces. Additionally some syscalls
45 are not allowed within containers.
46
47 {pve} uses https://linuxcontainers.org/[Linux Containers (LXC)] as underlying
48 container technology. The ``Proxmox Container Toolkit'' (`pct`) simplifies the
49 usage and management of LXC containers.
50
51 Containers are tightly integrated with {pve}. This means that they are aware of
52 the cluster setup, and they can use the same network and storage resources as
53 virtual machines. You can also use the {pve} firewall, or manage containers
54 using the HA framework.
55
56 Our primary goal is to offer an environment as one would get from a VM, but
57 without the additional overhead. We call this ``System Containers''.
58
59 NOTE: If you want to run micro-containers, for example, 'Docker' or 'rkt', it
60 is best to run them inside a VM.
61
62
63 Technology Overview
64 -------------------
65
66 * LXC (https://linuxcontainers.org/)
67
68 * Integrated into {pve} graphical web user interface (GUI)
69
70 * Easy to use command line tool `pct`
71
72 * Access via {pve} REST API
73
74 * 'lxcfs' to provide containerized /proc file system
75
76 * Control groups ('cgroups') for resource isolation and limitation
77
78 * 'AppArmor' and 'seccomp' to improve security
79
80 * Modern Linux kernels
81
82 * Image based deployment (templates)
83
84 * Uses {pve} xref:chapter_storage[storage library]
85
86 * Container setup from host (network, DNS, storage, etc.)
87
88
89 [[pct_container_images]]
90 Container Images
91 ----------------
92
93 Container images, sometimes also referred to as ``templates'' or
94 ``appliances'', are `tar` archives which contain everything to run a container.
95 `pct` uses them to create a new container, for example:
96
97 ----
98 # pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
99 ----
100
101 {pve} itself provides a variety of basic templates for the most common Linux
102 distributions. They can be downloaded using the GUI or the `pveam` (short for
103 {pve} Appliance Manager) command line utility.
104 Additionally, https://www.turnkeylinux.org/[TurnKey Linux] container templates
105 are also available to download.
106
107 The list of available templates is updated daily via cron. To trigger it
108 manually:
109
110 ----
111 # pveam update
112 ----
113
114 To view the list of available images run:
115
116 ----
117 # pveam available
118 ----
119
120 You can restrict this large list by specifying the `section` you are
121 interested in, for example basic `system` images:
122
123 .List available system images
124 ----
125 # pveam available --section system
126 system alpine-3.10-default_20190626_amd64.tar.xz
127 system alpine-3.9-default_20190224_amd64.tar.xz
128 system archlinux-base_20190924-1_amd64.tar.gz
129 system centos-6-default_20191016_amd64.tar.xz
130 system centos-7-default_20190926_amd64.tar.xz
131 system centos-8-default_20191016_amd64.tar.xz
132 system debian-10.0-standard_10.0-1_amd64.tar.gz
133 system debian-8.0-standard_8.11-1_amd64.tar.gz
134 system debian-9.0-standard_9.7-1_amd64.tar.gz
135 system fedora-30-default_20190718_amd64.tar.xz
136 system fedora-31-default_20191029_amd64.tar.xz
137 system gentoo-current-default_20190718_amd64.tar.xz
138 system opensuse-15.0-default_20180907_amd64.tar.xz
139 system opensuse-15.1-default_20190719_amd64.tar.xz
140 system ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
141 system ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
142 system ubuntu-19.04-standard_19.04-1_amd64.tar.gz
143 system ubuntu-19.10-standard_19.10-1_amd64.tar.gz
144 ----
145
Before you can use such a template, you need to download it into one of your
147 storages. You can simply use storage `local` for that purpose. For clustered
148 installations, it is preferred to use a shared storage so that all nodes can
149 access those images.
150
151 ----
152 # pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
153 ----
154
155 You are now ready to create containers using that image, and you can list all
156 downloaded images on storage `local` with:
157
158 ----
159 # pveam list local
160 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz 219.95MB
161 ----
162
163 The above command shows you the full {pve} volume identifiers. They include the
164 storage name, and most other {pve} commands can use them. For example you can
165 delete that image later with:
166
167 ----
168 # pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
169 ----
170
171
172 [[pct_settings]]
173 Container Settings
174 ------------------
175
176 [[pct_general]]
177 General Settings
178 ~~~~~~~~~~~~~~~~
179
180 [thumbnail="screenshot/gui-create-ct-general.png"]
181
182 General settings of a container include
183
184 * the *Node* : the physical server on which the container will run
185 * the *CT ID*: a unique number in this {pve} installation used to identify your
186 container
187 * *Hostname*: the hostname of the container
188 * *Resource Pool*: a logical group of containers and VMs
189 * *Password*: the root password of the container
190 * *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
  whether to create a privileged or unprivileged container.
193
194 Unprivileged Containers
195 ^^^^^^^^^^^^^^^^^^^^^^^
196
197 Unprivileged containers use a new kernel feature called user namespaces.
198 The root UID 0 inside the container is mapped to an unprivileged user outside
199 the container. This means that most security issues (container escape, resource
200 abuse, etc.) in these containers will affect a random unprivileged user, and
201 would be a generic kernel security bug rather than an LXC issue. The LXC team
202 thinks unprivileged containers are safe by design.
203
204 This is the default option when creating a new container.
205
NOTE: If the container uses systemd as an init system, please be aware that the
systemd version running inside the container should be equal to or greater than
208 220.
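
On the command line, this corresponds to the `unprivileged` option of
`pct create`. A minimal sketch, reusing the Debian template downloaded earlier;
the CT ID `200`, the hostname and the storage `local-lvm` are examples to adjust
to your setup:

----
# pct create 200 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz \
    -hostname unpriv-example -unprivileged 1 -rootfs local-lvm:8
----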
209
210
211 Privileged Containers
212 ^^^^^^^^^^^^^^^^^^^^^
213
214 Security in containers is achieved by using mandatory access control 'AppArmor'
215 restrictions, 'seccomp' filters and Linux kernel namespaces. The LXC team
216 considers this kind of container as unsafe, and they will not consider new
217 container escape exploits to be security issues worthy of a CVE and quick fix.
218 That's why privileged containers should only be used in trusted environments.
219
220
221 [[pct_cpu]]
222 CPU
223 ~~~
224
225 [thumbnail="screenshot/gui-create-ct-cpu.png"]
226
227 You can restrict the number of visible CPUs inside the container using the
228 `cores` option. This is implemented using the Linux 'cpuset' cgroup
229 (**c**ontrol *group*).
230 A special task inside `pvestatd` tries to distribute running containers among
231 available CPUs periodically.
232 To view the assigned CPUs run the following command:
233
234 ----
235 # pct cpusets
236 ---------------------
237 102: 6 7
238 105: 2 3 4 5
239 108: 0 1
240 ---------------------
241 ----
242
243 Containers use the host kernel directly. All tasks inside a container are
244 handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
245 **F**air **S**cheduler) scheduler by default, which has additional bandwidth
246 control options.
247
248 [horizontal]
249
250 `cpulimit`: :: You can use this option to further limit assigned CPU time.
251 Please note that this is a floating point number, so it is perfectly valid to
252 assign two cores to a container, but restrict overall CPU consumption to half a
253 core.
254 +
255 ----
256 cores: 2
257 cpulimit: 0.5
258 ----
259
260 `cpuunits`: :: This is a relative weight passed to the kernel scheduler. The
larger the number is, the more CPU time this container gets. The number is
relative to the weights of all other running containers. The default is 1024.
You can use this setting to prioritize some containers.
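
Both options can be changed at any time with `pct set`. As an illustrative
sketch, assuming a container with the ID `100`, the following assigns two
cores, caps overall CPU usage at half a core and doubles the scheduling weight:

----
# pct set 100 -cores 2 -cpulimit 0.5 -cpuunits 2048
----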
264
265
266 [[pct_memory]]
267 Memory
268 ~~~~~~
269
270 [thumbnail="screenshot/gui-create-ct-memory.png"]
271
272 Container memory is controlled using the cgroup memory controller.
273
274 [horizontal]
275
276 `memory`: :: Limit overall memory usage. This corresponds to the
277 `memory.limit_in_bytes` cgroup setting.
278
279 `swap`: :: Allows the container to use additional swap memory from the host
280 swap space. This corresponds to the `memory.memsw.limit_in_bytes` cgroup
setting, which is set to the sum of both values (`memory + swap`).
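
Both limits can be adjusted on an existing container with `pct set`. For
example, to give container `100` 1024 MiB of RAM and 512 MiB of swap (values
are in megabytes):

----
# pct set 100 -memory 1024 -swap 512
----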
282
283
284 [[pct_mount_points]]
285 Mount Points
286 ~~~~~~~~~~~~
287
288 [thumbnail="screenshot/gui-create-ct-root-disk.png"]
289
290 The root mount point is configured with the `rootfs` property. You can
291 configure up to 256 additional mount points. The corresponding options are
292 called `mp0` to `mp255`. They can contain the following settings:
293
294 include::pct-mountpoint-opts.adoc[]
295
296 Currently there are three types of mount points: storage backed mount points,
297 bind mounts, and device mounts.
298
299 .Typical container `rootfs` configuration
300 ----
301 rootfs: thin1:base-100-disk-1,size=8G
302 ----
303
304
305 Storage Backed Mount Points
306 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
307
308 Storage backed mount points are managed by the {pve} storage subsystem and come
309 in three different flavors:
310
311 - Image based: these are raw images containing a single ext4 formatted file
312 system.
313 - ZFS subvolumes: these are technically bind mounts, but with managed storage,
314 and thus allow resizing and snapshotting.
315 - Directories: passing `size=0` triggers a special case where instead of a raw
316 image a directory is created.
317
318 NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
319 mount point volumes will automatically allocate a volume of the specified size
320 on the specified storage. For example, calling
321
322 ----
323 pct set 100 -mp0 thin1:10,mp=/path/in/container
324 ----
325
will allocate a 10GB volume on the storage `thin1`, replace the volume ID
placeholder `10` with the allocated volume ID, and set up the mount point in the
container at `/path/in/container`.
329
330
331 Bind Mount Points
332 ^^^^^^^^^^^^^^^^^
333
334 Bind mounts allow you to access arbitrary directories from your Proxmox VE host
335 inside a container. Some potential use cases are:
336
337 - Accessing your home directory in the guest
- Accessing a USB device directory in the guest
339 - Accessing an NFS mount from the host in the guest
340
Bind mounts are not managed by the storage subsystem, so you cannot make
snapshots or deal with quotas from inside the container. With unprivileged
containers, you might run into permission problems caused by the user mapping,
and you cannot use ACLs.
345
346 NOTE: The contents of bind mount points are not backed up when using `vzdump`.
347
348 WARNING: For security reasons, bind mounts should only be established using
349 source directories especially reserved for this purpose, e.g., a directory
350 hierarchy under `/mnt/bindmounts`. Never bind mount system directories like
351 `/`, `/var` or `/etc` into a container - this poses a great security risk.
352
353 NOTE: The bind mount source path must not contain any symlinks.
354
355 For example, to make the directory `/mnt/bindmounts/shared` accessible in the
356 container with ID `100` under the path `/shared`, use a configuration line like
357 `mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
358 Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
359 achieve the same result.
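
The resulting entries in `/etc/pve/lxc/100.conf` would then look like the
sketch below; the second line is a variation that additionally marks the mount
read-only via the `ro` flag (the directories are examples):

----
mp0: /mnt/bindmounts/shared,mp=/shared
mp1: /mnt/bindmounts/logs,mp=/logs,ro=1
----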
360
361
362 Device Mount Points
363 ^^^^^^^^^^^^^^^^^^^
364
Device mount points allow you to mount block devices of the host directly into the
366 container. Similar to bind mounts, device mounts are not managed by {PVE}'s
367 storage subsystem, but the `quota` and `acl` options will be honored.
368
369 NOTE: Device mount points should only be used under special circumstances. In
370 most cases a storage backed mount point offers the same performance and a lot
371 more features.
372
373 NOTE: The contents of device mount points are not backed up when using
374 `vzdump`.
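
As an illustrative sketch, the following would mount the host block device
`/dev/sdb1` (an example device that must not be in use elsewhere) at
`/mnt/device` inside container `100`:

----
# pct set 100 -mp0 /dev/sdb1,mp=/mnt/device
----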
375
376
377 [[pct_container_network]]
378 Network
379 ~~~~~~~
380
381 [thumbnail="screenshot/gui-create-ct-network.png"]
382
383 You can configure up to 10 network interfaces for a single container.
384 The corresponding options are called `net0` to `net9`, and they can contain the
following settings:
386
387 include::pct-network-opts.adoc[]
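
For example, an interface using DHCP on the default bridge could be added to an
existing container with a command along these lines:

----
# pct set 100 -net0 name=eth0,bridge=vmbr0,ip=dhcp
----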
388
389
390 [[pct_startup_and_shutdown]]
391 Automatic Start and Shutdown of Containers
392 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
393
394 To automatically start a container when the host system boots, select the
395 option 'Start at boot' in the 'Options' panel of the container in the web
396 interface or run the following command:
397
398 ----
399 # pct set CTID -onboot 1
400 ----
401
402 .Start and Shutdown Order
403 // use the screenshot from qemu - its the same
404 [thumbnail="screenshot/gui-qemu-edit-start-order.png"]
405
406 If you want to fine tune the boot order of your containers, you can use the
407 following parameters:
408
409 * *Start/Shutdown order*: Defines the start order priority. For example, set it
410 to 1 if you want the CT to be the first to be started. (We use the reverse
411 startup order for shutdown, so a container with a start order of 1 would be
412 the last to be shut down)
* *Startup delay*: Defines the interval between this container's start and the
  start of subsequent containers. For example, set it to 240 if you want to wait
415 240 seconds before starting other containers.
416 * *Shutdown timeout*: Defines the duration in seconds {pve} should wait
417 for the container to be offline after issuing a shutdown command.
418 By default this value is set to 60, which means that {pve} will issue a
  shutdown request, wait 60s for the machine to be offline, and if after 60s
  the machine is still online, it will report that the shutdown action failed.
421
422 Please note that containers without a Start/Shutdown order parameter will
423 always start after those where the parameter is set, and this parameter only
424 makes sense between the machines running locally on a host, and not
425 cluster-wide.
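
These parameters are stored in the `onboot` and `startup` options of the
container configuration. A sketch that makes container `100` start first at
boot, waits 30 seconds before the next guest starts, and allows 60 seconds for
shutdown:

----
# pct set 100 -onboot 1 -startup order=1,up=30,down=60
----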
426
427 Hookscripts
428 ~~~~~~~~~~~
429
430 You can add a hook script to CTs with the config property `hookscript`.
431
432 ----
433 # pct set 100 -hookscript local:snippets/hookscript.pl
434 ----
435
It will be called during various phases of the guest's lifetime. For an example
437 and documentation see the example script under
438 `/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
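
A hook script can be any executable stored on a storage that allows the
`snippets` content type. The following shell sketch (a simplified variant of
the shipped Perl example) assumes the script receives the CT ID and the phase
name as its two arguments and merely logs them; remember to make the file
executable before assigning it with `pct set` as shown above:

----
#!/bin/bash
# /var/lib/vz/snippets/hookscript.sh - illustrative only
ctid="$1"   # ID of the container the hook fires for
phase="$2"  # pre-start, post-start, pre-stop or post-stop

echo "GUEST HOOK: CT $ctid, phase $phase"

case "$phase" in
    pre-start)  echo "CT $ctid is about to be started" ;;
    post-start) echo "CT $ctid has been started" ;;
    pre-stop)   echo "CT $ctid is about to be stopped" ;;
    post-stop)  echo "CT $ctid has been stopped" ;;
esac

exit 0
----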
439
440 Security Considerations
441 -----------------------
442
443 Containers use the kernel of the host system. This exposes an attack surface
444 for malicious users. In general, full virtual machines provide better
isolation. This should be considered if containers are provided to unknown or
446 untrusted people.
447
448 To reduce the attack surface, LXC uses many security features like AppArmor,
449 CGroups and kernel namespaces.
450
451 AppArmor
452 ~~~~~~~~
453
454 AppArmor profiles are used to restrict access to possibly dangerous actions.
Some system calls, e.g. `mount`, are prohibited from execution.
456
457 To trace AppArmor activity, use:
458
459 ----
460 # dmesg | grep apparmor
461 ----
462
463 Although it is not recommended, AppArmor can be disabled for a container. This
464 brings security risks with it. Some syscalls can lead to privilege escalation
when executed within a container if the system is misconfigured or if an LXC
or Linux kernel vulnerability exists.
467
468 To disable AppArmor for a container, add the following line to the container
469 configuration file located at `/etc/pve/lxc/CTID.conf`:
470
471 ----
472 lxc.apparmor_profile = unconfined
473 ----
474
475 WARNING: Please note that this is not recommended for production use.
476
477
478 // TODO: describe cgroups + seccomp a bit more.
479 // TODO: pve-lxc-syscalld
480
481
482 Guest Operating System Configuration
483 ------------------------------------
484
485 {pve} tries to detect the Linux distribution in the container, and modifies
486 some files. Here is a short list of things done at container startup:
487
488 set /etc/hostname:: to set the container name
489
490 modify /etc/hosts:: to allow lookup of the local hostname
491
492 network setup:: pass the complete network setup to the container
493
494 configure DNS:: pass information about DNS servers
495
496 adapt the init system:: for example, fix the number of spawned getty processes
497
498 set the root password:: when creating a new container
499
500 rewrite ssh_host_keys:: so that each container has unique keys
501
502 randomize crontab:: so that cron does not start at the same time on all containers
503
504 Changes made by {PVE} are enclosed by comment markers:
505
506 ----
507 # --- BEGIN PVE ---
508 <data>
509 # --- END PVE ---
510 ----
511
512 Those markers will be inserted at a reasonable location in the file. If such a
513 section already exists, it will be updated in place and will not be moved.
514
515 Modification of a file can be prevented by adding a `.pve-ignore.` file for it.
516 For instance, if the file `/etc/.pve-ignore.hosts` exists then the `/etc/hosts`
517 file will not be touched. This can be a simple empty file created via:
518
519 ----
520 # touch /etc/.pve-ignore.hosts
521 ----
522
523 Most modifications are OS dependent, so they differ between different
524 distributions and versions. You can completely disable modifications by
525 manually setting the `ostype` to `unmanaged`.
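
For instance, to opt out of all of these modifications, you could set the
following line in the container configuration (or use
`pct set <CTID> -ostype unmanaged`):

----
ostype: unmanaged
----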
526
527 OS type detection is done by testing for certain files inside the
528 container. {pve} first checks the `/etc/os-release` file
529 footnote:[/etc/os-release replaces the multitude of per-distribution
530 release files https://manpages.debian.org/stable/systemd/os-release.5.en.html].
If that file is not present, or if it does not contain a clearly recognizable
distribution identifier, the following distribution-specific release files are
checked.
534
535 Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)
536
537 Debian:: test /etc/debian_version
538
539 Fedora:: test /etc/fedora-release
540
541 RedHat or CentOS:: test /etc/redhat-release
542
543 ArchLinux:: test /etc/arch-release
544
545 Alpine:: test /etc/alpine-release
546
547 Gentoo:: test /etc/gentoo-release
548
NOTE: Container start fails if the configured `ostype` differs from the
auto-detected type.
551
552
553 [[pct_container_storage]]
554 Container Storage
555 -----------------
556
557 The {pve} LXC container storage model is more flexible than traditional
558 container storage models. A container can have multiple mount points. This
559 makes it possible to use the best suited storage for each application.
560
561 For example the root file system of the container can be on slow and cheap
562 storage while the database can be on fast and distributed storage via a second
563 mount point. See section <<pct_mount_points, Mount Points>> for further
564 details.
565
566 Any storage type supported by the {pve} storage library can be used. This means
567 that containers can be stored on local (for example `lvm`, `zfs` or directory),
568 shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
569 Ceph. Advanced storage features like snapshots or clones can be used if the
570 underlying storage supports them. The `vzdump` backup tool can use snapshots to
571 provide consistent container backups.
572
573 Furthermore, local devices or local directories can be mounted directly using
574 'bind mounts'. This gives access to local resources inside a container with
575 practically zero overhead. Bind mounts can be used as an easy way to share data
576 between containers.
577
578
579 FUSE Mounts
580 ~~~~~~~~~~~
581
WARNING: Because of existing issues in the Linux kernel's freezer subsystem,
the usage of FUSE mounts inside a container is strongly advised against, as
584 containers need to be frozen for suspend or snapshot mode backups.
585
586 If FUSE mounts cannot be replaced by other mounting mechanisms or storage
587 technologies, it is possible to establish the FUSE mount on the Proxmox host
588 and use a bind mount point to make it accessible inside the container.
589
590
591 Using Quotas Inside Containers
592 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
593
Quotas allow you to set limits inside a container on the amount of disk space
that each user can use.
596
597 NOTE: This only works on ext4 image based storage types and currently only
598 works with privileged containers.
599
600 Activating the `quota` option causes the following mount options to be used for
601 a mount point:
602 `usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`
603
604 This allows quotas to be used like on any other system. You can initialize the
605 `/aquota.user` and `/aquota.group` files by running:
606
607 ----
608 # quotacheck -cmug /
609 # quotaon /
610 ----
611
612 Then edit the quotas using the `edquota` command. Refer to the documentation of
613 the distribution running inside the container for details.
614
615 NOTE: You need to run the above commands for every mount point by passing the
616 mount point's path instead of just `/`.
617
618
619 Using ACLs Inside Containers
620 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
621
The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside
623 containers. ACLs allow you to set more detailed file ownership than the
624 traditional user/group/others model.
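
ACL support for a storage backed mount point is controlled with its `acl`
option. A sketch, assuming container `100` and a storage named `local-lvm`,
followed by a typical `setfacl` call executed inside the container (the user
`www-data` is only an example):

----
# pct set 100 -mp0 local-lvm:8,mp=/shared,acl=1
# pct exec 100 -- setfacl -m u:www-data:rwx /shared
----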
625
626
Backup of Container Mount Points
628 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
629
630 To include a mount point in backups, enable the `backup` option for it in the
631 container configuration. For an existing mount point `mp0`
632
633 ----
634 mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
635 ----
636
637 add `backup=1` to enable it.
638
639 ----
640 mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
641 ----
642
643 NOTE: When creating a new mount point in the GUI, this option is enabled by
644 default.
645
646 To disable backups for a mount point, add `backup=0` in the way described
647 above, or uncheck the *Backup* checkbox on the GUI.
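
The same can be done on the command line. As an illustration, re-issuing the
mount point definition from above with the flag appended:

----
# pct set 100 -mp0 guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----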
648
Replication of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
651
652 By default, additional mount points are replicated when the Root Disk is
653 replicated. If you want the {pve} storage replication mechanism to skip a mount
654 point, you can set the *Skip replication* option for that mount point.
655 As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
mount point to a different type of storage when the container has replication
configured requires *Skip replication* to be enabled for that mount point.
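
In the configuration this is expressed through the `replicate` flag of the
mount point. A sketch that adds a mount point excluded from replication, on a
hypothetical non-ZFS storage named `local-lvm`:

----
# pct set 100 -mp1 local-lvm:16,mp=/bulk,replicate=0
----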
658
659
660 Backup and Restore
661 ------------------
662
663
664 Container Backup
665 ~~~~~~~~~~~~~~~~
666
667 It is possible to use the `vzdump` tool for container backup. Please refer to
668 the `vzdump` manual page for details.
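
As a quick illustration, the following would create a snapshot mode backup of
container `100` on the storage `local` (see the `vzdump` man page for all
options):

----
# vzdump 100 -storage local -mode snapshot
----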
669
670
671 Restoring Container Backups
672 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
673
674 Restoring container backups made with `vzdump` is possible using the `pct
675 restore` command. By default, `pct restore` will attempt to restore as much of
676 the backed up container configuration as possible. It is possible to override
677 the backed up configuration by manually setting container options on the
678 command line (see the `pct` manual page for details).
679
680 NOTE: `pvesm extractconfig` can be used to view the backed up configuration
681 contained in a vzdump archive.
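
A minimal sketch of both commands, using a placeholder archive name on the
storage `local`; the new CT ID `123` and the target storage `local-lvm` are
examples:

----
# pvesm extractconfig local:backup/<archive-name>.tar.zst
# pct restore 123 local:backup/<archive-name>.tar.zst -storage local-lvm
----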
682
683 There are two basic restore modes, only differing by their handling of mount
684 points:
685
686
687 ``Simple'' Restore Mode
688 ^^^^^^^^^^^^^^^^^^^^^^^
689
690 If neither the `rootfs` parameter nor any of the optional `mpX` parameters are
691 explicitly set, the mount point configuration from the backed up configuration
692 file is restored using the following steps:
693
694 . Extract mount points and their options from backup
695 . Create volumes for storage backed mount points (on storage provided with the
696 `storage` parameter, or default local storage if unset)
697 . Extract files from backup archive
698 . Add bind and device mount points to restored configuration (limited to root
699 user)
700
701 NOTE: Since bind and device mount points are never backed up, no files are
702 restored in the last step, but only the configuration options. The assumption
703 is that such mount points are either backed up with another mechanism (e.g.,
704 NFS space that is bind mounted into many containers), or not intended to be
705 backed up at all.
706
707 This simple mode is also used by the container restore operations in the web
708 interface.
709
710
711 ``Advanced'' Restore Mode
712 ^^^^^^^^^^^^^^^^^^^^^^^^^
713
714 By setting the `rootfs` parameter (and optionally, any combination of `mpX`
715 parameters), the `pct restore` command is automatically switched into an
716 advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
717 configuration options contained in the backup archive, and instead only uses
718 the options explicitly provided as parameters.
719
720 This mode allows flexible configuration of mount point settings at restore
721 time, for example:
722
723 * Set target storages, volume sizes and other options for each mount point
724 individually
725 * Redistribute backed up files according to new mount point scheme
726 * Restore to device and/or bind mount points (limited to root user)
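
For example (a sketch with placeholder names), the following restores an
archive while moving the root file system to the storage `local-lvm` and
splitting out a new 16GB mount point on `local-zfs`:

----
# pct restore 123 local:backup/<archive-name>.tar.zst \
    -rootfs local-lvm:8 \
    -mp0 local-zfs:16,mp=/data
----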
727
728
729 Managing Containers with `pct`
730 ------------------------------
731
732 The ``Proxmox Container Toolkit'' (`pct`) is the command line tool to manage
733 {pve} containers. It enables you to create or destroy containers, as well as
734 control the container execution (start, stop, reboot, migrate, etc.). It can be
735 used to set parameters in the config file of a container, for example the
736 network configuration or memory limits.
737
738 CLI Usage Examples
739 ~~~~~~~~~~~~~~~~~~
740
741 Create a container based on a Debian template (provided you have already
742 downloaded the template via the web interface)
743
744 ----
745 # pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
746 ----
747
748 Start container 100
749
750 ----
751 # pct start 100
752 ----
753
754 Start a login session via getty
755
756 ----
757 # pct console 100
758 ----
759
760 Enter the LXC namespace and run a shell as root user
761
762 ----
763 # pct enter 100
764 ----
765
766 Display the configuration
767
768 ----
769 # pct config 100
770 ----
771
Add a network interface called `eth0`, bridged to the host bridge `vmbr0`, and
set the address and gateway, while the container is running
774
775 ----
776 # pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
777 ----
778
779 Reduce the memory of the container to 512MB
780
781 ----
782 # pct set 100 -memory 512
783 ----
784
785
786 Obtaining Debugging Logs
787 ~~~~~~~~~~~~~~~~~~~~~~~~
788
789 In case `pct start` is unable to start a specific container, it might be
790 helpful to collect debugging output by running `lxc-start` (replace `ID` with
791 the container's ID):
792
793 ----
794 # lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
795 ----
796
This command will attempt to start the container in foreground mode. To stop
the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.
799
800 The collected debug log is written to `/tmp/lxc-ID.log`.
801
802 NOTE: If you have changed the container's configuration since the last start
803 attempt with `pct start`, you need to run `pct start` at least once to also
804 update the configuration used by `lxc-start`.
805
806 [[pct_migration]]
807 Migration
808 ---------
809
810 If you have a cluster, you can migrate your Containers with
811
812 ----
813 # pct migrate <ctid> <target>
814 ----
815
816 This works as long as your Container is offline. If it has local volumes or
817 mount points defined, the migration will copy the content over the network to
818 the target host if the same storage is defined there.
819
Running containers cannot be live-migrated due to technical limitations. You
can do a restart migration, which shuts down, moves and then starts a container
again on the target node. As containers are very lightweight, this normally
results in a downtime of only a few hundred milliseconds.
824
825 A restart migration can be done through the web interface or by using the
826 `--restart` flag with the `pct migrate` command.
827
828 A restart migration will shut down the Container and kill it after the
829 specified timeout (the default is 180 seconds). Then it will migrate the
830 Container like an offline migration and when finished, it starts the Container
831 on the target node.
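
Putting this together, a restart migration from the command line might look
like the following sketch (the target node name and the timeout value are
examples):

----
# pct migrate 100 target-node --restart --timeout 120
----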
832
833 [[pct_configuration]]
834 Configuration
835 -------------
836
837 The `/etc/pve/lxc/<CTID>.conf` file stores container configuration, where
838 `<CTID>` is the numeric ID of the given container. Like all other files stored
839 inside `/etc/pve/`, they get automatically replicated to all other cluster
840 nodes.
841
842 NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
843 unique cluster wide.
844
845 .Example Container Configuration
846 ----
847 ostype: debian
848 arch: amd64
849 hostname: www
850 memory: 512
851 swap: 512
852 net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
853 rootfs: local:107/vm-107-disk-1.raw,size=7G
854 ----
855
856 The configuration files are simple text files. You can edit them using a normal
857 text editor, for example, `vi` or `nano`.
858 This is sometimes useful to do small corrections, but keep in mind that you
859 need to restart the container to apply such changes.
860
861 For that reason, it is usually better to use the `pct` command to generate and
862 modify those files, or do the whole thing using the GUI.
863 Our toolkit is smart enough to instantaneously apply most changes to running
864 containers. This feature is called ``hot plug'', and there is no need to restart
865 the container in that case.
866
In cases where a change cannot be hot-plugged, it will be registered as a
pending change (shown in red in the GUI). Pending changes are only applied
after the container is rebooted.
870
871
872 File Format
873 ~~~~~~~~~~~
874
875 The container configuration file uses a simple colon separated key/value
876 format. Each line has the following format:
877
878 -----
879 # this is a comment
880 OPTION: value
881 -----
882
883 Blank lines in those files are ignored, and lines starting with a `#` character
884 are treated as comments and are also ignored.
885
886 It is possible to add low-level, LXC style configuration directly, for example:
887
888 ----
889 lxc.init_cmd: /sbin/my_own_init
890 ----
891
892 or
893
894 ----
895 lxc.init_cmd = /sbin/my_own_init
896 ----
897
898 The settings are passed directly to the LXC low-level tools.
899
900
901 [[pct_snapshots]]
902 Snapshots
903 ~~~~~~~~~
904
905 When you create a snapshot, `pct` stores the configuration at snapshot time
906 into a separate snapshot section within the same configuration file. For
907 example, after creating a snapshot called ``testsnapshot'', your configuration
908 file will look like this:
909
910 .Container configuration with snapshot
911 ----
912 memory: 512
913 swap: 512
parent: testsnapshot
915 ...
916
[testsnapshot]
918 memory: 512
919 swap: 512
920 snaptime: 1457170803
921 ...
922 ----
923
924 There are a few snapshot related properties like `parent` and `snaptime`. The
925 `parent` property is used to store the parent/child relationship between
926 snapshots. `snaptime` is the snapshot creation time stamp (Unix epoch).
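
Snapshots are usually managed through `pct` rather than by editing the file
directly. The corresponding subcommands, shown for container `100` and the
snapshot name used above:

----
# pct snapshot 100 testsnapshot
# pct listsnapshot 100
# pct rollback 100 testsnapshot
# pct delsnapshot 100 testsnapshot
----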
927
928
929 [[pct_options]]
930 Options
931 ~~~~~~~
932
933 include::pct.conf.5-opts.adoc[]
934
935
936 Locks
937 -----
938
939 Container migrations, snapshots and backups (`vzdump`) set a lock to prevent
940 incompatible concurrent actions on the affected container. Sometimes you need
941 to remove such a lock manually (e.g., after a power failure).
942
943 ----
944 # pct unlock <CTID>
945 ----
946
947 CAUTION: Only do this if you are sure the action which set the lock is no
948 longer running.
949
950
951 ifdef::manvolnum[]
952
953 Files
954 ------
955
956 `/etc/pve/lxc/<CTID>.conf`::
957
958 Configuration file for the container '<CTID>'.
959
960
961 include::pve-copyright.adoc[]
962 endif::manvolnum[]