1 [[chapter_pct]]
2 ifdef::manvolnum[]
3 pct(1)
4 ======
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 pct - Tool to manage Linux Containers (LXC) on Proxmox VE
11
12
13 SYNOPSIS
14 --------
15
16 include::pct.1-synopsis.adoc[]
17
18 DESCRIPTION
19 -----------
20 endif::manvolnum[]
21
22 ifndef::manvolnum[]
23 Proxmox Container Toolkit
24 =========================
25 :pve-toplevel:
26 endif::manvolnum[]
27 ifdef::wiki[]
28 :title: Linux Container
29 endif::wiki[]
30
31 Containers are a lightweight alternative to fully virtualized machines (VMs).
32 They use the kernel of the host system that they run on, instead of emulating a
33 full operating system (OS). This means that containers can access resources on
34 the host system directly.
35
36 The runtime costs for containers are low, usually negligible. However, there
37 are some drawbacks that need to be considered:
38
39 * Only Linux distributions can be run in containers. (It is not
40 possible to run FreeBSD or MS Windows inside a container.)
41
42 * For security reasons, access to host resources needs to be restricted. Containers
43 run in their own separate namespaces. Additionally, some syscalls are not
44 allowed within containers.
45
46 {pve} uses https://linuxcontainers.org/[LXC] as underlying container
47 technology. The ``Proxmox Container Toolkit'' (`pct`) simplifies the usage of LXC
48 containers.
49
50 Containers are tightly integrated with {pve}. This means that they are aware of
51 the cluster setup, and they can use the same network and storage resources as
52 virtual machines. You can also use the {pve} firewall, or manage containers
53 using the HA framework.
54
55 Our primary goal is to offer an environment as one would get from a
56 VM, but without the additional overhead. We call this ``System
57 Containers''.
58
59 NOTE: If you want to run micro-containers (with docker, rkt, etc.) it
60 is best to run them inside a VM.
61
62
63 Technology Overview
64 -------------------
65
66 * LXC (https://linuxcontainers.org/)
67
68 * Integrated into {pve} graphical user interface (GUI)
69
70 * Easy to use command line tool `pct`
71
72 * Access via {pve} REST API
73
74 * lxcfs to provide containerized /proc file system
75
76 * CGroups (control groups) for resource allocation
77
78 * AppArmor/Seccomp to improve security
79
80 * Modern Linux kernels
81
82 * Image based deployment (templates)
83
84 * Uses {pve} storage library
85
86 * Container setup from host (network, DNS, storage, etc.)
87
88 Security Considerations
89 -----------------------
90
91 Containers use the kernel of the host system. This exposes a large attack
92 surface for malicious users, and should be considered if containers
93 are provided to untrusted users. In general, full
94 virtual machines provide better isolation.
95
96 However, LXC uses many security features like AppArmor, CGroups and kernel
97 namespaces to reduce the attack surface.
98
99 AppArmor profiles are used to restrict access to possibly dangerous actions.
100 Some system calls, e.g. `mount`, are prohibited from execution.
101
102 To trace AppArmor activity, use:
103
104 ----
105 # dmesg | grep apparmor
106 ----
107
108 Guest Operating System Configuration
109 ------------------------------------
110
111 {pve} tries to detect the Linux distribution in the container, and modifies some
112 files. Here is a short list of things done at container startup:
113
114 set /etc/hostname:: to set the container name
115
116 modify /etc/hosts:: to allow lookup of the local hostname
117
118 network setup:: pass the complete network setup to the container
119
120 configure DNS:: pass information about DNS servers
121
122 adapt the init system:: for example, fix the number of spawned getty processes
123
124 set the root password:: when creating a new container
125
126 rewrite ssh_host_keys:: so that each container has unique keys
127
128 randomize crontab:: so that cron does not start at the same time on all containers
129
130 Changes made by {PVE} are enclosed by comment markers:
131
132 ----
133 # --- BEGIN PVE ---
134 <data>
135 # --- END PVE ---
136 ----
137
138 Those markers will be inserted at a reasonable location in the
139 file. If such a section already exists, it will be updated in place
140 and will not be moved.
141
142 Modification of a file can be prevented by adding a `.pve-ignore.`
143 file for it. For instance, if the file `/etc/.pve-ignore.hosts`
144 exists then the `/etc/hosts` file will not be touched. This can be a
145 simple empty file created via:
146
147 ----
148 # touch /etc/.pve-ignore.hosts
149 ----
150
151 Most modifications are OS dependent, so they differ between different
152 distributions and versions. You can completely disable modifications
153 by manually setting the `ostype` to `unmanaged`.
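
A minimal sketch, assuming a hypothetical container with ID `100`:

----
# pct set 100 -ostype unmanaged
----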
154
155 OS type detection is done by testing for certain files inside the
156 container:
157
158 Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)
159
160 Debian:: test /etc/debian_version
161
162 Fedora:: test /etc/fedora-release
163
164 RedHat or CentOS:: test /etc/redhat-release
165
166 ArchLinux:: test /etc/arch-release
167
168 Alpine:: test /etc/alpine-release
169
170 Gentoo:: test /etc/gentoo-release
171
172 NOTE: Container start fails if the configured `ostype` differs from the auto
173 detected type.
174
175
176 [[pct_container_images]]
177 Container Images
178 ----------------
179
180 Container images, sometimes also referred to as ``templates'' or
181 ``appliances'', are `tar` archives which contain everything to run a
182 container. `pct` uses them to create a new container, for example:
183
184 ----
185 # pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
186 ----
187
188 {pve} itself provides a variety of basic templates for the most common
189 Linux distributions. They can be downloaded using the GUI or the
190 `pveam` (short for {pve} Appliance Manager) command line utility.
191 Additionally, https://www.turnkeylinux.org/[TurnKey Linux]
192 container templates are also available to download.
193
194 The list of available templates is updated daily via cron. To trigger it manually:
195
196 ----
197 # pveam update
198 ----
199
200 To view the list of available images run:
201
202 ----
203 # pveam available
204 ----
205
206 You can restrict this large list by specifying the `section` you are
207 interested in, for example basic `system` images:
208
209 .List available system images
210 ----
211 # pveam available --section system
212 system alpine-3.10-default_20190626_amd64.tar.xz
213 system alpine-3.9-default_20190224_amd64.tar.xz
214 system archlinux-base_20190924-1_amd64.tar.gz
215 system centos-6-default_20191016_amd64.tar.xz
216 system centos-7-default_20190926_amd64.tar.xz
217 system centos-8-default_20191016_amd64.tar.xz
218 system debian-10.0-standard_10.0-1_amd64.tar.gz
219 system debian-8.0-standard_8.11-1_amd64.tar.gz
220 system debian-9.0-standard_9.7-1_amd64.tar.gz
221 system fedora-30-default_20190718_amd64.tar.xz
222 system fedora-31-default_20191029_amd64.tar.xz
223 system gentoo-current-default_20190718_amd64.tar.xz
224 system opensuse-15.0-default_20180907_amd64.tar.xz
225 system opensuse-15.1-default_20190719_amd64.tar.xz
226 system ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz
227 system ubuntu-18.04-standard_18.04.1-1_amd64.tar.gz
228 system ubuntu-19.04-standard_19.04-1_amd64.tar.gz
229 system ubuntu-19.10-standard_19.10-1_amd64.tar.gz
230 ----
231
232 Before you can use such a template, you need to download it into one
233 of your storages. You can simply use storage `local` for that
234 purpose. For clustered installations, it is preferred to use a shared
235 storage so that all nodes can access those images.
236
237 ----
238 # pveam download local debian-10.0-standard_10.0-1_amd64.tar.gz
239 ----
240
241 You are now ready to create containers using that image, and you can
242 list all downloaded images on storage `local` with:
243
244 ----
245 # pveam list local
246 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz 219.95MB
247 ----
248
249 The above command shows you the full {pve} volume identifiers. They include
250 the storage name, and most other {pve} commands can use them. For
251 example you can delete that image later with:
252
253 ----
254 # pveam remove local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz
255 ----
256
257 [[pct_container_storage]]
258 Container Storage
259 -----------------
260
261 The {pve} LXC container storage model is more flexible than traditional
262 container storage models. A container can have multiple mount points. This makes
263 it possible to use the best suited storage for each application.
264
265 For example the root file system of the container can be on slow and cheap
266 storage while the database can be on fast and distributed storage via a second
267 mount point. See section <<pct_mount_points, Mount Points>> for further details.
268
269 Any storage type supported by the {pve} storage library can be used. This means
270 that containers can be stored on local (for example `lvm`, `zfs` or directory),
271 shared external (like `iSCSI`, `NFS`) or even distributed storage systems like
272 Ceph. Advanced storage features like snapshots or clones can be used if the
273 underlying storage supports them. The `vzdump` backup tool can use snapshots to
274 provide consistent container backups.
275
276 Furthermore, local devices or local directories can be mounted directly using
277 'bind mounts'. This gives access to local resources inside a container with
278 practically zero overhead. Bind mounts can be used as an easy way to share data
279 between containers.
280
281
282 FUSE Mounts
283 ~~~~~~~~~~~
284
285 WARNING: Because of existing issues in the Linux kernel's freezer
286 subsystem, the usage of FUSE mounts inside a container is strongly
287 advised against, as containers need to be frozen for suspend or
288 snapshot mode backups.
289
290 If FUSE mounts cannot be replaced by other mounting mechanisms or storage
291 technologies, it is possible to establish the FUSE mount on the Proxmox host
292 and use a bind mount point to make it accessible inside the container.
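
A sketch of this workaround, assuming a FUSE tool such as `sshfs` is installed
on the host, `user@fileserver:/export` is a placeholder remote, and `100` is a
hypothetical container ID:

----
# mkdir -p /mnt/fuse-share
# sshfs user@fileserver:/export /mnt/fuse-share
# pct set 100 -mp0 /mnt/fuse-share,mp=/mnt/share
----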
293
294
295 Using Quotas Inside Containers
296 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
297
298 Quotas allow setting limits inside a container on the amount of disk
299 space that each user can use.
300
301 NOTE: This only works on ext4 image based storage types and is currently only
302 available for privileged containers.
303
304 Activating the `quota` option causes the following mount options to be
305 used for a mount point:
306 `usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`
307
308 This allows quotas to be used like on any other system. You
309 can initialize the `/aquota.user` and `/aquota.group` files by running
310
311 ----
312 # quotacheck -cmug /
313 # quotaon /
314 ----
315
316 and edit the quotas via the `edquota` command. Refer to the documentation
317 of the distribution running inside the container for details.
318
319 NOTE: You need to run the above commands for every mount point by passing
320 the mount point's path instead of just `/`.
321
322
323 Using ACLs Inside Containers
324 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
325
326 The standard Posix **A**ccess **C**ontrol **L**ists are also available inside
327 containers. ACLs allow you to set more detailed file ownership than the
328 traditional user/group/others model.
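
Whether ACLs are honored on a storage backed mount point is controlled by its
`acl` option (see <<pct_mount_points, Mount Points>>). A sketch, assuming a
hypothetical container `100` and an 8 GiB volume on storage `local`:

----
# pct set 100 -mp0 local:8,mp=/shared,acl=1
----

Inside the container, the standard `setfacl` and `getfacl` tools can then be
used to manage the entries.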
329
330
331 Backup of Container Mount Points
332 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
333
334 To include a mount point in backups, enable the `backup` option for it in the
335 container configuration. For an existing mount point `mp0`
336
337 ----
338 mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G
339 ----
340
341 add `backup=1` to enable it.
342
343 ----
344 mp0: guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
345 ----
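
The same change can also be made from the command line; a sketch, assuming the
mount point above belongs to container `100`:

----
# pct set 100 -mp0 guests:subvol-100-disk-1,mp=/root/files,size=8G,backup=1
----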
346
347 NOTE: When creating a new mount point in the GUI, this option is enabled by
348 default.
349
350 To disable backups for a mount point, add `backup=0` in the way described above,
351 or uncheck the *Backup* checkbox on the GUI.
352
353 Replication of Container Mount Points
354 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
355
356 By default, additional mount points are replicated when the Root Disk is
357 replicated. If you want the {pve} storage replication mechanism to skip a mount
358 point, you can set the *Skip replication* option for that mount point. +
359 As of {pve} 5.0, replication requires a storage of type `zfspool`. Adding a
360 mount point to a different type of storage when the container has replication
361 configured requires *Skip replication* to be enabled for that mount point.
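
On the command line, *Skip replication* corresponds to the `replicate=0` mount
point option. A sketch with hypothetical container ID, storage and path:

----
# pct set 100 -mp1 local-lvm:8,mp=/scratch,replicate=0
----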
362
363 [[pct_settings]]
364 Container Settings
365 ------------------
366
367 [[pct_general]]
368 General Settings
369 ~~~~~~~~~~~~~~~~
370
371 [thumbnail="screenshot/gui-create-ct-general.png"]
372
373 General settings of a container include
374
375 * the *Node* : the physical server on which the container will run
376 * the *CT ID*: a unique number in this {pve} installation used to identify your container
377 * *Hostname*: the hostname of the container
378 * *Resource Pool*: a logical group of containers and VMs
379 * *Password*: the root password of the container
380 * *SSH Public Key*: a public key for connecting to the root account over SSH
381 * *Unprivileged container*: this option allows you to choose at creation time
382 whether to create a privileged or unprivileged container.
383
384 Unprivileged Containers
385 ^^^^^^^^^^^^^^^^^^^^^^^
386
387 Unprivileged containers use a new kernel feature called user namespaces. The
388 root UID 0 inside the container is mapped to an unprivileged user outside the
389 container. This means that most security issues (container escape, resource
390 abuse, etc.) in these containers will affect a random unprivileged user, and
391 would be a generic kernel security bug rather than an LXC issue. The LXC team
392 thinks unprivileged containers are safe by design.
393
394 This is the default option when creating a new container.
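
When creating a container on the command line, the choice can be made explicit
with the `unprivileged` flag; a sketch, reusing the Debian template from the
<<pct_container_images, Container Images>> section:

----
# pct create 999 local:vztmpl/debian-10.0-standard_10.0-1_amd64.tar.gz -unprivileged 1
----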
395
396 NOTE: If the container uses systemd as an init system, please be
397 aware the systemd version running inside the container should be equal to
398 or greater than 220.
399
400
401 Privileged Containers
402 ^^^^^^^^^^^^^^^^^^^^^
403
404 Security in containers is achieved by using mandatory access control
405 (AppArmor), seccomp filters and namespaces. The LXC team considers this kind of
406 container unsafe, and they will not consider new container escape exploits
407 to be security issues worthy of a CVE and quick fix. That's why privileged
408 containers should only be used in trusted environments.
409
410 WARNING: Although it is not recommended, AppArmor can be disabled for a
411 container. This brings security risks with it. Some syscalls can lead to
412 privilege escalation when executed within a container if the system is
413 misconfigured or if an LXC or Linux kernel vulnerability exists.
414
415 To disable AppArmor for a container, add the following line to the container
416 configuration file located at `/etc/pve/lxc/CTID.conf`:
417
418 ----
419 lxc.apparmor_profile = unconfined
420 ----
421
422 Please note that this is not recommended for production use.
423
424
425
426 [[pct_cpu]]
427 CPU
428 ~~~
429
430 [thumbnail="screenshot/gui-create-ct-cpu.png"]
431
432 You can restrict the number of visible CPUs inside the container using the
433 `cores` option. This is implemented using the Linux 'cpuset' cgroup
434 (**c**ontrol *group*). A special task inside `pvestatd` tries to distribute
435 running containers among available CPUs. To view the assigned CPUs run
436 the following command:
437
438 ----
439 # pct cpusets
440 ---------------------
441 102: 6 7
442 105: 2 3 4 5
443 108: 0 1
444 ---------------------
445 ----
446
447 Containers use the host kernel directly. All tasks inside a container are
448 handled by the host CPU scheduler. {pve} uses the Linux 'CFS' (**C**ompletely
449 **F**air **S**cheduler) scheduler by default, which has additional bandwidth
450 control options.
451
452 [horizontal]
453
454 `cpulimit`: :: You can use this option to further limit assigned CPU
455 time. Please note that this is a floating point number, so it is
456 perfectly valid to assign two cores to a container, but restrict
457 overall CPU consumption to half a core.
458 +
459 ----
460 cores: 2
461 cpulimit: 0.5
462 ----
463
464 `cpuunits`: :: This is a relative weight passed to the kernel
465 scheduler. The larger the number is, the more CPU time this container
466 gets. The number is relative to the weights of all the other running
467 containers. The default is 1024. You can use this setting to
468 prioritize some containers.
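
Both options can be changed at runtime with `pct set`; a sketch, assuming a
hypothetical container `100`:

----
# pct set 100 -cores 2 -cpulimit 0.5
# pct set 100 -cpuunits 2048
----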
469
470
471 [[pct_memory]]
472 Memory
473 ~~~~~~
474
475 [thumbnail="screenshot/gui-create-ct-memory.png"]
476
477 Container memory is controlled using the cgroup memory controller.
478
479 [horizontal]
480
481 `memory`: :: Limit overall memory usage. This corresponds
482 to the `memory.limit_in_bytes` cgroup setting.
483
484 `swap`: :: Allows the container to use additional swap memory from the
485 host swap space. This corresponds to the `memory.memsw.limit_in_bytes`
486 cgroup setting, which is set to the sum of both values (`memory +
487 swap`).
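
A minimal example, assuming a hypothetical container `100` that should get
1024 MiB of RAM plus 512 MiB of swap (both options take megabytes):

----
# pct set 100 -memory 1024 -swap 512
----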
488
489
490 [[pct_mount_points]]
491 Mount Points
492 ~~~~~~~~~~~~
493
494 [thumbnail="screenshot/gui-create-ct-root-disk.png"]
495
496 The root mount point is configured with the `rootfs` property. You can
497 configure up to 256 additional mount points. The corresponding options
498 are called `mp0` to `mp255`. They can contain the following settings:
499
500 include::pct-mountpoint-opts.adoc[]
501
502 Currently there are three types of mount points: storage backed
503 mount points, bind mounts, and device mounts.
504
505 .Typical container `rootfs` configuration
506 ----
507 rootfs: thin1:base-100-disk-1,size=8G
508 ----
509
510
511 Storage Backed Mount Points
512 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
513
514 Storage backed mount points are managed by the {pve} storage subsystem and come
515 in three different flavors:
516
517 - Image based: these are raw images containing a single ext4 formatted file
518 system.
519 - ZFS subvolumes: these are technically bind mounts, but with managed storage,
520 and thus allow resizing and snapshotting.
521 - Directories: passing `size=0` triggers a special case where instead of a raw
522 image a directory is created.
523
524 NOTE: The special option syntax `STORAGE_ID:SIZE_IN_GB` for storage backed
525 mount point volumes will automatically allocate a volume of the specified size
526 on the specified storage. E.g., calling
527 `pct set 100 -mp0 thin1:10,mp=/path/in/container` will allocate a 10GB volume
528 on the storage `thin1` and replace the volume ID place holder `10` with the
529 allocated volume ID.
530
531
532 Bind Mount Points
533 ^^^^^^^^^^^^^^^^^
534
535 Bind mounts allow you to access arbitrary directories from your Proxmox VE host
536 inside a container. Some potential use cases are:
537
538 - Accessing your home directory in the guest
539 - Accessing an USB device directory in the guest
540 - Accessing an NFS mount from the host in the guest
541
542 Bind mounts are considered to not be managed by the storage subsystem, so you
543 cannot make snapshots or deal with quotas from inside the container. With
544 unprivileged containers you might run into permission problems caused by the
545 user mapping and cannot use ACLs.
546
547 NOTE: The contents of bind mount points are not backed up when using `vzdump`.
548
549 WARNING: For security reasons, bind mounts should only be established
550 using source directories especially reserved for this purpose, e.g., a
551 directory hierarchy under `/mnt/bindmounts`. Never bind mount system
552 directories like `/`, `/var` or `/etc` into a container - this poses a
553 great security risk.
554
555 NOTE: The bind mount source path must not contain any symlinks.
556
557 For example, to make the directory `/mnt/bindmounts/shared` accessible in the
558 container with ID `100` under the path `/shared`, use a configuration line like
559 `mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
560 Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
561 achieve the same result.
562
563
564 Device Mount Points
565 ^^^^^^^^^^^^^^^^^^^
566
567 Device mount points allow mounting block devices of the host directly into the
568 container. Similar to bind mounts, device mounts are not managed by {PVE}'s
569 storage subsystem, but the `quota` and `acl` options will be honored.
570
571 NOTE: Device mount points should only be used under special circumstances. In
572 most cases a storage backed mount point offers the same performance and a lot
573 more features.
574
575 NOTE: The contents of device mount points are not backed up when using `vzdump`.
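
A sketch, assuming a hypothetical host block device `/dev/sdb1` that should
appear under `/mnt/device` in container `100`:

----
# pct set 100 -mp1 /dev/sdb1,mp=/mnt/device
----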
576
577
578 [[pct_container_network]]
579 Network
580 ~~~~~~~
581
582 [thumbnail="screenshot/gui-create-ct-network.png"]
583
584 You can configure up to 10 network interfaces for a single
585 container. The corresponding options are called `net0` to `net9`, and
586 they can contain the following settings:
587
588 include::pct-network-opts.adoc[]
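
A typical interface definition in `/etc/pve/lxc/<CTID>.conf` might look like
the following (bridge and address values are placeholders):

----
net0: name=eth0,bridge=vmbr0,ip=dhcp,firewall=1,type=veth
----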
589
590
591 [[pct_startup_and_shutdown]]
592 Automatic Start and Shutdown of Containers
593 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
594
595 To automatically start a container when the host system boots, select the
596 option 'Start at boot' in the 'Options' panel of the container in the web
597 interface or run the following command:
598
599 ----
600 # pct set CTID -onboot 1
601 ----
602
603 .Start and Shutdown Order
604 // use the screenshot from qemu - its the same
605 [thumbnail="screenshot/gui-qemu-edit-start-order.png"]
606
607 If you want to fine tune the boot order of your containers, you can use the following
608 parameters:
609
610 * *Start/Shutdown order*: Defines the start order priority. For example, set it to 1 if
611 you want the CT to be the first to be started. (We use the reverse startup
612 order for shutdown, so a container with a start order of 1 would be the last to
613 be shut down)
614 * *Startup delay*: Defines the interval between this container's start and subsequent
615 containers' starts. For example, set it to 240 if you want to wait 240 seconds before starting
616 other containers.
617 * *Shutdown timeout*: Defines the duration in seconds {pve} should wait
618 for the container to be offline after issuing a shutdown command.
619 By default this value is set to 60, which means that {pve} will issue a
620 shutdown request, wait 60s for the machine to be offline, and if after 60s
621 the machine is still online, report that the shutdown action failed.
622
623 Please note that containers without a Start/Shutdown order parameter will always
624 start after those where the parameter is set. Furthermore, this parameter only
625 applies to machines running locally on a host, and not
626 cluster-wide.
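
These three values map to the `startup` option; a hedged example that starts a
hypothetical container `100` first, waits 240 seconds before starting the next
guest, and allows 60 seconds for shutdown:

----
# pct set 100 -onboot 1 -startup order=1,up=240,down=60
----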
627
628 Hookscripts
629 ~~~~~~~~~~~
630
631 You can add a hook script to CTs with the config property `hookscript`.
632
633 ----
634 # pct set 100 -hookscript local:snippets/hookscript.pl
635 ----
636
637 It will be called during various phases of the guest's lifetime.
638 For an example and documentation see the example script under
639 `/usr/share/pve-docs/examples/guest-example-hookscript.pl`.
640
641 Backup and Restore
642 ------------------
643
644
645 Container Backup
646 ~~~~~~~~~~~~~~~~
647
648 It is possible to use the `vzdump` tool for container backup. Please
649 refer to the `vzdump` manual page for details.
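
A minimal sketch, assuming container `100` and the default `local` storage;
see the `vzdump` manual page for the full set of options:

----
# vzdump 100 --mode snapshot --storage local
----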
650
651
652 Restoring Container Backups
653 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
654
655 Restoring container backups made with `vzdump` is possible using the
656 `pct restore` command. By default, `pct restore` will attempt to restore as much
657 of the backed up container configuration as possible. It is possible to override
658 the backed up configuration by manually setting container options on the command
659 line (see the `pct` manual page for details).
660
661 NOTE: `pvesm extractconfig` can be used to view the backed up configuration
662 contained in a vzdump archive.
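
For example, with a hypothetical archive name on storage `local`:

----
# pvesm extractconfig local:backup/vzdump-lxc-100-2019_12_31-12_00_00.tar.gz
----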
663
664 There are two basic restore modes, only differing by their handling of mount
665 points:
666
667
668 ``Simple'' Restore Mode
669 ^^^^^^^^^^^^^^^^^^^^^^^
670
671 If neither the `rootfs` parameter nor any of the optional `mpX` parameters
672 are explicitly set, the mount point configuration from the backed up
673 configuration file is restored using the following steps:
674
675 . Extract mount points and their options from backup
676 . Create volumes for storage backed mount points (on storage provided with the
677 `storage` parameter, or default local storage if unset)
678 . Extract files from backup archive
679 . Add bind and device mount points to restored configuration (limited to root user)
680
681 NOTE: Since bind and device mount points are never backed up, no files are
682 restored in the last step, but only the configuration options. The assumption
683 is that such mount points are either backed up with another mechanism (e.g.,
684 NFS space that is bind mounted into many containers), or not intended to be
685 backed up at all.
686
687 This simple mode is also used by the container restore operations in the web
688 interface.
689
690
691 ``Advanced'' Restore Mode
692 ^^^^^^^^^^^^^^^^^^^^^^^^^
693
694 By setting the `rootfs` parameter (and optionally, any combination of `mpX`
695 parameters), the `pct restore` command is automatically switched into an
696 advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
697 configuration options contained in the backup archive, and instead only
698 uses the options explicitly provided as parameters.
699
700 This mode allows flexible configuration of mount point settings at restore time,
701 for example:
702
703 * Set target storages, volume sizes and other options for each mount point
704 individually
705 * Redistribute backed up files according to new mount point scheme
706 * Restore to device and/or bind mount points (limited to root user)
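
A hedged sketch of an advanced restore, with hypothetical archive and storage
names, placing the root file system on `local-lvm` and the first mount point
on `thin1`:

----
# pct restore 101 local:backup/vzdump-lxc-100-2019_12_31-12_00_00.tar.gz \
    -rootfs local-lvm:8 -mp0 thin1:16,mp=/var/lib/mysql
----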
707
708
709 Managing Containers with `pct`
710 ------------------------------
711
712 The "Proxmox Container Toolkit" (`pct`) is the command line tool to manage {pve}
713 containers. It enables you to create or destroy containers, as well as control the
714 container execution (start, stop, reboot, migrate, etc.). It can be used to set
715 parameters in the config file of a container, for example the network
716 configuration or memory limits.
717
718 CLI Usage Examples
719 ~~~~~~~~~~~~~~~~~~
720
721 Create a container based on a Debian template (provided you have
722 already downloaded the template via the web interface)
723
724 ----
725 # pct create 100 /var/lib/vz/template/cache/debian-10.0-standard_10.0-1_amd64.tar.gz
726 ----
727
728 Start container 100
729
730 ----
731 # pct start 100
732 ----
733
734 Start a login session via getty
735
736 ----
737 # pct console 100
738 ----
739
740 Enter the LXC namespace and run a shell as root user
741
742 ----
743 # pct enter 100
744 ----
745
746 Display the configuration
747
748 ----
749 # pct config 100
750 ----
751
752 Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
753 set the address and gateway, while it's running
754
755 ----
756 # pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
757 ----
758
759 Reduce the memory of the container to 512MB
760
761 ----
762 # pct set 100 -memory 512
763 ----
764
765
766 Obtaining Debugging Logs
767 ~~~~~~~~~~~~~~~~~~~~~~~~
768
769 In case `pct start` is unable to start a specific container, it might be
770 helpful to collect debugging output by running `lxc-start` (replace `ID` with
771 the container's ID):
772
773 ----
774 # lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log
775 ----
776
777 This command will attempt to start the container in foreground mode. To stop
778 the container, run `pct shutdown ID` or `pct stop ID` in a
779 second terminal.
780
781 The collected debug log is written to `/tmp/lxc-ID.log`.
782
783 NOTE: If you have changed the container's configuration since the last start
784 attempt with `pct start`, you need to run `pct start` at least once to also
785 update the configuration used by `lxc-start`.
786
787 [[pct_migration]]
788 Migration
789 ---------
790
791 If you have a cluster, you can migrate your Containers with
792
793 ----
794 # pct migrate <ctid> <target>
795 ----
796
797 This works as long as your Container is offline. If it has local volumes or
798 mount points defined, the migration will copy the content over the network to
799 the target host if the same storage is defined there.
800
801 If you want to migrate online Containers, the only way is to use
802 restart migration. This can be initiated with the -restart flag and the optional
803 -timeout parameter.
804
805 A restart migration will shut down the Container and kill it after the specified
806 timeout (the default is 180 seconds). Then it will migrate the Container
807 like an offline migration and when finished, it starts the Container on the
808 target node.
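
A sketch of a restart migration, assuming container `101` and a cluster node
named `node2`:

----
# pct migrate 101 node2 -restart -timeout 120
----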
809
810 [[pct_configuration]]
811 Configuration
812 -------------
813
814 The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
815 where `<CTID>` is the numeric ID of the given container. Like all
816 other files stored inside `/etc/pve/`, they get automatically
817 replicated to all other cluster nodes.
818
819 NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
820 unique cluster wide.
821
822 .Example Container Configuration
823 ----
824 ostype: debian
825 arch: amd64
826 hostname: www
827 memory: 512
828 swap: 512
829 net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
830 rootfs: local:107/vm-107-disk-1.raw,size=7G
831 ----
832
833 The configuration files are simple text files. You can edit them
834 using a normal text editor (`vi`, `nano`, etc.). This is sometimes
835 useful for small corrections, but keep in mind that you need to
836 restart the container to apply such changes.
837
838 For that reason, it is usually better to use the `pct` command to
839 generate and modify those files, or do the whole thing using the GUI.
840 Our toolkit is smart enough to instantaneously apply most changes to
841 running containers. This feature is called "hot plug", and there is no
842 need to restart the container in that case.
843
844 In cases where a change cannot be hot plugged, it will be registered
845 as a pending change (shown in red in the GUI). Such changes will only
846 be applied after rebooting the container.
847
848
849 File Format
850 ~~~~~~~~~~~
851
852 The container configuration file uses a simple colon separated
853 key/value format. Each line has the following format:
854
855 -----
856 # this is a comment
857 OPTION: value
858 -----
859
860 Blank lines in those files are ignored, and lines starting with a `#`
861 character are treated as comments and are also ignored.
862
863 It is possible to add low-level, LXC style configuration directly, for
864 example:
865
866 ----
867 lxc.init_cmd: /sbin/my_own_init
868 ----
869
870 or
871
872 ----
873 lxc.init_cmd = /sbin/my_own_init
874 ----
875
876 The settings are passed directly to the LXC low-level tools.
877
878
879 [[pct_snapshots]]
880 Snapshots
881 ~~~~~~~~~
882
883 When you create a snapshot, `pct` stores the configuration at snapshot
884 time into a separate snapshot section within the same configuration
885 file. For example, after creating a snapshot called ``testsnapshot'',
886 your configuration file will look like this:
887
888 .Container configuration with snapshot
889 ----
890 memory: 512
891 swap: 512
892 parent: testsnapshot
893 ...
894
895 [testsnapshot]
896 memory: 512
897 swap: 512
898 snaptime: 1457170803
899 ...
900 ----
901
902 There are a few snapshot related properties like `parent` and
903 `snaptime`. The `parent` property is used to store the parent/child
904 relationship between snapshots. `snaptime` is the snapshot creation
905 time stamp (Unix epoch).
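
Snapshots are usually managed through `pct` rather than by editing the file; a
short sketch, assuming a hypothetical container `100`:

----
# pct snapshot 100 testsnapshot
# pct listsnapshot 100
# pct rollback 100 testsnapshot
# pct delsnapshot 100 testsnapshot
----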
906
907
908 [[pct_options]]
909 Options
910 ~~~~~~~
911
912 include::pct.conf.5-opts.adoc[]
913
914
915 Locks
916 -----
917
918 Container migrations, snapshots and backups (`vzdump`) set a lock to
919 prevent incompatible concurrent actions on the affected container. Sometimes
920 you need to remove such a lock manually (e.g., after a power failure).
921
922 ----
923 # pct unlock <CTID>
924 ----
925
926 CAUTION: Only do this if you are sure the action which set the lock is
927 no longer running.
928
929
930 ifdef::manvolnum[]
931
932 Files
933 ------
934
935 `/etc/pve/lxc/<CTID>.conf`::
936
937 Configuration file for the container '<CTID>'.
938
939
940 include::pve-copyright.adoc[]
941 endif::manvolnum[]
942
943
944
945
946
947
948