[[chapter_pct]]
ifdef::manvolnum[]
pct(1)
======
:pve-toplevel:

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE


SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Linux Container
endif::wiki[]

Containers are a lightweight alternative to fully virtualized
VMs. Instead of emulating a complete Operating System (OS), containers
simply use the OS of the host they run on. This implies that all
containers use the same kernel, and that they can access resources
from the host directly.

This is great because containers do not waste CPU power or memory on
kernel emulation. Container run-time costs are close to zero and
usually negligible. But there are also some drawbacks you need to
consider:

* You can only run Linux-based operating systems inside containers,
i.e. it is not possible to run FreeBSD or MS Windows inside them.

* For security reasons, access to host resources needs to be
restricted. This is done with AppArmor, SecComp filters and other
kernel features. Be prepared that some syscalls are not allowed
inside containers.

{pve} uses https://linuxcontainers.org/[LXC] as its underlying
container technology. We consider LXC a low-level library, which
provides countless options. Using those tools directly would be too
cumbersome. Instead, we provide a small wrapper called `pct`, the
"Proxmox Container Toolkit".

The toolkit is tightly coupled with {pve}. That means that it is aware
of the cluster setup, and it can use the same network and storage
resources as fully virtualized VMs. You can even use the {pve}
firewall, or manage containers using the HA framework.

Our primary goal is to offer an environment as one would get from a
VM, but without the additional overhead. We call this "System
Containers".

NOTE: If you want to run micro-containers (with docker, rkt, ...), it
is best to run them inside a VM.

Technology Overview
-------------------

* LXC (https://linuxcontainers.org/)

* Integrated into {pve} graphical user interface (GUI)

* Easy to use command line tool `pct`

* Access via {pve} REST API

* lxcfs to provide containerized /proc file system

* AppArmor/Seccomp to improve security

* CRIU: for live migration (planned)

* Use latest available kernels (4.4.X)

* Image based deployment (templates)

* Use {pve} storage library

* Container setup from host (network, DNS, storage, ...)

Security Considerations
-----------------------

Containers use the same kernel as the host, so there is a large attack
surface for malicious users. You should consider this fact if you
provide containers to untrusted users. In general, fully
virtualized VMs provide better isolation.

The good news is that LXC uses many kernel security features like
AppArmor, CGroups and PID and user namespaces, which makes container
usage quite secure.

Guest Operating System Configuration
------------------------------------

We normally try to detect the operating system type inside the
container, and then modify some files inside the container to make
them work as expected. Here is a short list of things we do at
container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

Changes made by {PVE} are enclosed by comment markers:

----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the
file. If such a section already exists, it will be updated in place
and will not be moved.

Modification of a file can be prevented by adding a `.pve-ignore.`
file for it. For instance, if the file `/etc/.pve-ignore.hosts`
exists then the `/etc/hosts` file will not be touched. This can be a
simple empty file created via:

 # touch /etc/.pve-ignore.hosts

Most modifications are OS dependent, so they differ between
distributions and versions. You can completely disable modifications
by manually setting the `ostype` to `unmanaged`.
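
For example, to disable all of these modifications for an existing
container, you could set the option via `pct` (a minimal sketch; `100`
is a placeholder container ID):

 # pct set 100 -ostype unmanaged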

OS type detection is done by testing for certain files inside the
container:

Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured `ostype` differs from the
auto-detected type.

[[pct_container_images]]
Container Images
----------------

Container images, sometimes also referred to as ``templates'' or
``appliances'', are `tar` archives which contain everything to run a
container. You can think of them as tidy container backups. Like most
modern container toolkits, `pct` uses those images when you create a
new container, for example:

 pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz

{pve} itself ships a set of basic templates for the most common
operating systems, and you can download them using the `pveam` (short
for {pve} Appliance Manager) command line utility. You can also
download https://www.turnkeylinux.org/[TurnKey Linux] containers using
that tool (or the graphical user interface).

Our image repositories contain a list of available images, and there
is a cron job run each day to download that list. You can trigger that
update manually with:

 pveam update

After that you can view the list of available images using:

 pveam available

You can restrict this large list by specifying the `section` you are
interested in, for example basic `system` images:

.List available system images
----
# pveam available --section system
system          archlinux-base_2015-24-29-1_x86_64.tar.gz
system          centos-7-default_20160205_amd64.tar.xz
system          debian-6.0-standard_6.0-7_amd64.tar.gz
system          debian-7.0-standard_7.0-3_amd64.tar.gz
system          debian-8.0-standard_8.0-1_amd64.tar.gz
system          ubuntu-12.04-standard_12.04-1_amd64.tar.gz
system          ubuntu-14.04-standard_14.04-1_amd64.tar.gz
system          ubuntu-15.04-standard_15.04-1_amd64.tar.gz
system          ubuntu-15.10-standard_15.10-1_amd64.tar.gz
----

Before you can use such a template, you need to download it into one
of your storages. You can simply use storage `local` for that
purpose. For clustered installations, it is preferable to use a shared
storage so that all nodes can access those images.

 pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz

You are now ready to create containers using that image, and you can
list all downloaded images on storage `local` with:

----
# pveam list local
local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz  190.20MB
----

The above command shows you the full {pve} volume identifiers. They include
the storage name, and most other {pve} commands can use them. For
example, you can delete that image later with:

 pveam remove local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz

[[pct_container_storage]]
Container Storage
-----------------

Traditional containers use a very simple storage model, only allowing
a single mount point, the root file system. This was further
restricted to specific file system types like `ext4` and `nfs`.
Additional mounts were often done by user-provided scripts. This turned
out to be complex and error-prone, so we try to avoid that now.

Our new LXC-based container model is more flexible regarding
storage. First, you can have more than a single mount point. This
allows you to choose a suitable storage for each application. For
example, you can use a relatively slow (and thus cheap) storage for
the container root file system. Then you can use a second mount point
to mount a very fast, distributed storage for your database
application. See section <<pct_mount_points,Mount Points>> for further
details.

The second big improvement is that you can use any storage type
supported by the {pve} storage library. That means that you can store
your containers on local `lvmthin` or `zfs`, shared `iSCSI` storage,
or even on distributed storage systems like `ceph`. It also enables us
to use advanced storage features like snapshots and clones. `vzdump`
can also use the snapshot feature to provide consistent container
backups.

Last but not least, you can also mount local devices directly, or
mount local directories using bind mounts. That way you can access
local storage inside containers with zero overhead. Such bind mounts
also provide an easy way to share data between different containers.

FUSE Mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer
subsystem, the usage of FUSE mounts inside a container is strongly
advised against, as containers need to be frozen for suspend or
snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.

Using Quotas Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk
space that each user can use. This only works on ext4 image based
storage types and currently does not work with unprivileged
containers.

Activating the `quota` option causes the following mount options to be
used for a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like you would on any other system. You
can initialize the `/aquota.user` and `/aquota.group` files by running

----
quotacheck -cmug /
quotaon /
----

and edit the quotas via the `edquota` command. Refer to the documentation
of the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing
the mount point's path instead of just `/`.

Using ACLs Inside Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard POSIX **A**ccess **C**ontrol **L**ists are also available inside
containers. ACLs allow you to set more detailed file ownership than the
traditional user/group/others model.
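
For example, to grant an additional user write access to a file from
inside the container, you could use the standard ACL tools (a sketch;
the user name and path are placeholders, and the `acl` utilities must
be installed in the guest):

 # setfacl -m u:www-data:rw /srv/example/data.txt
 # getfacl /srv/example/data.txt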

Backup of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, additional mount points besides the Root Disk mount point are
not included in backups. You can reverse this default behavior by setting
the *Backup* option on a mount point.
// see PVE::VZDump::LXC::prepare()
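
In the container configuration this corresponds to the `backup` flag of
the mount point, for example (a sketch; the storage and volume names are
placeholders):

 mp0: thin1:vm-100-disk-2,mp=/data,backup=1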

Replication of Container Mount Points
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, additional mount points are replicated when the Root Disk
is replicated. If you want the {pve} storage replication mechanism to skip a
mount point when starting a replication job, you can set the
*Skip replication* option on that mount point. +
As of {pve} 5.0, replication requires a storage of type `zfspool`, so adding a
mount point to a different type of storage when the container has replication
configured requires the *Skip replication* option to be set for that mount
point.
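
In the configuration file this maps to the `replicate` flag, for example
(a sketch; the storage and volume names are placeholders):

 mp0: local-zfs:subvol-100-disk-1,mp=/data,replicate=0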

[[pct_settings]]
Container Settings
------------------

[[pct_general]]
General Settings
~~~~~~~~~~~~~~~~

[thumbnail="gui-create-ct-general.png"]

General settings of a container include

* the *Node*: the physical server on which the container will run
* the *CT ID*: a unique number in this {pve} installation used to identify your container
* *Hostname*: the hostname of the container
* *Resource Pool*: a logical group of containers and VMs
* *Password*: the root password of the container
* *SSH Public Key*: a public key for connecting to the root account over SSH
* *Unprivileged container*: this option allows you to choose at creation time
whether to create a privileged or unprivileged container.

Privileged Containers
^^^^^^^^^^^^^^^^^^^^^

Security is done by dropping capabilities, using mandatory access
control (AppArmor), SecComp filters and namespaces. The LXC team
considers this kind of container as unsafe, and they will not consider
new container escape exploits to be security issues worthy of a CVE
and quick fix. So you should use this kind of container only inside a
trusted environment, or when no untrusted task is running as root in
the container.

Unprivileged Containers
^^^^^^^^^^^^^^^^^^^^^^^

This kind of container uses a new kernel feature called user
namespaces. The root UID 0 inside the container is mapped to an
unprivileged user outside the container. This means that most security
issues (container escape, resource abuse, ...) in those containers
will affect a random unprivileged user, and so would be a generic
kernel security bug rather than an LXC issue. The LXC team thinks
unprivileged containers are safe by design.

NOTE: If the container uses systemd as an init system, please be
aware the systemd version running inside the container should be equal
to or greater than 220.

[[pct_cpu]]
CPU
~~~

[thumbnail="gui-create-ct-cpu.png"]

You can restrict the number of visible CPUs inside the container using
the `cores` option. This is implemented using the Linux 'cpuset'
cgroup (**c**ontrol *group*). A special task inside `pvestatd` tries
to distribute running containers among available CPUs. You can view
the assigned CPUs using the following command:

----
# pct cpusets
 ---------------------
 102:              6 7
 105:      2 3 4 5
 108:  0 1
 ---------------------
----

Containers use the host kernel directly, so all tasks inside a
container are handled by the host CPU scheduler. {pve} uses the Linux
'CFS' (**C**ompletely **F**air **S**cheduler) scheduler by default,
which has additional bandwidth control options.

[horizontal]

`cpulimit`: :: You can use this option to further limit assigned CPU
time. Please note that this is a floating point number, so it is
perfectly valid to assign two cores to a container, but restrict
overall CPU consumption to half a core.
+
----
cores: 2
cpulimit: 0.5
----

`cpuunits`: :: This is a relative weight passed to the kernel
scheduler. The larger the number is, the more CPU time this container
gets. The number is relative to the weights of all the other running
containers. The default is 1024. You can use this setting to
prioritize some containers.
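
For example, to give container `100` twice the default weight
(a minimal sketch; `100` is a placeholder container ID):

 # pct set 100 -cpuunits 2048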

[[pct_memory]]
Memory
~~~~~~

[thumbnail="gui-create-ct-memory.png"]

Container memory is controlled using the cgroup memory controller.

[horizontal]

`memory`: :: Limit overall memory usage. This corresponds
to the `memory.limit_in_bytes` cgroup setting.

`swap`: :: Allows the container to use additional swap memory from the
host swap space. This corresponds to the `memory.memsw.limit_in_bytes`
cgroup setting, which is set to the sum of both values (`memory +
swap`).
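
For example, to limit container `100` to 1024 MB of RAM plus 512 MB of
swap (a minimal sketch; `100` is a placeholder container ID):

 # pct set 100 -memory 1024 -swap 512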

[[pct_mount_points]]
Mount Points
~~~~~~~~~~~~

[thumbnail="gui-create-ct-root-disk.png"]

The root mount point is configured with the `rootfs` property, and you can
configure up to 10 additional mount points. The corresponding options
are called `mp0` to `mp9`, and they can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are basically three types of mount points: storage backed
mount points, bind mounts, and device mounts.

.Typical container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----

Storage Backed Mount Points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
image a directory is created.
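
For example, to allocate a new 10 GB volume as an image based mount
point and mount it at `/srv/data` (a sketch; `thin1` is an assumed
storage name and `100` a placeholder container ID):

 # pct set 100 -mp0 thin1:10,mp=/srv/data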

Bind Mount Points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are not managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping, and cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using `vzdump`.

WARNING: For security reasons, bind mounts should only be established
using source directories especially reserved for this purpose, e.g., a
directory hierarchy under `/mnt/bindmounts`. Never bind mount system
directories like `/`, `/var` or `/etc` into a container - this poses a
great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
achieve the same result.

Device Mount Points
^^^^^^^^^^^^^^^^^^^

Device mount points allow you to mount block devices of the host directly into
the container. Similar to bind mounts, device mounts are not managed by
{PVE}'s storage subsystem, but the `quota` and `acl` options will be honored.

NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using `vzdump`.
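
For example, to mount the host block device `/dev/sdb1` at `/mnt/usb`
inside container `100` (a sketch; the device name and container ID are
placeholders):

 # pct set 100 -mp0 /dev/sdb1,mp=/mnt/usb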

[[pct_container_network]]
Network
~~~~~~~

[thumbnail="gui-create-ct-network.png"]

You can configure up to 10 network interfaces for a single
container. The corresponding options are called `net0` to `net9`, and
they can contain the following settings:

include::pct-network-opts.adoc[]
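
A typical interface definition in a container configuration might look
like this (a sketch; the bridge name and address settings are
placeholders):

 net0: name=eth0,bridge=vmbr0,firewall=1,ip=dhcp,type=veth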

[[pct_startup_and_shutdown]]
Automatic Start and Shutdown of Containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After creating your containers, you probably want them to start automatically
when the host system boots. For this you need to select the option 'Start at
boot' from the 'Options' tab of your container in the web interface, or set it
with the following command:

 pct set <ctid> -onboot 1

.Start and Shutdown Order
// use the screenshot from qemu - its the same
[thumbnail="gui-qemu-edit-start-order.png"]

If you want to fine tune the boot order of your containers, you can use the
following parameters:

* *Start/Shutdown order*: Defines the start order priority. E.g. set it to 1 if
you want the CT to be the first to be started. (We use the reverse startup
order for shutdown, so a container with a start order of 1 would be the last to
be shut down.)
* *Startup delay*: Defines the interval between this container's start and the
start of subsequent containers. E.g. set it to 240 if you want to wait 240
seconds before starting other containers.
* *Shutdown timeout*: Defines the duration in seconds {pve} should wait
for the container to be offline after issuing a shutdown command.
By default this value is set to 60, which means that {pve} will issue a
shutdown request, wait 60s for the machine to be offline, and if after 60s
the machine is still online it will notify that the shutdown action failed.

Please note that containers without a Start/Shutdown order parameter will
always start after those where the parameter is set, and this parameter only
makes sense between the machines running locally on a host, and not
cluster-wide.
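
These settings map to the `startup` property. For example, to make
container `100` start first, wait 30 seconds before the next guest
starts, and allow 60 seconds for its shutdown (a minimal sketch with
placeholder values):

 pct set 100 -startup order=1,up=30,down=60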

Backup and Restore
------------------


Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the `vzdump` tool for container backup. Please
refer to the `vzdump` manual page for details.
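
For example, a snapshot mode backup of container `100` to the storage
`local` could look like this (a sketch; see the `vzdump` manual page
for all options):

 vzdump 100 -storage local -mode snapshot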

Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with `vzdump` is possible using the
`pct restore` command. By default, `pct restore` will attempt to restore as much
of the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the command
line (see the `pct` manual page for details).

NOTE: `pvesm extractconfig` can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:

``Simple'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters
are explicitly set, the mount point configuration from the backed up
configuration file is restored using the following steps:

. Extract mount points and their options from backup
. Create volumes for storage backed mount points (on storage provided with the
`storage` parameter, or default local storage if unset)
. Extract files from backup archive
. Add bind and device mount points to restored configuration (limited to root user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.

``Advanced'' Restore Mode
^^^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the `pct restore` command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only
uses the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore
time, for example:

* Set target storages, volume sizes and other options for each mount point
individually
* Redistribute backed up files according to new mount point scheme
* Restore to device and/or bind mount points (limited to root user)
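
An advanced mode restore could look like this (a sketch; the archive
name, storages and sizes are placeholders):

----
# pct restore 200 local:backup/vzdump-lxc-100-example.tar.gz \
    -rootfs local-lvm:8 \
    -mp0 local-lvm:4,mp=/data
----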

Managing Containers with `pct`
------------------------------

`pct` is the tool to manage Linux Containers on {pve}. You can create
and destroy containers, and control execution (start, stop, migrate,
...). You can use `pct` to set parameters in the associated config
file, like network configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have
already downloaded the template via the web interface)

 pct create 100 /var/lib/vz/template/cache/debian-8.0-standard_8.0-1_amd64.tar.gz

Start container 100

 pct start 100

Start a login session via getty

 pct console 100

Enter the LXC namespace and run a shell as root user

 pct enter 100

Display the configuration

 pct config 100

Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
and set the address and gateway, while the container is running

 pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1

Reduce the memory of the container to 512 MB

 pct set 100 -memory 512

Obtaining Debugging Logs
~~~~~~~~~~~~~~~~~~~~~~~~

In case `pct start` is unable to start a specific container, it might be
helpful to collect debugging output by running `lxc-start` (replace `ID` with
the container's ID):

 lxc-start -n ID -F -l DEBUG -o /tmp/lxc-ID.log

This command will attempt to start the container in foreground mode. To stop
the container, run `pct shutdown ID` or `pct stop ID` in a second terminal.

The collected debug log is written to `/tmp/lxc-ID.log`.

NOTE: If you have changed the container's configuration since the last start
attempt with `pct start`, you need to run `pct start` at least once to also
update the configuration used by `lxc-start`.

[[pct_migration]]
Migration
---------

If you have a cluster, you can migrate your containers with

 pct migrate <vmid> <target>

This works as long as your container is offline. If it has local volumes or
mount points defined, the migration will copy the content over the network to
the target host, provided the same storage is defined there.

If you want to migrate online containers, the only way is to use
restart migration. This can be initiated with the `-restart` flag and the
optional `-timeout` parameter.

A restart migration will shut down the container and kill it after the
specified timeout (the default is 180 seconds). Then it will migrate the
container like an offline migration, and when finished, it starts the
container on the target node.
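
For example, a restart migration of container `100` to node `target2`
with a two minute timeout could look like this (a sketch; the node name
is a placeholder):

 pct migrate 100 target2 -restart -timeout 120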

[[pct_configuration]]
Configuration
-------------

The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
where `<CTID>` is the numeric ID of the given container. Like all
other files stored inside `/etc/pve/`, they get automatically
replicated to all other cluster nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster-wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor (`vi`, `nano`, ...). This is sometimes
useful for making small corrections, but keep in mind that you need to
restart the container to apply such changes.

For that reason, it is usually better to use the `pct` command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running containers. This feature is called "hot plug", and there is no
need to restart the container in that case.

File Format
~~~~~~~~~~~

Container configuration files use a simple colon separated key/value
format. Each line has the following format:

-----
# this is a comment
OPTION: value
-----

Blank lines in those files are ignored, and lines starting with a `#`
character are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for
example:

 lxc.init_cmd: /sbin/my_own_init

or

 lxc.init_cmd = /sbin/my_own_init

Those settings are directly passed to the LXC low-level tools.

[[pct_snapshots]]
Snapshots
~~~~~~~~~

When you create a snapshot, `pct` stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called ``testsnapshot'',
your configuration file will look like this:

.Container configuration with snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like `parent` and
`snaptime`. The `parent` property is used to store the parent/child
relationship between snapshots. `snaptime` is the snapshot creation
time stamp (Unix epoch).
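
Snapshots are managed with the corresponding `pct` subcommands, for
example (a minimal sketch; `100` is a placeholder container ID):

 # pct snapshot 100 testsnapshot
 # pct rollback 100 testsnapshot
 # pct delsnapshot 100 testsnapshot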

[[pct_options]]
Options
~~~~~~~

include::pct.conf.5-opts.adoc[]


Locks
-----

Container migrations, snapshots and backups (`vzdump`) set a lock to
prevent incompatible concurrent actions on the affected container. Sometimes
you need to remove such a lock manually (e.g., after a power failure).

 pct unlock <CTID>

CAUTION: Only do this if you are sure the action which set the lock is
no longer running.

ifdef::manvolnum[]

Files
-----

`/etc/pve/lxc/<CTID>.conf`::

Configuration file for the container '<CTID>'.


include::pve-copyright.adoc[]
endif::manvolnum[]