ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE


SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
include::attributes.txt[]
endif::manvolnum[]


Containers are a lightweight alternative to fully virtualized
VMs. Instead of emulating a complete Operating System (OS), containers
simply use the OS of the host they run on. This implies that all
containers use the same kernel, and that they can access resources
from the host directly.

This is great because containers do not waste CPU power or memory on
kernel emulation. Container run-time costs are close to zero and
usually negligible. But there are also some drawbacks that you need to
consider:

* You can only run Linux-based distributions inside containers, i.e. it is not
possible to run FreeBSD or MS Windows inside.

* For security reasons, access to host resources needs to be
restricted. This is done with AppArmor, SecComp filters and other
kernel features. Be prepared that some syscalls are not allowed
inside containers.

{pve} uses https://linuxcontainers.org/[LXC] as its underlying container
technology. We consider LXC a low-level library that provides
countless options. Using those tools directly would be too
complicated. Instead, we provide a small wrapper called `pct`, the
"Proxmox Container Toolkit".

The toolkit is tightly coupled with {pve}. That means that it is aware
of the cluster setup, and it can use the same network and storage
resources as fully virtualized VMs. You can even use the {pve}
firewall, or manage containers using the HA framework.

Our primary goal is to offer an environment comparable to what one would
get from a VM, but without the additional overhead. We call this "System
Containers".

NOTE: If you want to run micro-containers (with docker, rkt, ...), it
is best to run them inside a VM.


Security Considerations
-----------------------

Containers use the same kernel as the host, so there is a big attack
surface for malicious users. You should consider this fact if you
provide containers to untrusted users. In general, fully
virtualized VMs provide better isolation.

The good news is that LXC uses many kernel security features like
AppArmor, CGroups and PID and user namespaces, which makes container
usage quite secure. We distinguish two types of containers:

Privileged containers
~~~~~~~~~~~~~~~~~~~~~

Security is done by dropping capabilities, using mandatory access
control (AppArmor), SecComp filters and namespaces. The LXC team
considers this kind of container unsafe, and they will not consider
new container escape exploits to be security issues worthy of a CVE
and quick fix. So you should use this kind of container only inside a
trusted environment, or when no untrusted task is running as root in
the container.

Unprivileged containers
~~~~~~~~~~~~~~~~~~~~~~~

This kind of container uses a new kernel feature called user
namespaces. The root UID 0 inside the container is mapped to an
unprivileged user outside the container. This means that most security
issues (container escape, resource abuse, ...) in those containers
will affect a random unprivileged user, and so would be a generic
kernel security bug rather than an LXC issue. The LXC team thinks
unprivileged containers are safe by design.


Configuration
-------------

The '/etc/pve/lxc/<CTID>.conf' file stores container configuration,
where '<CTID>' is the numeric ID of the given container. Like all
other files stored inside '/etc/pve/', they get automatically
replicated to all other cluster nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor ('vi', 'nano', ...). This is sometimes
useful for small corrections, but keep in mind that you need to
restart the container to apply such changes.

For that reason, it is usually better to use the 'pct' command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running containers. This feature is called "hot plug", and there is no
need to restart the container in that case.

File Format
~~~~~~~~~~~

Container configuration files use a simple colon-separated key/value
format. Each line has the following format:

 # this is a comment
 OPTION: value

Blank lines in those files are ignored, and lines starting with a '#'
character are treated as comments and are also ignored.
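
To make these parsing rules concrete, here is a small shell sketch that
extracts the key/value pairs from such a file while skipping blank lines
and comments. It only illustrates the format described above; it is not
pct's actual parser, and the sample file path is made up.

```shell
# Write a small sample file in the documented format (path is arbitrary):
cat > /tmp/ct-example.conf <<'EOF'
# this is a comment
hostname: www

memory: 512
EOF

# Skip blank lines and '#' comment lines, then split the remaining
# lines on the colon separator (illustration only):
awk -F': *' '/^[[:space:]]*$/ {next} /^#/ {next} {print $1 "=" $2}' /tmp/ct-example.conf
```

Running this prints `hostname=www` followed by `memory=512`.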

It is possible to add low-level, LXC-style configuration directly, for
example:

 lxc.init_cmd: /sbin/my_own_init

or

 lxc.init_cmd = /sbin/my_own_init

Those settings are directly passed to the LXC low-level tools.

Snapshots
~~~~~~~~~

When you create a snapshot, 'pct' stores the configuration at snapshot
time in a separate snapshot section within the same configuration
file. For example, after creating a snapshot called 'testsnapshot',
your configuration file will look like this:

.Container Configuration with Snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot-related properties like 'parent' and
'snaptime'. The 'parent' property stores the parent/child
relationship between snapshots. 'snaptime' is the snapshot creation
time stamp (unix epoch).
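
Because 'snaptime' is a plain unix epoch value, standard tools can
convert it. For example, with GNU `date` and the timestamp from the
example above:

```shell
# Convert the example 'snaptime' value into a human-readable date:
date -u -d @1457170803 '+%Y-%m-%d %H:%M:%S UTC'
```

This prints `2016-03-05 09:40:03 UTC`.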

Guest Operating System Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We normally try to detect the operating system type inside the
container, and then modify some files inside the container to make
them work as expected. Here is a short list of things we do at
container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

Changes made by {PVE} are enclosed by comment markers:

----
# --- BEGIN PVE ---
<data>
# --- END PVE ---
----

Those markers will be inserted at a reasonable location in the
file. If such a section already exists, it will be updated in place
and will not be moved.
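
The update-in-place behaviour can be sketched with a few lines of
shell. The following replaces only the text between the markers and
leaves the rest of the file untouched; it merely illustrates the
behaviour described above (the file path and host entries are invented,
and this is not pct's own implementation):

```shell
# Sample file containing a managed section:
cat > /tmp/hosts.example <<'EOF'
127.0.0.1 localhost
# --- BEGIN PVE ---
10.0.0.5 old-name
# --- END PVE ---
EOF

# Replace the data between the markers, keep everything else:
awk -v new="10.0.0.7 www" '
  /^# --- BEGIN PVE ---/ {print; print new; skip=1; next}
  /^# --- END PVE ---/   {skip=0}
  !skip                  {print}
' /tmp/hosts.example
```

The output keeps `127.0.0.1 localhost` and both markers, but the old
entry between them is replaced by `10.0.0.7 www`.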

Modification of a file can be prevented by adding a `.pve-ignore.`
file for it. For instance, if the file `/etc/.pve-ignore.hosts`
exists then the `/etc/hosts` file will not be touched. This can be a
simple empty file created via:

 # touch /etc/.pve-ignore.hosts

Most modifications are OS dependent, so they differ between different
distributions and versions. You can completely disable modifications
by manually setting the 'ostype' to 'unmanaged'.

OS type detection is done by testing for certain files inside the
container:

Ubuntu:: inspect /etc/lsb-release ('DISTRIB_ID=Ubuntu')

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

Gentoo:: test /etc/gentoo-release

NOTE: Container start fails if the configured 'ostype' differs from the auto
detected type.
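
The order of these tests matters: Ubuntu systems also ship
'/etc/debian_version', so the more specific 'DISTRIB_ID=Ubuntu' check
has to come first. A rough shell sketch of the sequence above
(illustrative only; the function name and the 'unknown' fallback are
assumptions, not pct's actual code):

```shell
# Sketch of the documented detection order, run against a container
# root file system path (not pct's real implementation):
detect_ostype() {
    rootfs="$1"
    if grep -qs '^DISTRIB_ID=Ubuntu' "$rootfs/etc/lsb-release"; then
        echo ubuntu
    elif [ -e "$rootfs/etc/debian_version" ]; then
        echo debian
    elif [ -e "$rootfs/etc/fedora-release" ]; then
        echo fedora
    elif [ -e "$rootfs/etc/redhat-release" ]; then
        echo centos
    elif [ -e "$rootfs/etc/arch-release" ]; then
        echo archlinux
    elif [ -e "$rootfs/etc/alpine-release" ]; then
        echo alpine
    elif [ -e "$rootfs/etc/gentoo-release" ]; then
        echo gentoo
    else
        echo unknown
    fi
}

# Demo against a fake container root file system:
rm -rf /tmp/ctroot && mkdir -p /tmp/ctroot/etc && touch /tmp/ctroot/etc/debian_version
detect_ostype /tmp/ctroot   # -> debian
```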

Options
~~~~~~~

include::pct.conf.5-opts.adoc[]


Container Images
----------------

Container images, sometimes also referred to as "templates" or
"appliances", are 'tar' archives which contain everything needed to run a
container. You can think of them as tidy container backups. Like most
modern container toolkits, 'pct' uses those images when you create a
new container, for example:

 pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz

Proxmox itself ships a set of basic templates for the most common
operating systems, and you can download them using the 'pveam' (short
for {pve} Appliance Manager) command line utility. You can also
download https://www.turnkeylinux.org/[TurnKey Linux] containers using
that tool (or the graphical user interface).

Our image repositories contain a list of available images, and there
is a cron job run each day to download that list. You can trigger that
update manually with:

 pveam update

After that you can view the list of available images using:

 pveam available

You can restrict this large list by specifying the 'section' you are
interested in, for example basic 'system' images:

.List available system images
----
# pveam available --section system
system  archlinux-base_2015-24-29-1_x86_64.tar.gz
system  centos-7-default_20160205_amd64.tar.xz
system  debian-6.0-standard_6.0-7_amd64.tar.gz
system  debian-7.0-standard_7.0-3_amd64.tar.gz
system  debian-8.0-standard_8.0-1_amd64.tar.gz
system  ubuntu-12.04-standard_12.04-1_amd64.tar.gz
system  ubuntu-14.04-standard_14.04-1_amd64.tar.gz
system  ubuntu-15.04-standard_15.04-1_amd64.tar.gz
system  ubuntu-15.10-standard_15.10-1_amd64.tar.gz
----

Before you can use such a template, you need to download it to one
of your storages. You can simply use the 'local' storage for that
purpose. For clustered installations, it is preferred to use a shared
storage so that all nodes can access those images.

 pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz

You are now ready to create containers using that image, and you can
list all downloaded images on storage 'local' with:

----
# pveam list local
local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz  190.20MB
----

The above command shows you the full {pve} volume identifiers. They include
the storage name, and most other {pve} commands can use them. For
example, you can delete that image later with:

 pveam remove local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz


Container Storage
-----------------

Traditional containers use a very simple storage model, only allowing
a single mount point, the root file system. This was further
restricted to specific file system types like 'ext4' and 'nfs'.
Additional mounts were often done by user-provided scripts. This turned
out to be complex and error prone, so we try to avoid that now.

Our new LXC-based container model is more flexible regarding
storage. First, you can have more than a single mount point. This
allows you to choose a suitable storage for each application. For
example, you can use a relatively slow (and thus cheap) storage for
the container root file system. Then you can use a second mount point
to mount a very fast, distributed storage for your database
application.

The second big improvement is that you can use any storage type
supported by the {pve} storage library. That means that you can store
your containers on local 'lvmthin' or 'zfs', shared 'iSCSI' storage,
or even on distributed storage systems like 'ceph'. It also enables us
to use advanced storage features like snapshots and clones. 'vzdump'
can also use the snapshot feature to provide consistent container
backups.

Last but not least, you can also mount local devices directly, or
mount local directories using bind mounts. That way you can access
local storage inside containers with zero overhead. Such bind mounts
also provide an easy way to share data between different containers.


Mount Points
~~~~~~~~~~~~

The root mount point is configured with the `rootfs` property, and you can
configure up to 10 additional mount points. The corresponding options
are called `mp0` to `mp9`, and they can contain the following settings:

include::pct-mountpoint-opts.adoc[]

Currently there are three basic types of mount points: storage backed
mount points, bind mounts and device mounts.

.Typical Container `rootfs` configuration
----
rootfs: thin1:base-100-disk-1,size=8G
----


Storage backed mount points
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Storage backed mount points are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4-formatted file
system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
image a directory is created.
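
For example, an image based mount point in addition to the `rootfs` could look
like this in the container configuration (the volume name is hypothetical,
following the naming scheme of the `rootfs` example above):

----
mp0: thin1:vm-100-disk-2,mp=/opt/data,size=8G
----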


Bind mount points
^^^^^^^^^^^^^^^^^

Bind mounts allow you to access arbitrary directories from your Proxmox VE host
inside a container. Some potential use cases are:

- Accessing your home directory in the guest
- Accessing a USB device directory in the guest
- Accessing an NFS mount from the host in the guest

Bind mounts are not managed by the storage subsystem, so you
cannot make snapshots or deal with quotas from inside the container. With
unprivileged containers you might run into permission problems caused by the
user mapping, and you cannot use ACLs.

NOTE: The contents of bind mount points are not backed up when using 'vzdump'.

WARNING: For security reasons, bind mounts should only be established
using source directories especially reserved for this purpose, e.g., a
directory hierarchy under `/mnt/bindmounts`. Never bind mount system
directories like `/`, `/var` or `/etc` into a container, as this poses a
great security risk.

NOTE: The bind mount source path must not contain any symlinks.

For example, to make the directory `/mnt/bindmounts/shared` accessible in the
container with ID `100` under the path `/shared`, use a configuration line like
'mp0: /mnt/bindmounts/shared,mp=/shared' in '/etc/pve/lxc/100.conf'.
Alternatively, use 'pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared' to
achieve the same result.


Device mount points
^^^^^^^^^^^^^^^^^^^

Device mount points allow you to mount block devices of the host directly into
the container. Similar to bind mounts, device mounts are not managed by {PVE}'s
storage subsystem, but the `quota` and `acl` options will be honored.

NOTE: Device mount points should only be used under special circumstances. In
most cases a storage backed mount point offers the same performance and a lot
more features.

NOTE: The contents of device mount points are not backed up when using 'vzdump'.
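
For example, a host block device could be made available at
'/mnt/device-data' inside the container with a configuration line like the
following (the device path is only an example; adapt it to your hardware):

----
mp0: /dev/sdb1,mp=/mnt/device-data
----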


FUSE mounts
~~~~~~~~~~~

WARNING: Because of existing issues in the Linux kernel's freezer
subsystem, the use of FUSE mounts inside a container is strongly
advised against, as containers need to be frozen for suspend or
snapshot mode backups.

If FUSE mounts cannot be replaced by other mounting mechanisms or storage
technologies, it is possible to establish the FUSE mount on the Proxmox host
and use a bind mount point to make it accessible inside the container.


Using quotas inside containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk
space that each user can use. This only works on ext4 image based
storage types and currently does not work with unprivileged
containers.

Activating the `quota` option causes the following mount options to be
used for a mount point:
`usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like you would on any other system. You
can initialize the `/aquota.user` and `/aquota.group` files by running

----
quotacheck -cmug /
quotaon /
----

and edit the quotas via the `edquota` command. Refer to the documentation
of the distribution running inside the container for details.

NOTE: You need to run the above commands for every mount point by passing
the mount point's path instead of just `/`.
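
In the container configuration, the `quota` flag is simply appended to a mount
point definition, for example (reusing the volume from the earlier example
configuration):

----
rootfs: local:107/vm-107-disk-1.raw,size=7G,quota=1
----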


Using ACLs inside containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard POSIX Access Control Lists are also available inside containers.
ACLs allow you to set more detailed file ownership than the traditional user/
group/others model.


Container Network
-----------------

You can configure up to 10 network interfaces for a single
container. The corresponding options are called 'net0' to 'net9', and
they can contain the following settings:

include::pct-network-opts.adoc[]


Backup and Restore
------------------

Container Backup
~~~~~~~~~~~~~~~~

It is possible to use the 'vzdump' tool for container backup. Please
refer to the 'vzdump' manual page for details.

Restoring Container Backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Restoring container backups made with 'vzdump' is possible using the
'pct restore' command. By default, 'pct restore' will attempt to restore as much
of the backed up container configuration as possible. It is possible to override
the backed up configuration by manually setting container options on the command
line (see the 'pct' manual page for details).

NOTE: 'pvesm extractconfig' can be used to view the backed up configuration
contained in a vzdump archive.

There are two basic restore modes, only differing by their handling of mount
points:


"Simple" restore mode
^^^^^^^^^^^^^^^^^^^^^

If neither the `rootfs` parameter nor any of the optional `mpX` parameters
are explicitly set, the mount point configuration from the backed up
configuration file is restored using the following steps:

. Extract mount points and their options from the backup
. Create volumes for storage backed mount points (on the storage provided with
the `storage` parameter, or the default local storage if unset)
. Extract files from the backup archive
. Add bind and device mount points to the restored configuration (limited to
the root user)

NOTE: Since bind and device mount points are never backed up, no files are
restored in the last step, but only the configuration options. The assumption
is that such mount points are either backed up with another mechanism (e.g.,
NFS space that is bind mounted into many containers), or not intended to be
backed up at all.

This simple mode is also used by the container restore operations in the web
interface.


"Advanced" restore mode
^^^^^^^^^^^^^^^^^^^^^^^

By setting the `rootfs` parameter (and optionally, any combination of `mpX`
parameters), the 'pct restore' command is automatically switched into an
advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
configuration options contained in the backup archive, and instead only
uses the options explicitly provided as parameters.

This mode allows flexible configuration of mount point settings at restore time,
for example:

* Set target storages, volume sizes and other options for each mount point
individually
* Redistribute backed up files according to the new mount point scheme
* Restore to device and/or bind mount points (limited to the root user)


Managing Containers with 'pct'
------------------------------

'pct' is the tool to manage Linux Containers on {pve}. You can create
and destroy containers, and control execution (start, stop, migrate,
...). You can use pct to set parameters in the associated config file,
like network configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have
already downloaded the template via the web GUI)

 pct create 100 /var/lib/vz/template/cache/debian-8.0-standard_8.0-1_amd64.tar.gz

Start container 100

 pct start 100

Start a login session via getty

 pct console 100

Enter the LXC namespace and run a shell as root user

 pct enter 100

Display the configuration

 pct config 100

Add a network interface called eth0, bridged to the host bridge vmbr0,
set the address and gateway, while it's running

 pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1

Reduce the memory of the container to 512MB

 pct set 100 -memory 512


Files
-----

'/etc/pve/lxc/<CTID>.conf'::

Configuration file for the container '<CTID>'.


Container Advantages
--------------------

- Simple, and fully integrated into {pve}. Setup looks similar to a normal
VM setup.

* Storage (ZFS, LVM, NFS, Ceph, ...)

* Network

* Authentication

* Cluster

- Fast: minimal overhead, as fast as bare metal

- High density (perfect for idle workloads)

- REST API

- Direct hardware access


Technology Overview
-------------------

- Integrated into the {pve} graphical user interface (GUI)

- LXC (https://linuxcontainers.org/)

- cgmanager for cgroup management

- lxcfs to provide a containerized /proc file system

- AppArmor

- CRIU: for live migration (planned)

- We use the latest available kernels (4.4.X)

- Image based deployment (templates)

- Container setup from host (network, DNS, storage, ...)


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]