ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pct - Tool to manage Linux Containers (LXC) on Proxmox VE


SYNOPSIS
--------

include::pct.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Proxmox Container Toolkit
=========================
include::attributes.txt[]
endif::manvolnum[]


Containers are a lightweight alternative to fully virtualized
VMs. Instead of emulating a complete Operating System (OS), containers
simply use the OS of the host they run on. This implies that all
containers use the same kernel, and that they can access resources
from the host directly.

This is great because containers do not waste CPU power or memory on
kernel emulation. Container run-time costs are close to zero and
usually negligible. But there are also some drawbacks you need to
consider:

* You can only run Linux-based operating systems inside containers,
i.e. it is not possible to run FreeBSD or MS Windows inside.

* For security reasons, access to host resources needs to be
restricted. This is done with AppArmor, SecComp filters and other
kernel features. Be prepared that some syscalls are not allowed
inside containers.

{pve} uses https://linuxcontainers.org/[LXC] as its underlying container
technology. We consider LXC a low-level library, which provides
countless options. It would be too difficult to use those tools
directly. Instead, we provide a small wrapper called `pct`, the
"Proxmox Container Toolkit".

The toolkit is tightly coupled with {pve}. That means that it is aware
of the cluster setup, and it can use the same network and storage
resources as fully virtualized VMs. You can even use the {pve}
firewall, or manage containers using the HA framework.

Our primary goal is to offer an environment as one would get from a
VM, but without the additional overhead. We call this "System
Containers".

NOTE: If you want to run micro-containers (with docker, rkt, ...), it
is best to run them inside a VM.


Security Considerations
-----------------------

Containers use the same kernel as the host, so there is a big attack
surface for malicious users. You should consider this fact if you
provide containers to totally untrusted people. In general, fully
virtualized VMs provide better isolation.

The good news is that LXC uses many kernel security features like
AppArmor, CGroups and PID and user namespaces, which makes container
usage quite secure. We distinguish two types of containers:

Privileged containers
~~~~~~~~~~~~~~~~~~~~~

Security is done by dropping capabilities, using mandatory access
control (AppArmor), SecComp filters and namespaces. The LXC team
considers this kind of container unsafe, and they will not consider
new container escape exploits to be security issues worthy of a CVE
and quick fix. So you should use this kind of container only inside a
trusted environment, or when no untrusted task is running as root in
the container.

Unprivileged containers
~~~~~~~~~~~~~~~~~~~~~~~

This kind of container uses a new kernel feature called user
namespaces. The root uid 0 inside the container is mapped to an
unprivileged user outside the container. This means that most security
issues (container escape, resource abuse, ...) in those containers
will affect a random unprivileged user, and so would be a generic
kernel security bug rather than an LXC issue. The LXC team thinks
unprivileged containers are safe by design.
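
As a sketch of how this looks in practice, a container can be created
as an unprivileged one by enabling the corresponding option at creation
time (this assumes the 'unprivileged' option of 'pct create'; the CTID
and template name are placeholders):

 pct create 200 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz -unprivileged 1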


Configuration
-------------

The '/etc/pve/lxc/<CTID>.conf' file stores container configuration,
where '<CTID>' is the numeric ID of the given container. Like all
other files stored inside '/etc/pve/', it gets automatically
replicated to all other cluster nodes.

NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
unique cluster wide.

.Example Container Configuration
----
ostype: debian
arch: amd64
hostname: www
memory: 512
swap: 512
net0: bridge=vmbr0,hwaddr=66:64:66:64:64:36,ip=dhcp,name=eth0,type=veth
rootfs: local:107/vm-107-disk-1.raw,size=7G
----

Those configuration files are simple text files, and you can edit them
using a normal text editor ('vi', 'nano', ...). This is sometimes
useful to make small corrections, but keep in mind that you need to
restart the container to apply such changes.

For that reason, it is usually better to use the 'pct' command to
generate and modify those files, or do the whole thing using the GUI.
Our toolkit is smart enough to instantaneously apply most changes to
running containers. This feature is called "hot plug", and there is no
need to restart the container in that case.
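
For example, the memory limit of a running container (here with the
placeholder CTID 100) can be changed on the fly with:

 pct set 100 -memory 1024

If the change can be hot-plugged, as is typically the case for the
memory limit, it takes effect immediately, without a restart.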

File Format
~~~~~~~~~~~

Container configuration files use a simple colon separated key/value
format. Each line has the following format:

 # this is a comment
 OPTION: value

Blank lines in those files are ignored, and lines starting with a '#'
character are treated as comments and are also ignored.

It is possible to add low-level, LXC style configuration directly, for
example:

 lxc.init_cmd: /sbin/my_own_init

or

 lxc.init_cmd = /sbin/my_own_init

Those settings are directly passed to the LXC low-level tools.

Snapshots
~~~~~~~~~

When you create a snapshot, 'pct' stores the configuration at snapshot
time into a separate snapshot section within the same configuration
file. For example, after creating a snapshot called 'testsnapshot',
your configuration file will look like this:

.Container Configuration with Snapshot
----
memory: 512
swap: 512
parent: testsnapshot
...

[testsnapshot]
memory: 512
swap: 512
snaptime: 1457170803
...
----

There are a few snapshot related properties like 'parent' and
'snaptime'. The 'parent' property is used to store the parent/child
relationship between snapshots. 'snaptime' is the snapshot creation
time stamp (unix epoch).
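
Snapshots can be managed from the command line as well. The following
is a sketch, assuming the snapshot related subcommands of your 'pct'
version and a placeholder CTID of 100:

 pct snapshot 100 testsnapshot
 pct listsnapshot 100
 pct rollback 100 testsnapshot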

Guest Operating System Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

We normally try to detect the operating system type inside the
container, and then modify some files inside the container to make
them work as expected. Here is a short list of things we do at
container startup:

set /etc/hostname:: to set the container name

modify /etc/hosts:: to allow lookup of the local hostname

network setup:: pass the complete network setup to the container

configure DNS:: pass information about DNS servers

adapt the init system:: for example, fix the number of spawned getty processes

set the root password:: when creating a new container

rewrite ssh_host_keys:: so that each container has unique keys

randomize crontab:: so that cron does not start at the same time on all containers

The above tasks depend on the OS type, so the implementation differs
for each OS type. You can also disable any modifications by manually
setting the 'ostype' to 'unmanaged'.
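
For example, adding the following line to the container configuration
file disables all such modifications at startup:

 ostype: unmanaged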

OS type detection is done by testing for certain files inside the
container:

Ubuntu:: inspect /etc/lsb-release ('DISTRIB_ID=Ubuntu')

Debian:: test /etc/debian_version

Fedora:: test /etc/fedora-release

RedHat or CentOS:: test /etc/redhat-release

ArchLinux:: test /etc/arch-release

Alpine:: test /etc/alpine-release

NOTE: Container start fails if the configured 'ostype' differs from the
auto-detected type.


Container Images
----------------

Container Images, sometimes also referred to as "templates" or
"appliances", are 'tar' archives which contain everything to run a
container. You can think of them as tidy container backups. Like most
modern container toolkits, 'pct' uses those images when you create a
new container, for example:

 pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz

Proxmox itself ships a set of basic templates for the most common
operating systems, and you can download them using the 'pveam' (short
for {pve} Appliance Manager) command line utility. You can also
download https://www.turnkeylinux.org/[TurnKey Linux] containers using
that tool (or the graphical user interface).

Our image repositories contain a list of available images, and there
is a cron job run each day to download that list. You can trigger that
update manually with:

 pveam update

After that you can view the list of available images using:

 pveam available

You can restrict this large list by specifying the 'section' you are
interested in, for example basic 'system' images:

.List available system images
----
# pveam available --section system
system archlinux-base_2015-24-29-1_x86_64.tar.gz
system centos-7-default_20160205_amd64.tar.xz
system debian-6.0-standard_6.0-7_amd64.tar.gz
system debian-7.0-standard_7.0-3_amd64.tar.gz
system debian-8.0-standard_8.0-1_amd64.tar.gz
system ubuntu-12.04-standard_12.04-1_amd64.tar.gz
system ubuntu-14.04-standard_14.04-1_amd64.tar.gz
system ubuntu-15.04-standard_15.04-1_amd64.tar.gz
system ubuntu-15.10-standard_15.10-1_amd64.tar.gz
----

Before you can use such a template, you need to download it into one
of your storages. You can simply use storage 'local' for that
purpose. For clustered installations, it is preferred to use a shared
storage so that all nodes can access those images.

 pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz

You are now ready to create containers using that image, and you can
list all downloaded images on storage 'local' with:

----
# pveam list local
local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz 190.20MB
----

The above command shows you the full {pve} volume identifiers. They include
the storage name, and most other {pve} commands can use them. For
example you can delete that image later with:

 pveam remove local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz


Container Storage
-----------------

Traditional containers use a very simple storage model, only allowing
a single mount point, the root file system. This was further
restricted to specific file system types like 'ext4' and 'nfs'.
Additional mounts were often done by user-provided scripts. This turned
out to be complex and error prone, so we try to avoid that now.

Our new LXC based container model is more flexible regarding
storage. First, you can have more than a single mount point. This
allows you to choose a suitable storage for each application. For
example, you can use a relatively slow (and thus cheap) storage for
the container root file system. Then you can use a second mount point
to mount a very fast, distributed storage for your database
application.

The second big improvement is that you can use any storage type
supported by the {pve} storage library. That means that you can store
your containers on local 'lvmthin' or 'zfs', shared 'iSCSI' storage,
or even on distributed storage systems like 'ceph'. It also enables us
to use advanced storage features like snapshots and clones. 'vzdump'
can also use the snapshot feature to provide consistent container
backups.
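
As an illustration, a consistent snapshot mode backup of container 100
(a placeholder CTID, assuming the underlying storage supports
snapshots) could be taken with:

 vzdump 100 --mode snapshot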

Last but not least, you can also mount local devices directly, or
mount local directories using bind mounts. That way you can access
local storage inside containers with zero overhead. Such bind mounts
also provide an easy way to share data between different containers.

Container Mountpoints
---------------------

Besides the root directory, the container can also have additional mountpoints.
Currently there are basically three types of mountpoints: storage backed
mountpoints, bind mounts and device mounts.

Storage backed mountpoints are managed by the {pve} storage subsystem and come
in three different flavors:

- Image based: these are raw images containing a single ext4 formatted file
system.
- ZFS subvolumes: these are technically bind mounts, but with managed storage,
and thus allow resizing and snapshotting.
- Directories: passing `size=0` triggers a special case where instead of a raw
image a directory is created.

Bind mounts are not managed by the storage subsystem, so you cannot make
snapshots or deal with quotas from inside the container. With unprivileged
containers you might also run into permission problems caused by the user
mapping, and you cannot use ACLs from inside the container.

Similarly, device mounts are not managed by the storage subsystem, but for
these the `quota` and `acl` options will be honored.
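
A minimal sketch of how such mountpoints look in the configuration file
(the volume name, paths and size are just placeholders):

----
# storage backed mountpoint, a raw image on storage 'local'
mp0: local:107/vm-107-disk-2.raw,mp=/srv/data,size=4G
# bind mount of a directory on the host
mp1: /mnt/bindmounts/shared,mp=/shared
----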

WARNING: Because of existing issues in the Linux kernel's freezer
subsystem the usage of FUSE mounts inside a container is strongly
advised against, as containers need to be frozen for suspend or
snapshot mode backups. If FUSE mounts cannot be replaced by other
mounting mechanisms or storage technologies, it is possible to
establish the FUSE mount on the Proxmox host and use a bind
mountpoint to make it accessible inside the container.

Using quotas inside containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Quotas allow you to set limits inside a container for the amount of disk
space that each user can use.
This only works on ext4 image based storage types and currently does not work
with unprivileged containers.

Activating the `quota` option causes the following mount options to be used for
a mountpoint: `usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0`

This allows quotas to be used like you would on any other system. You can
initialize the `/aquota.user` and `/aquota.group` files by running

 quotacheck -cmug /
 quotaon /

and edit the quotas via the `edquota` command. Refer to the documentation
of the distribution running inside the container for details.

NOTE: You need to run the above commands for every mountpoint by passing
the mountpoint's path instead of just `/`.
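
As a non-interactive alternative to `edquota`, limits can also be set
with the `setquota` command. The user name and limits below (soft and
hard block limits in kilobytes, no inode limits) are just an example:

 setquota -u testuser 102400 122880 0 0 /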

Using ACLs inside containers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The standard POSIX Access Control Lists are also available inside containers.
ACLs allow you to set more detailed file ownership than the traditional user/
group/others model.
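
They are managed with the usual `getfacl` and `setfacl` tools. For
example (the user and path are placeholders), to grant an additional
user read and execute access to a directory and inspect the result:

 setfacl -m u:www-data:rx /srv/www
 getfacl /srv/www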


Container Network
-----------------

You can configure up to 10 network interfaces for a single
container. The corresponding options are called 'net0' to 'net9', and
they can contain the following settings:

include::pct-network-opts.adoc[]
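
For example, a 'net0' entry with a statically configured address (the
addresses here are placeholders) could look like this:

----
net0: name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1,type=veth
----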


Managing Containers with 'pct'
------------------------------

'pct' is the tool to manage Linux Containers on {pve}. You can create
and destroy containers, and control execution (start, stop, migrate,
...). You can use pct to set parameters in the associated config file,
like network configuration or memory limits.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a container based on a Debian template (provided you have
already downloaded the template via the web GUI)

 pct create 100 /var/lib/vz/template/cache/debian-8.0-standard_8.0-1_amd64.tar.gz

Start container 100

 pct start 100

Start a login session via getty

 pct console 100

Enter the LXC namespace and run a shell as root user

 pct enter 100

Display the configuration

 pct config 100

Add a network interface called eth0, bridged to the host bridge vmbr0,
set the address and gateway, while it's running

 pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1

Reduce the memory of the container to 512MB

 pct set 100 -memory 512

Files
-----

'/etc/pve/lxc/<CTID>.conf'::

Configuration file for the container '<CTID>'.


Container Advantages
--------------------

- Simple, and fully integrated into {pve}. Setup looks similar to a normal
VM setup.

* Storage (ZFS, LVM, NFS, Ceph, ...)

* Network

* Authentication

* Cluster

- Fast: minimal overhead, as fast as bare metal

- High density (perfect for idle workloads)

- REST API

- Direct hardware access


Technology Overview
-------------------

- Integrated into {pve} graphical user interface (GUI)

- LXC (https://linuxcontainers.org/)

- cgmanager for cgroup management

- lxcfs to provide a containerized /proc file system

- apparmor

- CRIU: for live migration (planned)

- We use the latest available kernels (4.4.X)

- Image based deployment (templates)

- Container setup from host (Network, DNS, Storage, ...)


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]