1 <!--
2
3 lxc: linux Container library
4
5 (C) Copyright IBM Corp. 2007, 2008
6
7 Authors:
8 Daniel Lezcano <daniel.lezcano at free.fr>
9
10 This library is free software; you can redistribute it and/or
11 modify it under the terms of the GNU Lesser General Public
12 License as published by the Free Software Foundation; either
13 version 2.1 of the License, or (at your option) any later version.
14
15 This library is distributed in the hope that it will be useful,
16 but WITHOUT ANY WARRANTY; without even the implied warranty of
17 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
18 Lesser General Public License for more details.
19
20 You should have received a copy of the GNU Lesser General Public
21 License along with this library; if not, write to the Free Software
22 Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
23
24 -->
25
26 <!DOCTYPE refentry PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
27
28 <!ENTITY seealso SYSTEM "@builddir@/see_also.sgml">
29 ]>
30
31 <refentry>
32
33 <docinfo>
34 <date>@LXC_GENERATE_DATE@</date>
35 </docinfo>
36
37
38 <refmeta>
39 <refentrytitle>lxc</refentrytitle>
40 <manvolnum>7</manvolnum>
41 <refmiscinfo>
42 Version @PACKAGE_VERSION@
43 </refmiscinfo>
44 </refmeta>
45
46 <refnamediv>
47 <refname>lxc</refname>
48
49 <refpurpose>
50 linux containers
51 </refpurpose>
52 </refnamediv>
53
54 <refsect1>
55 <title>Quick start</title>
56 <para>
57       If you are in a hurry and do not want to read this man page,
58       here, without warranty, is a command that launches a shell
59       inside a container with a predefined configuration template.
60       It may work.
61 <command>@BINDIR@/lxc-execute -n foo -f
62 @DOCDIR@/examples/lxc-macvlan.conf /bin/bash</command>
63 </para>
64 </refsect1>
65
66 <refsect1>
67 <title>Overview</title>
68 <para>
69       Container technology is actively being merged into the
70       mainline linux kernel. It provides resource management
71       through the control groups (aka process containers) and
72       resource isolation through the namespaces.
73 </para>
74
75 <para>
76       The linux containers, <command>lxc</command>, aims to use these
77       new functionalities to provide a userspace container object
78       which provides full resource isolation and resource control
79       for an application or a system.
80 </para>
81
82 <para>
83       The first objective of this project is to make life easier
84       for the kernel developers involved in the containers project,
85       and especially to continue working on the new
86       Checkpoint/Restart features. <command>lxc</command> is small
87       enough to easily manage a container with simple command lines
88       and complete enough to be used for other purposes.
89 </para>
90 </refsect1>
91
92 <refsect1>
93 <title>Requirements</title>
94 <para>
95       <command>lxc</command> relies on a set of functionalities
96       provided by the kernel which need to be active. Depending on
97       which functionalities are missing, <command>lxc</command> will
98       either work with a restricted set of functionalities or will
99       simply fail.
100 </para>
101
102 <para>
103       The following list gives the kernel features that must be
104       enabled in the kernel to have a fully featured container:
105 </para>
106 <programlisting>
107 * General setup
108 * Control Group support
109 -> Namespace cgroup subsystem
110 -> Freezer cgroup subsystem
111 -> Cpuset support
112 -> Simple CPU accounting cgroup subsystem
113 -> Resource counters
114 -> Memory resource controllers for Control Groups
115 * Group CPU scheduler
116 -> Basis for grouping tasks (Control Groups)
117 * Namespaces support
118 -> UTS namespace
119 -> IPC namespace
120 -> User namespace
121 -> Pid namespace
122 -> Network namespace
123 * Device Drivers
124 * Character devices
125 -> Support multiple instances of devpts
126 * Network device support
127 -> MAC-VLAN support
128 -> Virtual ethernet pair device
129 * Networking
130 * Networking options
131 -> 802.1d Ethernet Bridging
132 * Security options
133 -> File POSIX Capabilities
134 </programlisting>
135
136 <para>
137
138       Kernel versions >= 2.6.27 as shipped with the distributions
139       will work with <command>lxc</command>; such a kernel will have
140       fewer functionalities available, but enough to be useful.
141
142       With kernel 2.6.29, <command>lxc</command> is fully
143       functional.
144
145 The helper script <command>lxc-checkconfig</command> will give
146 you information about your kernel configuration.
147 </para>
148
149 <para>
150       Before using <command>lxc</command>, your system should be
151       configured with file capabilities, otherwise you will need
152 to run the <command>lxc</command> commands as root.
153 </para>
154
155 <para>
156       The control group can be mounted anywhere, e.g.:
157       <command>mount -t cgroup cgroup /cgroup</command>.
158
159       If you want to dedicate a specific cgroup mount point
160       to <command>lxc</command>, that is, to have different cgroups
161       mounted at different places with different options but
162       let <command>lxc</command> use one location, you can tag
163       the mount with the <option>lxc</option> device name, e.g.:
164       <command>mount -t cgroup lxc /cgroup4lxc</command> or
165       <command>mount -t cgroup -o ns,cpuset,freezer,devices
166       lxc /cgroup4lxc</command>.
167
168 </para>
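
      To make this dedicated hierarchy persistent across reboots, the
      equivalent of the second mount command above can go into
      <filename>/etc/fstab</filename>. This is a sketch: the mount
      point <filename>/cgroup4lxc</filename> and the subsystem list are
      taken from the example above, not requirements.

```
# /etc/fstab entry for a dedicated lxc cgroup hierarchy
lxc  /cgroup4lxc  cgroup  ns,cpuset,freezer,devices  0 0
```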
169
170 </refsect1>
171
172 <refsect1>
173 <title>Functional specification</title>
174 <para>
175       A container is an object isolating some resources of the host
176       for the application or system running inside it.
177 </para>
178 <para>
179 The application / system will be launched inside a
180 container specified by a configuration that is either
181       initially created or passed as a parameter of the start commands.
182 </para>
183
184     <para>How to run an application in a container?</para>
185 <para>
186       Before running an application, you should know which
187       resources you want to isolate. The default configuration is to
188       isolate the pids, the sysv ipc and the mount points. If you want
189       to run a simple shell inside a container, a basic configuration
190       is needed, especially if you want to share the rootfs. If you
191       want to run an application like <command>sshd</command>, you
192       should provide a new network stack and a new hostname. If you
193       want to avoid conflicts with some files,
194       e.g. <filename>/var/run/httpd.pid</filename>, you should
195       remount <filename>/var/run</filename> with an empty
196       directory. If you want to avoid conflicts in all cases,
197       you can specify a rootfs for the container. The rootfs can be a
198       directory tree, previously bind mounted from the initial rootfs,
199       so you can still use your distro but with your
200       own <filename>/etc</filename> and <filename>/home</filename>.
201 </para>
202 <para>
203 Here is an example of directory tree
204 for <command>sshd</command>:
205 <programlisting>
206 [root@lxc sshd]$ tree -d rootfs
207
208 rootfs
209 |-- bin
210 |-- dev
211 | |-- pts
212 | `-- shm
213 | `-- network
214 |-- etc
215 | `-- ssh
216 |-- lib
217 |-- proc
218 |-- root
219 |-- sbin
220 |-- sys
221 |-- usr
222 `-- var
223 |-- empty
224 | `-- sshd
225 |-- lib
226 | `-- empty
227 | `-- sshd
228 `-- run
229 `-- sshd
230 </programlisting>
231
232 and the mount points file associated with it:
233 <programlisting>
234 [root@lxc sshd]$ cat fstab
235
236 /lib /home/root/sshd/rootfs/lib none ro,bind 0 0
237 /bin /home/root/sshd/rootfs/bin none ro,bind 0 0
238 /usr /home/root/sshd/rootfs/usr none ro,bind 0 0
239 /sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
240 </programlisting>
241 </para>
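
      The skeleton of the rootfs above can be created with a short
      shell script. This is only a sketch: the directory names are
      taken from the tree listing, and the actual bind mounts from the
      fstab file still require root and <command>mount</command>(8).

```shell
#!/bin/sh
# Create the directory skeleton of the sshd rootfs shown above.
# The tree is created relative to the current directory.
R=rootfs
mkdir -p "$R/bin" "$R/dev/pts" "$R/dev/shm/network" "$R/etc/ssh" \
         "$R/lib" "$R/proc" "$R/root" "$R/sbin" "$R/sys" "$R/usr" \
         "$R/var/empty/sshd" "$R/var/lib/empty/sshd" "$R/var/run/sshd"
```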
242
243     <para>How to run a system in a container?</para>
244
245     <para>Running a system inside a container is paradoxically easier
246       than running an application. Why? Because you don't have to decide
247       which resources are to be isolated: everything needs to be
248       isolated. The other resources are specified as being isolated but
249       without configuration, because the container will set them up,
250       e.g. the ipv4 address will be set up by the system container's
251       init scripts. Here is an example of the mount points file:
252
253 <programlisting>
254 [root@lxc debian]$ cat fstab
255
256 /dev /home/root/debian/rootfs/dev none bind 0 0
257 /dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0
258 </programlisting>
259
260       More information can be added to the container to ease its
261       configuration, for example making the host's
262       <filename>resolv.conf</filename> file accessible from the container:
263
264 <programlisting>
265 /etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0
266 </programlisting>
267 </para>
268
269 <refsect2>
270 <title>Container life cycle</title>
271 <para>
272         When the container is created, it holds the configuration
273         information. When a process is launched, the container goes
274         through the starting and running states. When the last process
275         running inside the container exits, the container is stopped.
276 </para>
277 <para>
278         If a failure occurs while the container is being initialized,
279         it passes through the aborting state.
280 </para>
281
282 <programlisting>
283 <![CDATA[
284 ---------
285 | STOPPED |<---------------
286 --------- |
287 | |
288 start |
289 | |
290 V |
291 ---------- |
292 | STARTING |--error- |
293 ---------- | |
294 | | |
295 V V |
296 --------- ---------- |
297 | RUNNING | | ABORTING | |
298 --------- ---------- |
299 | | |
300 no process | |
301 | | |
302 V | |
303 ---------- | |
304 | STOPPING |<------- |
305 ---------- |
306 | |
307 ---------------------
308 ]]>
309 </programlisting>
310 </refsect2>
311
312 <refsect2>
313 <title>Configuration</title>
314       <para>The container is configured through a configuration
315         file; the format of this file is described in
316 <citerefentry>
317 <refentrytitle><filename>lxc.conf</filename></refentrytitle>
318 <manvolnum>5</manvolnum>
319 </citerefentry>
320 </para>
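
        As a taste of the format, a minimal configuration might look
        like the following sketch. The key names come from
        <filename>lxc.conf</filename>(5); the bridge name br0 and the
        address are assumptions chosen for illustration.

```
# minimal container configuration (sketch)
lxc.utsname = foo
lxc.network.type = veth
lxc.network.link = br0
lxc.network.ipv4 = 10.0.0.2/24
```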
321 </refsect2>
322
323 <refsect2>
324 <title>Creating / Destroying container
325 (persistent container)</title>
326 <para>
327 A persistent container object can be
328 created via the <command>lxc-create</command>
329       command. It takes a container name as a parameter, plus an
330       optional configuration file and template.
331 The name is used by the different
332 commands to refer to this
333 container. The <command>lxc-destroy</command> command will
334 destroy the container object.
335 <programlisting>
336 lxc-create -n foo
337 lxc-destroy -n foo
338 </programlisting>
339 </para>
340 </refsect2>
341
342 <refsect2>
343 <title>Volatile container</title>
344       <para>It is not mandatory to create a container object
345         before starting it.
346         The container can be started directly with a
347         configuration file as a parameter.
348 </para>
349 </refsect2>
350
351 <refsect2>
352 <title>Starting / Stopping container</title>
353 <para>When the container has been created, it is ready to run an
354 application / system.
355 This is the purpose of the <command>lxc-execute</command> and
356 <command>lxc-start</command> commands.
357       If the container was not created before
358       starting the application, the container will use the
359       configuration file passed as a parameter to the command;
360       if there is no such parameter either,
361       it will use a default isolation.
362       When the application ends, the container is stopped as well,
363       but if needed the <command>lxc-stop</command> command can
364       be used to kill the still-running application.
365 </para>
366
367 <para>
368 Running an application inside a container is not exactly the
369 same thing as running a system. For this reason, there are two
370       different commands to run an application in a container:
371 <programlisting>
372 lxc-execute -n foo [-f config] /bin/bash
373 lxc-start -n foo [-f config] [/bin/bash]
374 </programlisting>
375 </para>
376
377 <para>
378       The <command>lxc-execute</command> command runs the
379       specified command inside the container via an intermediate
380       process, <command>lxc-init</command>.
381       After launching the specified command, lxc-init will wait
382       for it to end, as well as for all other reparented processes
383       (this makes it possible to support daemons in the container).
384       In other words, in the
385       container, <command>lxc-init</command> has pid 1 and the
386       first process of the application has pid 2.
387 </para>
388
389 <para>
390       The <command>lxc-start</command> command will run the specified
391       command directly inside the container.
392       The pid of the first process is 1. If no command is
393       specified, <command>lxc-start</command> will
394 run <filename>/sbin/init</filename>.
395 </para>
396
397 <para>
398 To summarize, <command>lxc-execute</command> is for running
399 an application and <command>lxc-start</command> is better suited for
400 running a system.
401 </para>
402
403 <para>
404 If the application is no longer responding, is inaccessible or is
405       not able to finish by itself, the
406       <command>lxc-stop</command> command will mercilessly kill all
407       the processes in the container.
408 <programlisting>
409 lxc-stop -n foo
410 </programlisting>
411 </para>
412 </refsect2>
413
414 <refsect2>
415 <title>Connect to an available tty</title>
416 <para>
417       If the container is configured with ttys, it is possible
418       to access it through them. It is up to the container to
419       provide a set of available ttys to be used by the following
420       command. When the tty is lost, it is possible to reconnect to it
421       without logging in again.
422 <programlisting>
423 lxc-console -n foo -t 3
424 </programlisting>
425 </para>
426 </refsect2>
427
428 <refsect2>
429 <title>Freeze / Unfreeze container</title>
430 <para>
431       Sometimes it is useful to stop all the processes belonging to
432       a container, e.g. for job scheduling. The commands:
433 <programlisting>
434 lxc-freeze -n foo
435 </programlisting>
436
437       will put all the processes into an uninterruptible state and
438
439 <programlisting>
440 lxc-unfreeze -n foo
441 </programlisting>
442
443 will resume them.
444 </para>
445
446 <para>
447       This feature is available if the cgroup freezer is enabled in
448       the kernel.
449 </para>
450 </refsect2>
451
452 <refsect2>
453 <title>Getting information about container</title>
454     <para>When there are a lot of containers, it is hard to track
455       what has been created or destroyed, what is running, or which
456       pids are running inside a specific container. For this reason,
457       the following commands may be useful:
458 <programlisting>
459 lxc-ls
460 lxc-ps --name foo
461 lxc-info -n foo
462 </programlisting>
463 </para>
464 <para>
465 <command>lxc-ls</command> lists the containers of the
466 system. The command is a script built on top
467       of <command>ls</command>, so it accepts the options of the
468       <command>ls</command> command, e.g.:
469 <programlisting>
470 lxc-ls -C1
471 </programlisting>
472       will display the container list in one column, or:
473 <programlisting>
474 lxc-ls -l
475 </programlisting>
476       will display the container list and their permissions.
477 </para>
478
479 <para>
480 <command>lxc-ps</command> will display the pids for a specific
481 container. Like <command>lxc-ls</command>, <command>lxc-ps</command>
482 is built on top of <command>ps</command> and accepts the same
483       options, e.g.:
484 <programlisting>lxc-ps --name foo --forest</programlisting>
485       will display the process hierarchy for the processes
486       belonging to the 'foo' container.
487
488 <programlisting>lxc-ps --lxc</programlisting>
489 will display all the containers and their processes.
490 </para>
491
492 <para>
493       <command>lxc-info</command> gives information about a specific
494       container; at present, only the state of the container is
495 displayed.
496 </para>
497
498 <para>
499       Here is an example of how the combination of these commands
500       allows listing all the containers and retrieving their state.
501 <programlisting>
502 for i in $(lxc-ls -1); do
503 lxc-info -n $i
504 done
505 </programlisting>
506
507 And displaying all the pids of all the containers:
508
509 <programlisting>
510 for i in $(lxc-ls -1); do
511 lxc-ps --name $i --forest
512 done
513 </programlisting>
514
515 </para>
516
517 <para>
518       <command>lxc-netstat</command> displays network information for
519       a specific container. This command is built on top of
520       the <command>netstat</command> command and accepts its
521       options.
522 </para>
523
524 <para>
525       The following command will display the socket information for
526       the container 'foo'.
527 <programlisting>
528 lxc-netstat -n foo -tano
529 </programlisting>
530 </para>
531
532 </refsect2>
533
534 <refsect2>
535 <title>Monitoring container</title>
536     <para>It is sometimes useful to track the states of a container,
537 for example to monitor it or just to wait for a specific
538 state in a script.
539 </para>
540
541 <para>
542       The <command>lxc-monitor</command> command monitors one or
543       several containers. The parameter of this command accepts a
544       regular expression, for example:
545 <programlisting>
546 lxc-monitor -n "foo|bar"
547 </programlisting>
548 will monitor the states of containers named 'foo' and 'bar', and:
549 <programlisting>
550 lxc-monitor -n ".*"
551 </programlisting>
552 will monitor all the containers.
553 </para>
554 <para>
555 For a container 'foo' starting, doing some work and exiting,
556 the output will be in the form:
557 <programlisting>
558 'foo' changed state to [STARTING]
559 'foo' changed state to [RUNNING]
560 'foo' changed state to [STOPPING]
561 'foo' changed state to [STOPPED]
562 </programlisting>
563 </para>
564 <para>
565       The <command>lxc-wait</command> command waits for a specific
566       state change and then exits. This is useful in scripts to
567       synchronize on the launch or the termination of a container. The
568       parameter is an ORed combination of different states. The
569       following example shows how to wait for a container that was
570       started in the background.
571
572 <programlisting>
573 <![CDATA[
574 # launch lxc-wait in background
575 lxc-wait -n foo -s STOPPED &
576 LXC_WAIT_PID=$!
577
578 # this command goes in background
579 lxc-execute -n foo mydaemon &
580
581 # block until the lxc-wait exits
582 # and lxc-wait exits when the container
583 # is STOPPED
584 wait $LXC_WAIT_PID
585 echo "'foo' is finished"
586 ]]>
587 </programlisting>
588 </para>
589 </refsect2>
590
591 <refsect2>
592 <title>Setting the control group for container</title>
593     <para>The container is tied to the control groups: when a
594       container is started, a control group is created and associated
595       with it. The control group properties can be read and modified
596       while the container is running by using the
597       <command>lxc-cgroup</command> command.
597 </para>
598 <para>
599       The <command>lxc-cgroup</command> command is used to set or get a
600       control group subsystem value associated with a
601       container. The subsystem name is passed through as given; the
602       command does not do any syntax checking on it, and if
603       the subsystem name does not exist, the command will fail.
604 </para>
605 <para>
606 <programlisting>
607 lxc-cgroup -n foo cpuset.cpus
608 </programlisting>
609 will display the content of this subsystem.
610 <programlisting>
611 lxc-cgroup -n foo cpu.shares 512
612 </programlisting>
613 will set the subsystem to the specified value.
614 </para>
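
      Since the container's control group is simply a directory in the
      mounted cgroup hierarchy, the same values can be read and written
      with plain shell. The sketch below assumes the hierarchy is
      mounted on <filename>/cgroup</filename> (overridable via
      CGROUP_ROOT); the helper names cgget and cgset are hypothetical,
      not part of <command>lxc</command>.

```shell
#!/bin/sh
# Minimal stand-ins for 'lxc-cgroup -n <name> <key> [value]',
# reading and writing the container's cgroup directory directly.
CGROUP_ROOT=${CGROUP_ROOT:-/cgroup}

cgget() {               # cgget <container> <subsystem.key>
    cat "$CGROUP_ROOT/$1/$2"
}

cgset() {               # cgset <container> <subsystem.key> <value>
    echo "$3" > "$CGROUP_ROOT/$1/$2"
}
```

      For example, <command>cgget foo cpuset.cpus</command> mirrors
      <command>lxc-cgroup -n foo cpuset.cpus</command> above.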
615 </refsect2>
616 </refsect1>
617
618 <refsect1>
619 <title>Bugs</title>
620     <para><command>lxc</command> is still under development, so the
621       command syntax and the API may change. Version 1.0.0 will be
622       the frozen version.</para>
623 </refsect1>
624
625 &seealso;
626
627 <refsect1>
628 <title>Author</title>
629 <para>Daniel Lezcano <email>daniel.lezcano@free.fr</email></para>
630 </refsect1>
631
632 </refentry>
633
634 <!-- Keep this comment at the end of the file Local variables: mode:
635 sgml sgml-omittag:t sgml-shorttag:t sgml-minimize-attributes:nil
636 sgml-always-quote-attributes:t sgml-indent-step:2 sgml-indent-data:t
637 sgml-parent-document:nil sgml-default-dtd-file:nil
638 sgml-exposed-tags:nil sgml-local-catalogs:nil
639 sgml-local-ecat-files:nil End: -->