1 <!--
2
3 lxc: linux Container library
4
5 (C) Copyright IBM Corp. 2007, 2008
6
7 Authors:
8 Daniel Lezcano <dlezcano at fr.ibm.com>
9
10 This library is free software; you can redistribute it and/or
11 modify it under the terms of the GNU Lesser General Public
12 License as published by the Free Software Foundation; either
13 version 2.1 of the License, or (at your option) any later version.
14
15 This library is distributed in the hope that it will be useful,
16 but WITHOUT ANY WARRANTY; without even the implied warranty of
17 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
18 Lesser General Public License for more details.
19
20 You should have received a copy of the GNU Lesser General Public
21 License along with this library; if not, write to the Free Software
22 Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
23
24 -->
25
26 <!DOCTYPE refentry PUBLIC "-//Davenport//DTD DocBook V3.0//EN">
27
28 <refentry>
29
30 <docinfo>
31 <date>@LXC_GENERATE_DATE@</date>
32 </docinfo>
33
34
35 <refmeta>
36 <refentrytitle>lxc</refentrytitle>
37 <manvolnum>7</manvolnum>
38 <refmiscinfo>
39 Version @LXC_MAJOR_VERSION@.@LXC_MINOR_VERSION@.@LXC_MICRO_VERSION@
40 </refmiscinfo>
41 </refmeta>
42
43 <refnamediv>
44 <refname>lxc</refname>
45
46 <refpurpose>
47 linux containers
48 </refpurpose>
49 </refnamediv>
50
51 <refsect1>
52 <title>Quick start</title>
<para>
If you are in a hurry and do not want to read this man page: here, without warranty, is the command to launch a shell inside a container with a predefined configuration template. It may work.
<command>
@BINDIR@/lxc-execute -n foo -f @SYSCONFDIR@/lxc/lxc-macvlan.conf /bin/bash
</command>
</para>
62 </refsect1>
63
64 <refsect1>
65 <title>Overview</title>
<para>
Container technology is actively being merged into the mainstream Linux kernel. It provides resource management through control groups (also known as process containers) and resource isolation through namespaces.
</para>

<para>
Linux Containers, <command>lxc</command>, aims to use these new functionalities to provide a userspace container object offering full resource isolation and resource control for an application or a system.
</para>

<para>
The first objective of this project is to make life easier for the kernel developers involved in the containers project, and especially to continue working on the new Checkpoint/Restart features. <command>lxc</command> is small enough to easily manage a container with simple command lines and complete enough to be used for other purposes.
</para>
88 </refsect1>
89
90 <refsect1>
91 <title>Requirements</title>
<para>
<command>lxc</command> relies on a set of functionalities provided by the kernel, which need to be active. Depending on which functionalities are missing, <command>lxc</command> will either work with a restricted feature set or simply fail.
</para>
99
<para>
The following list gives the kernel features that must be enabled to have a full-featured container:
</para>
104 <programlisting>
105 * General
106 * Control Group support
107 -> namespace cgroup subsystem
108 -> cpuset support
109 -> Group CPU scheduler
110 -> control group freeze subsystem
111 -> Basis for grouping tasks (Control Groups)
112 -> Simple CPU accounting
113 -> Resource counters
114 -> Memory resource controllers for Control Groups
115 -> Namespace support
116 -> UTS namespace
117 -> IPC namespace
118 -> User namespace
119 -> Pid namespace
120 * Network support
121 -> Networking options
122 -> Network namespace support
123 </programlisting>
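<para>
The feature list above can be checked mechanically against a kernel configuration file. The helper below is a hypothetical sketch, not part of <command>lxc</command>; the CONFIG_* option names are an assumed mapping of the features above and should be verified against your kernel version.
</para>

```shell
# Hypothetical checker: scan a kernel .config for the options lxc wants.
# The CONFIG_* names below are an assumed mapping of the feature list
# above; adjust them for your kernel version.
check_lxc_config() {
    config_file="$1"
    status=0
    for opt in CONFIG_CGROUPS CONFIG_CPUSETS CONFIG_CGROUP_FREEZER \
               CONFIG_UTS_NS CONFIG_IPC_NS CONFIG_USER_NS \
               CONFIG_PID_NS CONFIG_NET_NS; do
        # report each option that is not built in
        if ! grep -q "^${opt}=y" "$config_file"; then
            echo "missing: $opt"
            status=1
        fi
    done
    return $status
}

# Typical usage (config path varies by distro):
# zcat /proc/config.gz > /tmp/config && check_lxc_config /tmp/config
```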
124
<para>
For the moment, the easiest way to have all the features in the kernel is to use the git tree at:
<systemitem>
git://git.kernel.org/pub/scm/linux/kernel/git/daveh/linux-2.6-lxc.git
</systemitem>

But a kernel version >= 2.6.27, as shipped with the distros, may work with <command>lxc</command>; it will have fewer functionalities, but enough to be interesting.

The planned kernel version with which <command>lxc</command> should be fully functional is 2.6.29.
</para>
139
<para>
Before using <command>lxc</command>, your system should be configured with file capabilities; otherwise you will need to run the <command>lxc</command> commands as root. A control group filesystem must be mounted somewhere, e.g.:
<command>mount -t cgroup cgroup /cgroup</command>
</para>
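<para>
Whether a control group filesystem is already mounted can be detected from the mounts table. The helper below is a hypothetical sketch, not an <command>lxc</command> command; it only inspects a file in the <filename>/proc/mounts</filename> format.
</para>

```shell
# Hypothetical helper: succeed if a cgroup filesystem appears in the
# given mounts table (normally /proc/mounts).
cgroup_mounted() {
    # field 3 of /proc/mounts is the filesystem type
    awk '$3 == "cgroup" { found = 1 } END { exit !found }' "$1"
}

# Typical usage (the mount itself requires root):
# cgroup_mounted /proc/mounts || { mkdir -p /cgroup; mount -t cgroup cgroup /cgroup; }
```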
147 </refsect1>
148
149 <refsect1>
150 <title>Functional specification</title>
<para>
A container is an object in which the configuration is persistent. The application is launched inside this container and uses the configuration which was previously created.
</para>
157
<para>How to run an application in a container?</para>
<para>
Before running an application, you should know which resources you want to isolate. The default configuration is to isolate the pids, the sysv ipc and the mount points. If you want to run a simple shell inside a container, a basic configuration is enough, especially if you want to share the rootfs. If you want to run an application like <command>sshd</command>, you should provide a new network stack and a new hostname. If you want to avoid conflicts with some files, e.g. <filename>/var/run/httpd.pid</filename>, you should remount <filename>/var/run</filename> with an empty directory. If you want to avoid conflicts in all cases, you can specify a rootfs for the container. The rootfs can be a directory tree previously bind mounted from the initial rootfs, so you can still use your distro, but with your own <filename>/etc</filename> and <filename>/home</filename>.
</para>
176 <para>
177 Here is an example of directory tree
178 for <command>sshd</command>:
179 <programlisting>
180 [root@lxc sshd]$ tree -d rootfs
181
182 rootfs
183 |-- bin
184 |-- dev
185 | |-- pts
186 | `-- shm
187 | `-- network
188 |-- etc
189 | `-- ssh
190 |-- lib
191 |-- proc
192 |-- root
193 |-- sbin
194 |-- sys
195 |-- usr
196 `-- var
197 |-- empty
198 | `-- sshd
199 |-- lib
200 | `-- empty
201 | `-- sshd
202 `-- run
203 `-- sshd
204 </programlisting>
205
206 and the mount points file associated with it:
207 <programlisting>
208 [root@lxc sshd]$ cat fstab
209
210 /lib /home/root/sshd/rootfs/lib none ro,bind 0 0
211 /bin /home/root/sshd/rootfs/bin none ro,bind 0 0
212 /usr /home/root/sshd/rootfs/usr none ro,bind 0 0
213 /sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
214 </programlisting>
215 </para>
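<para>
The directory tree and the mount points file above would be tied together by the container configuration file. The fragment below is a hypothetical sketch; the exact key names and values must be checked against
<citerefentry>
<refentrytitle><filename>lxc.conf</filename></refentrytitle>
<manvolnum>5</manvolnum>
</citerefentry>.
<programlisting>
# hypothetical configuration sketch for the sshd container;
# see lxc.conf(5) for the authoritative key names
lxc.utsname = sshd
lxc.rootfs = /home/root/sshd/rootfs
lxc.mount = /home/root/sshd/fstab
lxc.network.type = macvlan
lxc.network.link = eth0
</programlisting>
</para>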
216
<para>How to run a system in a container?</para>
218
<para>Running a system inside a container is paradoxically easier
than running an application. Why? Because you don't have to care
about which resources are to be isolated: everything needs to be isolated,
except <filename>/dev</filename>, which needs to be remounted in
the container rootfs. The other resources are specified as being
isolated, but without configuration, because the container will set
them up, e.g. the ipv4 address will be set up by the system
container init scripts. Here is an example of the mount points
file:
228
229 <programlisting>
230 [root@lxc debian]$ cat fstab
231
232 /dev /home/root/debian/rootfs/dev none bind 0 0
233 /dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0
234 </programlisting>
235
More information can be added to the container to facilitate the
configuration. For example, you can make the host's
<filename>resolv.conf</filename> file accessible from the container:
239
240 <programlisting>
241 /etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0
242 </programlisting>
243 </para>
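<para>
Similarly, the network of a system container is declared in the configuration file but left unconfigured, since the container's own init scripts will configure it. The fragment below is a hypothetical sketch; check the key names against
<citerefentry>
<refentrytitle><filename>lxc.conf</filename></refentrytitle>
<manvolnum>5</manvolnum>
</citerefentry>.
<programlisting>
# hypothetical sketch: a new network stack attached to the host,
# with no address, so the container's init scripts set it up
lxc.utsname = debian
lxc.rootfs = /home/root/debian/rootfs
lxc.mount = /home/root/debian/fstab
lxc.network.type = macvlan
lxc.network.link = eth0
lxc.network.flags = up
</programlisting>
</para>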
244
245 <refsect2>
246 <title>Container life cycle</title>
<para>
When the container is created, it contains the configuration
information. When a process is launched, the container moves to the
starting and then running states. When the last process running inside the
container exits, the container is stopped.
</para>
<para>
In case of a failure while the container is being initialized, it
passes through the aborting state.
</para>
257
258 <programlisting>
259
260 ---------
261 | STOPPED |<---------------
262 --------- |
263 | |
264 start |
265 | |
266 V |
267 ---------- |
268 | STARTING |--error- |
269 ---------- | |
270 | | |
271 V V |
272 --------- ---------- |
273 | RUNNING | | ABORTING | |
274 --------- ---------- |
275 | | |
276 no process | |
277 | | |
278 V | |
279 ---------- | |
280 | STOPPING |<------- |
281 ---------- |
282 | |
283 ---------------------
284
285 </programlisting>
286 </refsect2>
287
288 <refsect2>
289 <title>Configuration</title>
<para>The container is configured through a configuration
file; the format of this file is described in
<citerefentry>
<refentrytitle><filename>lxc.conf</filename></refentrytitle>
<manvolnum>5</manvolnum>
</citerefentry>.
</para>
297 </refsect2>
298
299 <refsect2>
300 <title>Creating / Destroying the containers</title>
<para>
A container is created via the <command>lxc-create</command>
command. It takes a container name as a parameter and an
optional configuration file. The name is used by the different
commands to refer to this
container. The <command>lxc-destroy</command> command destroys
the container object.
<programlisting>
lxc-create -n foo
lxc-destroy -n foo
</programlisting>
</para>
313 </refsect2>
314
315 <refsect2>
316 <title>Starting / Stopping a container</title>
<para>When the container has been created, it is ready to run an
application or a system. When the application has to be destroyed,
the container can be stopped; that will kill all the processes
of the container.</para>
321
<para>
Running an application inside a container is not exactly the
same thing as running a system. For this reason, there are two
commands to run an application in a container:
<programlisting>
lxc-execute -n foo [-f config] /bin/bash
lxc-start -n foo [/bin/bash]
</programlisting>
</para>
331
<para>
The <command>lxc-execute</command> command runs the specified
command in a container; it mounts <filename>/proc</filename> and
automatically creates and destroys the container if it does not
already exist. Furthermore, it creates an intermediate
process, <command>lxc-init</command>, which is in charge of
launching the specified command; this makes it possible to support
daemons in the container. In other words, inside the
container <command>lxc-init</command> has pid 1 and the
first process of the application has pid 2.
</para>
343
<para>
The <command>lxc-start</command> command runs the specified
command in the container, doing nothing other than using the
configuration specified by <command>lxc-create</command>.
The pid of the first process is 1. If no command is
specified, <command>lxc-start</command> runs
<filename>/sbin/init</filename>.
</para>
352
353 <para>
354 To summarize, <command>lxc-execute</command> is for running
355 an application and <command>lxc-start</command> is for
356 running a system.
357 </para>
358
<para>
If the application is no longer responding, is inaccessible or is
not able to finish by itself, the
<command>lxc-stop</command> command will kill all the
processes in the container without pity.
<programlisting>
lxc-stop -n foo
</programlisting>
</para>
368 </refsect2>
369
370 <refsect2>
371 <title>Connect to an available tty</title>
<para>
If the container is configured with ttys, it is possible
to access it through them. It is up to the container to
provide a set of available ttys to be used by the following
command. When a tty is lost, it is possible to reconnect to it
without logging in again.
<programlisting>
lxc-console -n foo -t 3
</programlisting>
</para>
382 </refsect2>
383
384 <refsect2>
385 <title>Freeze / Unfreeze a container</title>
<para>
Sometimes, it is useful to stop all the processes belonging to
a container, e.g. for job scheduling. The command:
<programlisting>
lxc-freeze -n foo
</programlisting>

will put all the processes in an uninterruptible state, and

<programlisting>
lxc-unfreeze -n foo
</programlisting>

will resume all the tasks.
</para>
401
402 <para>
403 This feature is enabled if the cgroup freezer is enabled in the
404 kernel.
405 </para>
406 </refsect2>
407
408 <refsect2>
409 <title>Getting information about the container</title>
<para>When there are a lot of containers, it is hard to keep track of
what has been created or destroyed, what is running, or which pids are
running inside a specific container. For this reason, the
following commands give this information:
414 <programlisting>
415 lxc-ls
416 lxc-ps -n foo
417 lxc-info -n foo
418 </programlisting>
419 </para>
<para>
<command>lxc-ls</command> lists the containers of the
system. The command is a script built on top
of <command>ls</command>, so it accepts the options of the
<command>ls</command> command, e.g.:
<programlisting>
lxc-ls -C1
</programlisting>
will display the container list in one column, or:
<programlisting>
lxc-ls -l
</programlisting>
will display the container list and their permissions.
</para>
434
<para>
<command>lxc-ps</command> displays the pids for a specific
container. Like <command>lxc-ls</command>, <command>lxc-ps</command>
is built on top of <command>ps</command> and accepts the same
options, e.g.:
<programlisting>
lxc-ps -n foo --forest
</programlisting>

will display the process hierarchy for the container 'foo'.
</para>
446
<para>
<command>lxc-info</command> gives information about a specific
container; at present, only the state of the container is
displayed.
</para>
452
<para>
Here is an example of how the combination of these commands
allows listing all the containers and retrieving their state.
456 <programlisting>
457 for i in $(lxc-ls -1); do
458 lxc-info -n $i
459 done
460 </programlisting>
461
462 And displaying all the pids of all the containers:
463
464 <programlisting>
465 for i in $(lxc-ls -1); do
466 lxc-ps -n $i --forest
467 done
468 </programlisting>
469
470 </para>
471
<para>
<command>lxc-netstat</command> displays network information for
a specific container. This command is built on top of
the <command>netstat</command> command and accepts its
options.
</para>
478
<para>
The following command will display the socket information for
the container 'foo'.
482 <programlisting>
483 lxc-netstat -n foo -tano
484 </programlisting>
485 </para>
486
487 </refsect2>
488
489 <refsect2>
490 <title>Monitoring the containers</title>
<para>It is sometimes useful to track the state of a container,
for example to monitor it, or just to wait for a specific
state in a script.
</para>
495
<para>
The <command>lxc-monitor</command> command monitors one or
several containers. Its parameter accepts a
regular expression, for example:
500 <programlisting>
501 lxc-monitor -n "foo|bar"
502 </programlisting>
503 will monitor the states of containers named 'foo' and 'bar', and:
504 <programlisting>
505 lxc-monitor -n ".*"
506 </programlisting>
507 will monitor all the containers.
508 </para>
509 <para>
510 For a container 'foo' starting, doing some work and exiting,
511 the output will be in the form:
512 <programlisting>
513 'foo' changed state to [STARTING]
514 'foo' changed state to [RUNNING]
515 'foo' changed state to [STOPPING]
516 'foo' changed state to [STOPPED]
517 </programlisting>
518 </para>
<para>
The <command>lxc-wait</command> command waits for a specific
state change and then exits. This is useful in scripts to
synchronize on the launch or the end of a container. The
parameter is an ORed combination of different states. The
following example shows how to wait for a container that went
into the background.
526
527 <programlisting>
528
529 # launch lxc-wait in background
530 lxc-wait -n foo -s STOPPED &
531 LXC_WAIT_PID=$!
532
533 # this command goes in background
534 lxc-execute -n foo mydaemon &
535
536 # block until the lxc-wait exits
537 # and lxc-wait exits when the container
538 # is STOPPED
539 wait $LXC_WAIT_PID
540 echo "'foo' is finished"
541
542 </programlisting>
543 </para>
544 </refsect2>
545
546 <refsect2>
547 <title>Setting the control group for a container</title>
<para>The container is tied to the control groups: when a
container is started, a control group is created and associated
with it. The control group properties can be read and modified
while the container is running by using the <command>lxc-cgroup</command> command.
</para>
<para>
The <command>lxc-cgroup</command> command is used to get or set a
control group subsystem value associated with a
container. The subsystem name is handled by the user; the
command does no syntax checking on the subsystem name, and if
the subsystem name does not exist, the command will fail.
</para>
560 <para>
561 <programlisting>
562 lxc-cgroup -n foo cpuset.cpus
563 </programlisting>
564 will display the content of this subsystem.
565 <programlisting>
566 lxc-cgroup -n foo cpu.shares 512
567 </programlisting>
568 will set the subsystem to the specified value.
569 </para>
570 </refsect2>
571 </refsect1>
572
573 <refsect1>
574 <title>Bugs</title>
<para><command>lxc</command> is still in development, so the
command syntax and the API may change. Version 1.0.0 will be
the frozen version.</para>
578 </refsect1>
579
580 <refsect1>
581 <title>See Also</title>
582 <simpara>
583 <citerefentry>
584 <refentrytitle><command>lxc-create</command></refentrytitle>
585 <manvolnum>1</manvolnum>
586 </citerefentry>,
587
588 <citerefentry>
589 <refentrytitle><command>lxc-destroy</command></refentrytitle>
590 <manvolnum>1</manvolnum>
591 </citerefentry>,
592
593 <citerefentry>
594 <refentrytitle><command>lxc-start</command></refentrytitle>
595 <manvolnum>1</manvolnum>
596 </citerefentry>,
597
598 <citerefentry>
599 <refentrytitle><command>lxc-execute</command></refentrytitle>
600 <manvolnum>1</manvolnum>
601 </citerefentry>,
602
603 <citerefentry>
604 <refentrytitle><command>lxc-stop</command></refentrytitle>
605 <manvolnum>1</manvolnum>
606 </citerefentry>,
607
608 <citerefentry>
609 <refentrytitle><command>lxc-console</command></refentrytitle>
610 <manvolnum>1</manvolnum>
611 </citerefentry>,
612
613 <citerefentry>
614 <refentrytitle><command>lxc-monitor</command></refentrytitle>
615 <manvolnum>1</manvolnum>
616 </citerefentry>,
617
618 <citerefentry>
619 <refentrytitle><command>lxc-wait</command></refentrytitle>
620 <manvolnum>1</manvolnum>
621 </citerefentry>,
622
623 <citerefentry>
624 <refentrytitle><command>lxc-cgroup</command></refentrytitle>
625 <manvolnum>1</manvolnum>
626 </citerefentry>,
627
628 <citerefentry>
629 <refentrytitle><command>lxc-ls</command></refentrytitle>
630 <manvolnum>1</manvolnum>
631 </citerefentry>,
632
633 <citerefentry>
634 <refentrytitle><command>lxc-ps</command></refentrytitle>
635 <manvolnum>1</manvolnum>
636 </citerefentry>,
637
638 <citerefentry>
639 <refentrytitle><command>lxc-info</command></refentrytitle>
640 <manvolnum>1</manvolnum>
641 </citerefentry>,
642
643 <citerefentry>
644 <refentrytitle><command>lxc-freeze</command></refentrytitle>
645 <manvolnum>1</manvolnum>
646 </citerefentry>,
647
648 <citerefentry>
649 <refentrytitle><command>lxc-unfreeze</command></refentrytitle>
650 <manvolnum>1</manvolnum>
651 </citerefentry>,
652
653 <citerefentry>
654 <refentrytitle><command>lxc.conf</command></refentrytitle>
655 <manvolnum>5</manvolnum>
</citerefentry>
657
658 </simpara>
659 </refsect1>
660
661 <refsect1>
662 <title>Author</title>
663 <para>Daniel Lezcano <email>daniel.lezcano@free.fr</email></para>
664 </refsect1>
665
666 </refentry>
667
668 <!-- Keep this comment at the end of the file Local variables: mode:
669 sgml sgml-omittag:t sgml-shorttag:t sgml-minimize-attributes:nil
670 sgml-always-quote-attributes:t sgml-indent-step:2 sgml-indent-data:t
671 sgml-parent-document:nil sgml-default-dtd-file:nil
672 sgml-exposed-tags:nil sgml-local-catalogs:nil
673 sgml-local-ecat-files:nil End: -->