<!--

lxc: linux Container library

(C) Copyright IBM Corp. 2007, 2008

Authors:
Daniel Lezcano <dlezcano at fr.ibm.com>

This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.

This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

-->

<!DOCTYPE refentry PUBLIC "-//Davenport//DTD DocBook V3.0//EN" [

<!ENTITY seealso SYSTEM "@builddir@/see_also.sgml">
]>

<refentry>

  <docinfo>
    <date>@LXC_GENERATE_DATE@</date>
  </docinfo>

  <refmeta>
    <refentrytitle>lxc</refentrytitle>
    <manvolnum>7</manvolnum>
    <refmiscinfo>
      Version @PACKAGE_VERSION@
    </refmiscinfo>
  </refmeta>

  <refnamediv>
    <refname>lxc</refname>

    <refpurpose>
      linux containers
    </refpurpose>
  </refnamediv>
  <refsect1>
    <title>Quick start</title>
    <para>
      You are in a hurry and you don't want to read this man page? Ok,
      without warranty, here is a command to launch a shell inside a
      container with a predefined configuration template. It may work.
      <command>
	@BINDIR@/lxc-execute -n foo -f @DOCDIR@/examples/lxc-macvlan.conf /bin/bash
      </command>
    </para>
  </refsect1>

  <refsect1>
    <title>Overview</title>
    <para>
      Container technology is actively being pushed into the
      mainstream Linux kernel. It provides resource management
      through the control groups, aka process containers, and
      resource isolation through the namespaces.
    </para>

    <para>
      Linux containers, <command>lxc</command>, aims to use these
      new functionalities to provide a userspace container object
      which delivers full resource isolation and resource control
      for an application or a system.
    </para>

    <para>
      The first objective of this project is to make life easier
      for the kernel developers involved in the containers project,
      especially those working on the new Checkpoint/Restart
      features. <command>lxc</command> is small enough to easily
      manage a container with simple command lines and complete
      enough to be used for other purposes.
    </para>
  </refsect1>

  <refsect1>
    <title>Requirements</title>
    <para>
      <command>lxc</command> relies on a set of functionalities
      provided by the kernel which need to be active. Depending on
      the missing functionalities, <command>lxc</command> will
      work with a restricted set of features or will simply fail.
    </para>

    <para>
      The following list gives the kernel features to be enabled in
      the kernel to have a full-featured container:
    </para>
    <programlisting>
* General setup
  * Control Group support
    -> Namespace cgroup subsystem
    -> Freezer cgroup subsystem
    -> Cpuset support
    -> Simple CPU accounting cgroup subsystem
    -> Resource counters
      -> Memory resource controllers for Control Groups
* Group CPU scheduler
  -> Basis for grouping tasks (Control Groups)
* Namespaces support
  -> UTS namespace
  -> IPC namespace
  -> User namespace
  -> Pid namespace
  -> Network namespace
* Security options
  -> File POSIX Capabilities
    </programlisting>
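
    <para>
      A quick way to verify that these options are enabled is to grep
      the kernel configuration. The following is only a sketch: it
      assumes your kernel exposes its configuration
      through <filename>/proc/config.gz</filename> or
      a <filename>/boot/config-*</filename> file, which depends on
      the distro.
      <programlisting>
zgrep -e CONFIG_CGROUPS -e CONFIG_NAMESPACES /proc/config.gz
# or, if /proc/config.gz is not available:
grep -e CONFIG_CGROUPS -e CONFIG_UTS_NS -e CONFIG_IPC_NS /boot/config-$(uname -r)
      </programlisting>
    </para>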

    <para>
      A kernel version >= 2.6.27, as shipped with the distros, will
      work with <command>lxc</command>; it will have fewer
      functionalities, but enough to be interesting.

      With kernel 2.6.29, <command>lxc</command> is fully
      functional.
    </para>

    <para>
      Before using <command>lxc</command>, your system should be
      configured with file capabilities, otherwise you will need
      to run the <command>lxc</command> commands as root.
    </para>

    <para>
      The control group can be mounted anywhere, eg:
      <command>mount -t cgroup cgroup /cgroup</command>.

      If you want to dedicate a specific cgroup mount point
      for <command>lxc</command>, that is to have different cgroups
      mounted at different places with different options but
      let <command>lxc</command> use one location, you can bind
      the mount point to the <option>lxc</option> name, eg:
      <command>mount -t cgroup lxc /cgroup4lxc</command> or
      <command>mount -t cgroup -o ns,cpuset,freezer,devices
      lxc /cgroup4lxc</command>
    </para>
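
    <para>
      To make such a mount persistent across reboots, an entry along
      these lines in <filename>/etc/fstab</filename> may work. This
      is a sketch: it assumes the <filename>/cgroup4lxc</filename>
      directory already exists and that your kernel accepts these
      cgroup options.
      <programlisting>
lxc   /cgroup4lxc   cgroup   ns,cpuset,freezer,devices   0 0
      </programlisting>
    </para>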

  </refsect1>

  <refsect1>
    <title>Functional specification</title>
    <para>
      A container is an object where the configuration is
      persistent. The application will be launched inside this
      container and it will use the configuration which was
      previously created.
    </para>

    <para>How to run an application in a container?</para>
    <para>
      Before running an application, you should know which resources
      you want to isolate. The default configuration is to isolate
      the pids, the sysv ipc and the mount points. If you want to
      run a simple shell inside a container, a basic configuration
      is needed, especially if you want to share the rootfs. If you
      want to run an application like <command>sshd</command>, you
      should provide a new network stack and a new hostname. If you
      want to avoid conflicts with some files,
      eg. <filename>/var/run/httpd.pid</filename>, you should
      remount <filename>/var/run</filename> with an empty
      directory. If you want to avoid conflicts in all cases, you
      can specify a rootfs for the container. The rootfs can be a
      directory tree, previously bind mounted from the initial
      rootfs, so you can still use your distro but with your
      own <filename>/etc</filename> and <filename>/home</filename>.
    </para>
    <para>
      Here is an example of directory tree
      for <command>sshd</command>:
      <programlisting>
[root@lxc sshd]$ tree -d rootfs

rootfs
|-- bin
|-- dev
|   |-- pts
|   `-- shm
|       `-- network
|-- etc
|   `-- ssh
|-- lib
|-- proc
|-- root
|-- sbin
|-- sys
|-- usr
`-- var
    |-- empty
    |   `-- sshd
    |-- lib
    |   `-- empty
    |       `-- sshd
    `-- run
        `-- sshd
      </programlisting>

      and the mount points file associated with it:
      <programlisting>
[root@lxc sshd]$ cat fstab

/lib  /home/root/sshd/rootfs/lib  none ro,bind 0 0
/bin  /home/root/sshd/rootfs/bin  none ro,bind 0 0
/usr  /home/root/sshd/rootfs/usr  none ro,bind 0 0
/sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
      </programlisting>
    </para>
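
    <para>
      A container configuration tying these pieces together could
      look like the following sketch. The file path and the values
      are illustrative; see
      <citerefentry>
	<refentrytitle><filename>lxc.conf</filename></refentrytitle>
	<manvolnum>5</manvolnum>
      </citerefentry> for the exact syntax.
      <programlisting>
# /home/root/sshd/lxc-sshd.conf (hypothetical path)
lxc.utsname = sshd
lxc.rootfs = /home/root/sshd/rootfs
lxc.mount = /home/root/sshd/fstab
      </programlisting>

      The application could then be launched, for example, with:
      <programlisting>
lxc-execute -n sshd -f /home/root/sshd/lxc-sshd.conf /usr/sbin/sshd
      </programlisting>
    </para>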

    <para>How to run a system in a container?</para>

    <para>Running a system inside a container is paradoxically easier
      than running an application. Why? Because you don't have to care
      about which resources are to be isolated: everything needs to be
      isolated. The other resources are specified as being isolated but
      without configuration because the container will set them up,
      eg. the ipv4 address will be set up by the system container
      init scripts. Here is an example of the mount points file:

      <programlisting>
[root@lxc debian]$ cat fstab

/dev     /home/root/debian/rootfs/dev     none bind 0 0
/dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0
      </programlisting>

      More information can be added to the container to facilitate the
      configuration. For example, you can make the
      <filename>resolv.conf</filename> file belonging to the host
      accessible from the container:

      <programlisting>
/etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0
      </programlisting>
    </para>
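
    <para>
      The container configuration can then declare the resources
      without configuring them. The following sketch, with
      illustrative values, declares a network stack but no address,
      leaving the address setup to the init scripts of the system
      container:
      <programlisting>
# /home/root/debian/lxc-debian.conf (hypothetical path)
lxc.utsname = debian
lxc.rootfs = /home/root/debian/rootfs
lxc.mount = /home/root/debian/fstab
lxc.network.type = veth
lxc.network.link = br0
      </programlisting>
    </para>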

    <refsect2>
      <title>Container life cycle</title>
      <para>
	When the container is created, it contains the configuration
	information. When a process is launched, the container will be
	starting and running. When the last process running inside the
	container exits, the container is stopped.
      </para>
      <para>
	In case of failure when the container is initialized, it will
	pass through the ABORTING state.
      </para>

      <programlisting>

   ---------
  | STOPPED |<---------------
   ---------                 |
       |                     |
     start                   |
       |                     |
       V                     |
   ----------                |
  | STARTING |--error-       |
   ----------         |      |
       |              |      |
       V              V      |
   ---------    ----------   |
  | RUNNING |  | ABORTING |  |
   ---------    ----------   |
       |              |      |
  no process          |      |
       |              |      |
       V              |      |
   ----------         |      |
  | STOPPING |<-------       |
   ----------                |
       |                     |
        ---------------------

      </programlisting>
    </refsect2>

    <refsect2>
      <title>Configuration</title>
      <para>The container is configured through a configuration
	file; the format of the configuration file is described in
	<citerefentry>
	  <refentrytitle><filename>lxc.conf</filename></refentrytitle>
	  <manvolnum>5</manvolnum>
	</citerefentry>
      </para>
    </refsect2>

    <refsect2>
      <title>Creating / Destroying the containers</title>
      <para>
	The container is created via the <command>lxc-create</command>
	command. It takes a container name as a parameter and an
	optional configuration file. The name is used by the different
	commands to refer to this
	container. The <command>lxc-destroy</command> command will
	destroy the container object.
	<programlisting>
lxc-create -n foo
lxc-destroy -n foo
	</programlisting>
      </para>
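      <para>
	To create the container with a configuration file, pass it
	with the <option>-f</option> option, for example (the path is
	illustrative):
	<programlisting>
lxc-create -n foo -f /etc/lxc/foo.conf
	</programlisting>
      </para>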
    </refsect2>

    <refsect2>
      <title>Starting / Stopping a container</title>
      <para>When the container has been created, it is ready to run an
	application / system. When the application has to be destroyed,
	the container can be stopped; that will kill all the processes
	of the container.</para>

      <para>
	Running an application inside a container is not exactly the
	same thing as running a system. For this reason, there are two
	commands to run an application in a container:
	<programlisting>
lxc-execute -n foo [-f config] /bin/bash
lxc-start -n foo [/bin/bash]
	</programlisting>
      </para>

      <para>
	The <command>lxc-execute</command> command will run the
	specified command in a container, but it will mount /proc
	and will create and destroy the container automatically if
	it does not exist. It will furthermore create an intermediate
	process, <command>lxc-init</command>, which is in charge of
	launching the specified command; that allows supporting daemons
	in the container. In other words, in the
	container <command>lxc-init</command> has the pid 1 and the
	first process of the application has the pid 2.
      </para>

      <para>
	The <command>lxc-start</command> command will run the specified
	command in the container, doing nothing more than using the
	configuration specified by <command>lxc-create</command>.
	The pid of the first process is 1. If no command is
	specified, <command>lxc-start</command> will
	run <filename>/sbin/init</filename>.
      </para>

      <para>
	To summarize, <command>lxc-execute</command> is for running
	an application and <command>lxc-start</command> is for
	running a system.
      </para>

      <para>
	If the application is no longer responding, is inaccessible or is
	not able to finish by itself, a
	wild <command>lxc-stop</command> command will kill all the
	processes in the container without pity.
	<programlisting>
lxc-stop -n foo
	</programlisting>
      </para>
    </refsect2>

    <refsect2>
      <title>Connect to an available tty</title>
      <para>
	If the container is configured with ttys, it is possible
	to access it through them. It is up to the container to
	provide a set of available ttys to be used by the following
	command. When the tty is lost, it is possible to reconnect to
	it without logging in again.
	<programlisting>
lxc-console -n foo -t 3
	</programlisting>
      </para>
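      <para>
	The ttys are typically made available through the container
	configuration, for example with a line such as the following
	sketch (see <citerefentry>
	  <refentrytitle><filename>lxc.conf</filename></refentrytitle>
	  <manvolnum>5</manvolnum>
	</citerefentry> for the details):
	<programlisting>
lxc.tty = 4
	</programlisting>
      </para>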
    </refsect2>

    <refsect2>
      <title>Freeze / Unfreeze a container</title>
      <para>
	Sometimes, it is useful to stop all the processes belonging to
	a container, eg. for job scheduling. The commands:
	<programlisting>
lxc-freeze -n foo
	</programlisting>

	will put all the processes in an uninterruptible state and

	<programlisting>
lxc-unfreeze -n foo
	</programlisting>

	will resume all the tasks.
      </para>

      <para>
	This feature is available only if the cgroup freezer is
	enabled in the kernel.
      </para>
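      <para>
	A minimal job-scheduling sketch built on these commands could
	be:
	<programlisting>
# suspend the 'foo' container while a higher priority job runs
lxc-freeze -n foo
run-high-priority-job    # hypothetical command
lxc-unfreeze -n foo
	</programlisting>
      </para>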
    </refsect2>

    <refsect2>
      <title>Getting information about the container</title>
      <para>When there are a lot of containers, it is hard to follow
	what has been created or destroyed, what is running or which
	pids are running in a specific container. For this reason, the
	following commands give this information:
	<programlisting>
lxc-ls
lxc-ps --name foo
lxc-info -n foo
	</programlisting>
      </para>
      <para>
	<command>lxc-ls</command> lists the containers of the
	system. The command is a script built on top
	of <command>ls</command>, so it accepts the options of the ls
	command, eg:
	<programlisting>
lxc-ls -C1
	</programlisting>
	will display the containers list in one column, or:
	<programlisting>
lxc-ls -l
	</programlisting>
	will display the containers list and their permissions.
      </para>

      <para>
	<command>lxc-ps</command> will display the pids for a specific
	container. Like <command>lxc-ls</command>, <command>lxc-ps</command>
	is built on top of <command>ps</command> and accepts the same
	options, eg:
	<programlisting>lxc-ps --name foo --forest</programlisting>
	will display the process hierarchy for the processes
	belonging to the 'foo' container.

	<programlisting>lxc-ps --lxc</programlisting>
	will display all the containers and their processes.
      </para>

      <para>
	<command>lxc-info</command> gives information about a specific
	container; at present, only the state of the container is
	displayed.
      </para>

      <para>
	Here is an example of how these commands can be combined to
	list all the containers and retrieve their state.
	<programlisting>
for i in $(lxc-ls -1); do
    lxc-info -n $i
done
	</programlisting>

	And displaying all the pids of all the containers:

	<programlisting>
for i in $(lxc-ls -1); do
    lxc-ps -n $i --forest
done
	</programlisting>

      </para>

      <para>
	<command>lxc-netstat</command> displays network information for
	a specific container. This command is built on top of
	the <command>netstat</command> command and will accept its
	options.
      </para>

      <para>
	The following command will display the socket information for
	the container 'foo':
	<programlisting>
lxc-netstat -n foo -tano
	</programlisting>
      </para>

    </refsect2>

    <refsect2>
      <title>Monitoring the containers</title>
      <para>It is sometimes useful to track the states of a container,
	for example to monitor it or just to wait for a specific
	state in a script.
      </para>

      <para>
	The <command>lxc-monitor</command> command will monitor one or
	several containers. The parameter of this command accepts a
	regular expression, for example:
	<programlisting>
lxc-monitor -n "foo|bar"
	</programlisting>
	will monitor the states of containers named 'foo' and 'bar', and:
	<programlisting>
lxc-monitor -n ".*"
	</programlisting>
	will monitor all the containers.
      </para>
      <para>
	For a container 'foo' starting, doing some work and exiting,
	the output will be in the form:
	<programlisting>
'foo' changed state to [STARTING]
'foo' changed state to [RUNNING]
'foo' changed state to [STOPPING]
'foo' changed state to [STOPPED]
	</programlisting>
      </para>
      <para>
	The <command>lxc-wait</command> command will wait for a specific
	state change and exit. This is useful for scripting, to
	synchronize the launch of a container or its termination. The
	parameter is an ORed combination of different states. The
	following example shows how to wait for a container that was
	launched in the background.

	<programlisting>

# launch lxc-wait in background
lxc-wait -n foo -s STOPPED &
LXC_WAIT_PID=$!

# this command goes in background
lxc-execute -n foo mydaemon &

# block until the lxc-wait exits
# and lxc-wait exits when the container
# is STOPPED
wait $LXC_WAIT_PID
echo "'foo' is finished"

	</programlisting>
      </para>
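      <para>
	Since the state argument is an ORed combination, one can, for
	example, wait until the container is either running or already
	stopped (a sketch):
	<programlisting>
lxc-wait -n foo -s 'RUNNING|STOPPED'
	</programlisting>
      </para>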
    </refsect2>

    <refsect2>
      <title>Setting the control group for a container</title>
      <para>The container is tied to the control groups: when a
	container is started, a control group is created and associated
	with it. The control group properties can be read and modified
	while the container is running by using the lxc-cgroup command.
      </para>
      <para>
	The <command>lxc-cgroup</command> command is used to set or get a
	control group subsystem which is associated with a
	container. The subsystem name is handled by the user; the
	command won't do any syntax checking on the subsystem name, and
	if the subsystem name does not exist, the command will fail.
      </para>
      <para>
	<programlisting>
lxc-cgroup -n foo cpuset.cpus
	</programlisting>
	will display the content of this subsystem.
	<programlisting>
lxc-cgroup -n foo cpu.shares 512
	</programlisting>
	will set the subsystem to the specified value.
      </para>
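      <para>
	For example, to restrict the 'foo' container to the first two
	processors (a sketch, assuming the host has at least two CPUs
	and the cpuset subsystem is mounted):
	<programlisting>
lxc-cgroup -n foo cpuset.cpus "0,1"
	</programlisting>
      </para>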
    </refsect2>
  </refsect1>

  <refsect1>
    <title>Bugs</title>
    <para><command>lxc</command> is still in development, so the
      command syntax and the API can change. Version 1.0.0 will be
      the frozen version.</para>
  </refsect1>

  &seealso;

  <refsect1>
    <title>Author</title>
    <para>Daniel Lezcano <email>daniel.lezcano@free.fr</email></para>
  </refsect1>

</refentry>

<!-- Keep this comment at the end of the file
Local variables:
mode: sgml
sgml-omittag:t
sgml-shorttag:t
sgml-minimize-attributes:nil
sgml-always-quote-attributes:t
sgml-indent-step:2
sgml-indent-data:t
sgml-parent-document:nil
sgml-default-dtd-file:nil
sgml-exposed-tags:nil
sgml-local-catalogs:nil
sgml-local-ecat-files:nil
End:
-->