<!--

lxc: linux Container library

(C) Copyright IBM Corp. 2007, 2008

Authors:
Daniel Lezcano <daniel.lezcano at free.fr>

This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.

This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

-->

<!DOCTYPE refentry PUBLIC @docdtd@ [

<!ENTITY seealso SYSTEM "@builddir@/see_also.sgml">
]>
<refentry>

  <docinfo>
    <date>@LXC_GENERATE_DATE@</date>
  </docinfo>

  <refmeta>
    <refentrytitle>lxc</refentrytitle>
    <manvolnum>7</manvolnum>
    <refmiscinfo>
      Version @PACKAGE_VERSION@
    </refmiscinfo>
  </refmeta>

  <refnamediv>
    <refname>lxc</refname>

    <refpurpose>
      linux containers
    </refpurpose>
  </refnamediv>
  <refsect1>
    <title>Quick start</title>
    <para>
      If you are in a hurry and do not want to read this man page:
      here, without warranty, is a command to launch a shell inside a
      container with a predefined configuration template. It may work.
      <command>@BINDIR@/lxc-execute -n foo -f
      @DOCDIR@/examples/lxc-macvlan.conf /bin/bash</command>
    </para>
  </refsect1>

  <refsect1>
    <title>Overview</title>
    <para>
      Container technology is actively being pushed into the
      mainstream Linux kernel. It provides resource management
      through control groups (aka process containers) and resource
      isolation through namespaces.
    </para>

    <para>
      Linux containers, <command>lxc</command>, aims to use these new
      functionalities to provide a userspace container object which
      provides full resource isolation and resource control for an
      application or a system.
    </para>

    <para>
      The first objective of this project is to make life easier for
      the kernel developers involved in the containers project,
      especially those continuing to work on the new
      Checkpoint/Restart features. <command>lxc</command> is small
      enough to easily manage a container with simple command lines,
      and complete enough to be used for other purposes.
    </para>
  </refsect1>

  <refsect1>
    <title>Requirements</title>
    <para>
      <command>lxc</command> relies on a set of functionalities
      provided by the kernel, which need to be active. Depending on
      which functionalities are missing, <command>lxc</command> will
      either work with a restricted set of features or simply fail.
    </para>

    <para>
      The following list gives the kernel features that must be
      enabled for a full-featured container:
    </para>
    <programlisting>
      * General setup
        * Control Group support
          -> Namespace cgroup subsystem
          -> Freezer cgroup subsystem
          -> Cpuset support
          -> Simple CPU accounting cgroup subsystem
          -> Resource counters
            -> Memory resource controllers for Control Groups
        * Group CPU scheduler
          -> Basis for grouping tasks (Control Groups)
      * Namespaces support
        -> UTS namespace
        -> IPC namespace
        -> User namespace
        -> Pid namespace
        -> Network namespace
      * Device Drivers
        * Character devices
          -> Support multiple instances of devpts
        * Network device support
          -> MAC-VLAN support
          -> Virtual ethernet pair device
      * Networking
        * Networking options
          -> 802.1d Ethernet Bridging
      * Security options
        -> File POSIX Capabilities
    </programlisting>

    <para>
      Kernel versions >= 2.6.32, as shipped by the distributions,
      will work with <command>lxc</command>; some functionality may
      be missing, but enough remains for it to be useful.

      The helper script <command>lxc-checkconfig</command> will give
      you information about your kernel configuration.
    </para>

    <para>
      The control group can be mounted anywhere, e.g.:
      <command>mount -t cgroup cgroup /cgroup</command>.

      It is however recommended to use cgmanager, cgroup-lite or
      systemd to mount the cgroup hierarchy under /sys/fs/cgroup.
    </para>

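<para>
  As a concrete sketch of such a check (assuming a Linux host;
  <command>lxc-checkconfig</command> is the authoritative tool, the
  commands below only probe the most basic prerequisites by hand):
</para>

```shell
#!/bin/sh
# Probe the most basic container prerequisites on the host.
# lxc-checkconfig (shipped with lxc) performs the full check; this
# sketch only verifies that the running kernel knows the cgroup
# filesystem type and reports where (or whether) a hierarchy is
# already mounted.

if grep -qw cgroup /proc/filesystems; then
    echo "cgroup filesystem type: available"
else
    echo "cgroup filesystem type: missing"
fi

# List mounted cgroup hierarchies, if any (commonly /sys/fs/cgroup).
grep cgroup /proc/self/mounts || echo "no cgroup hierarchy mounted yet"
```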
  </refsect1>

  <refsect1>
    <title>Functional specification</title>
    <para>
      A container is an object that isolates some resources of the
      host for the application or system running inside it.
    </para>
    <para>
      The application or system is launched inside a container whose
      configuration is either created beforehand or passed as a
      parameter of the start commands.
    </para>

    <para>How to run an application in a container?</para>
    <para>
      Before running an application, you should know which resources
      you want to isolate. The default configuration isolates the
      pids, the sysv ipc and the mount points. If you want to run a
      simple shell inside a container, a basic configuration is
      enough, especially if you want to share the rootfs. If you want
      to run an application like <command>sshd</command>, you should
      provide a new network stack and a new hostname. If you want to
      avoid conflicts with some files,
      e.g. <filename>/var/run/httpd.pid</filename>, you should mount
      an empty directory over <filename>/var/run</filename>. If you
      want to avoid such conflicts in all cases, you can specify a
      rootfs for the container. The rootfs can be a directory tree
      whose contents are bind mounted from the initial rootfs, so you
      can still use your distro but with your own
      <filename>/etc</filename> and <filename>/home</filename>.
    </para>
    <para>
      Here is an example of a directory tree
      for <command>sshd</command>:
      <programlisting>
[root@lxc sshd]$ tree -d rootfs

rootfs
|-- bin
|-- dev
|   |-- pts
|   `-- shm
|       `-- network
|-- etc
|   `-- ssh
|-- lib
|-- proc
|-- root
|-- sbin
|-- sys
|-- usr
`-- var
    |-- empty
    |   `-- sshd
    |-- lib
    |   `-- empty
    |       `-- sshd
    `-- run
        `-- sshd
      </programlisting>

      and the mount points file associated with it:
      <programlisting>
[root@lxc sshd]$ cat fstab

/lib /home/root/sshd/rootfs/lib none ro,bind 0 0
/bin /home/root/sshd/rootfs/bin none ro,bind 0 0
/usr /home/root/sshd/rootfs/usr none ro,bind 0 0
/sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
      </programlisting>
    </para>
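<para>
  The skeleton above can be created by hand; a minimal sketch (the
  paths follow the example, adjust them to your layout; the
  bind-mounted directories only need to exist as empty mount points):
</para>

```shell
#!/bin/sh
# Build the skeleton rootfs shown above under ./rootfs.
# bin, lib, sbin and usr are empty mount points that the fstab file
# bind mounts read-only from the host; the rest hold the container's
# private files (sshd configuration, privilege-separation dirs, ...).
for d in bin dev/pts dev/shm/network etc/ssh lib proc root sbin sys usr \
         var/empty/sshd var/lib/empty/sshd var/run/sshd
do
    mkdir -p "rootfs/$d"
done
```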

    <para>How to run a system in a container?</para>

    <para>Running a system inside a container is paradoxically easier
      than running an application. Why? Because you do not have to
      care about which resources are to be isolated: everything needs
      to be isolated. Other resources are specified as isolated but
      left unconfigured, because the container will set them up
      itself; e.g. the ipv4 address will be set up by the system
      container's init scripts. Here is an example of the mount
      points file:

      <programlisting>
[root@lxc debian]$ cat fstab

/dev /home/root/debian/rootfs/dev none bind 0 0
/dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0
      </programlisting>

      More information can be added to the container to facilitate
      the configuration. For example, the
      <filename>resolv.conf</filename> file belonging to the host can
      be made accessible from the container:

      <programlisting>
/etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0
      </programlisting>
    </para>

    <refsect2>
      <title>Container life cycle</title>
      <para>
        When the container is created, it contains the configuration
        information. When a process is launched, the container moves
        through the starting state into the running state. When the
        last process running inside the container exits, the
        container is stopped.
      </para>
      <para>
        In case of failure while the container is being initialized,
        it passes through the aborting state.
      </para>

      <programlisting>
<![CDATA[
   ---------
  | STOPPED |<---------------
   ---------                 |
       |                     |
     start                   |
       |                     |
       V                     |
   ----------                |
  | STARTING |--error-       |
   ----------        |       |
       |             |       |
       V             V       |
   ---------     ----------  |
  | RUNNING |   | ABORTING | |
   ---------     ----------  |
       |             |       |
  no process         |       |
       |             |       |
       V             |       |
   ----------        |       |
  | STOPPING |<-------       |
   ----------                |
       |                     |
        ---------------------
]]>
      </programlisting>
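<para>
  These transitions can be followed from a script by polling the
  container state. A minimal sketch of such a loop follows; here
  <command>query_state</command> is a stub that steps through the
  normal life cycle so the loop can run standalone, whereas a real
  script would derive the state from the output of
  <command>lxc-info -n foo -s</command> and sleep between polls:
</para>

```shell
#!/bin/sh
# Wait for a container to reach STOPPED by polling its state.
# `query_state` is a stub that walks the normal life cycle; for real
# use, replace its body with a call to `lxc-info -n foo -s` (plus a
# sleep) and parse the reported state.
states="STARTING RUNNING STOPPING STOPPED"

query_state() {
    state=${states%% *}                 # take the first state listed
    case $states in
        *" "*) states=${states#* } ;;   # drop it, keep the rest
    esac
}

query_state
while [ "$state" != STOPPED ]; do
    echo "state: $state"
    query_state
done
echo "container is STOPPED"
```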
    </refsect2>

    <refsect2>
      <title>Configuration</title>
      <para>The container is configured through a configuration file;
        the format of the configuration file is described in
        <citerefentry>
          <refentrytitle><filename>lxc.conf</filename></refentrytitle>
          <manvolnum>5</manvolnum>
        </citerefentry>
      </para>
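<para>
  As an illustration only (the authoritative keys and syntax are in
  lxc.conf(5); the interface, bridge and rootfs names below are
  placeholders), a minimal configuration for a container with one
  veth network interface might look like:
</para>

```
lxc.utsname = foo
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.rootfs = /var/lib/lxc/foo/rootfs
```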
    </refsect2>

    <refsect2>
      <title>Creating / Destroying container
        (persistent container)</title>
      <para>
        A persistent container object can be created via the
        <command>lxc-create</command> command. It takes a container
        name as a parameter, plus an optional configuration file and
        template. The name is used by the different commands to refer
        to this container. The <command>lxc-destroy</command> command
        destroys the container object.
        <programlisting>
          lxc-create -n foo
          lxc-destroy -n foo
        </programlisting>
      </para>
    </refsect2>

    <refsect2>
      <title>Volatile container</title>
      <para>It is not mandatory to create a container object before
        starting it. The container can be started directly with a
        configuration file as a parameter.
      </para>
    </refsect2>

    <refsect2>
      <title>Starting / Stopping container</title>
      <para>When the container has been created, it is ready to run
        an application or system. This is the purpose of the
        <command>lxc-execute</command> and
        <command>lxc-start</command> commands. If the container was
        not created before starting the application, the container
        will use the configuration file passed as a parameter to the
        command; if there is no such parameter either, it will use a
        default isolation. When the application ends, the container
        is stopped as well, but if needed the
        <command>lxc-stop</command> command can be used to kill a
        still-running application.
      </para>

      <para>
        Running an application inside a container is not exactly the
        same thing as running a system. For this reason, there are
        two different commands to run an application in a container:
        <programlisting>
          lxc-execute -n foo [-f config] /bin/bash
          lxc-start -n foo [-f config] [/bin/bash]
        </programlisting>
      </para>

      <para>
        The <command>lxc-execute</command> command runs the specified
        command in the container via an intermediate process,
        <command>lxc-init</command>. After launching the specified
        command, lxc-init waits for it, and for any other reparented
        processes, to end (this supports daemons in the container).
        In other words, in the container
        <command>lxc-init</command> has pid 1 and the first process
        of the application has pid 2.
      </para>

      <para>
        The <command>lxc-start</command> command runs the specified
        command directly in the container; the pid of the first
        process is 1. If no command is specified,
        <command>lxc-start</command> runs the command defined in
        lxc.init_cmd or, if that is not set,
        <filename>/sbin/init</filename>.
      </para>

      <para>
        To summarize, <command>lxc-execute</command> is for running
        an application and <command>lxc-start</command> is better
        suited for running a system.
      </para>

      <para>
        If the application is no longer responding, is inaccessible
        or is not able to finish by itself, a wild
        <command>lxc-stop</command> command will kill all the
        processes in the container without pity.
        <programlisting>
          lxc-stop -n foo
        </programlisting>
      </para>
    </refsect2>

    <refsect2>
      <title>Connect to an available tty</title>
      <para>
        If the container is configured with ttys, it is possible to
        access it through them. It is up to the container to provide
        a set of available ttys to be used by the following command.
        When the tty is lost, it is possible to reconnect to it
        without logging in again.
        <programlisting>
          lxc-console -n foo -t 3
        </programlisting>
      </para>
    </refsect2>

    <refsect2>
      <title>Freeze / Unfreeze container</title>
      <para>
        Sometimes it is useful to stop all the processes belonging to
        a container, e.g. for job scheduling. The commands:
        <programlisting>
          lxc-freeze -n foo
        </programlisting>

        will put all the processes in an uninterruptible state and

        <programlisting>
          lxc-unfreeze -n foo
        </programlisting>

        will resume them.
      </para>

      <para>
        This feature is available if the cgroup freezer is enabled in
        the kernel.
      </para>
    </refsect2>

    <refsect2>
      <title>Getting information about container</title>
      <para>When there are a lot of containers, it is hard to keep
        track of what has been created or destroyed, which containers
        are running, or which pids are running inside a specific
        container. For this reason, the following commands may be
        useful:
        <programlisting>
          lxc-ls
          lxc-info -n foo
        </programlisting>
      </para>
      <para>
        <command>lxc-ls</command> lists the containers of the system.
      </para>

      <para>
        <command>lxc-info</command> gives information about a
        specific container.
      </para>

      <para>
        Here is an example of how the combination of these commands
        allows one to list all the containers and retrieve their
        state.
        <programlisting>
          for i in $(lxc-ls -1); do
            lxc-info -n $i
          done
        </programlisting>
      </para>
    </refsect2>

    <refsect2>
      <title>Monitoring container</title>
      <para>It is sometimes useful to track the state of a container,
        for example to monitor it or just to wait for a specific
        state in a script.
      </para>

      <para>
        The <command>lxc-monitor</command> command monitors one or
        several containers. Its parameter accepts a regular
        expression, for example:
        <programlisting>
          lxc-monitor -n "foo|bar"
        </programlisting>
        will monitor the states of the containers named 'foo' and
        'bar', and:
        <programlisting>
          lxc-monitor -n ".*"
        </programlisting>
        will monitor all the containers.
      </para>
      <para>
        For a container 'foo' starting, doing some work and exiting,
        the output will be in the form:
        <programlisting>
          'foo' changed state to [STARTING]
          'foo' changed state to [RUNNING]
          'foo' changed state to [STOPPING]
          'foo' changed state to [STOPPED]
        </programlisting>
      </para>
      <para>
        The <command>lxc-wait</command> command waits for a specific
        state change and then exits. This is useful in scripts, to
        synchronize the launch or the end of a container. The
        parameter is an ORed combination of different states. The
        following example shows how to wait for a container that went
        into the background.

        <programlisting>
<![CDATA[
# launch lxc-wait in background
lxc-wait -n foo -s STOPPED &
LXC_WAIT_PID=$!

# this command goes in background
lxc-execute -n foo mydaemon &

# block until the lxc-wait exits
# and lxc-wait exits when the container
# is STOPPED
wait $LXC_WAIT_PID
echo "'foo' is finished"
]]>
        </programlisting>
      </para>
    </refsect2>

    <refsect2>
      <title>Setting the control group for container</title>
      <para>The container is tied to the control groups: when a
        container is started, a control group is created and
        associated with it. The control group properties can be read
        and modified while the container is running by using the
        lxc-cgroup command.
      </para>
      <para>
        The <command>lxc-cgroup</command> command is used to get or
        set a control group subsystem value associated with a
        container. The subsystem name is handled by the user; the
        command does not do any syntax checking on it, and if the
        subsystem name does not exist, the command will fail.
      </para>
      <para>
        <programlisting>
          lxc-cgroup -n foo cpuset.cpus
        </programlisting>
        will display the content of this subsystem.
        <programlisting>
          lxc-cgroup -n foo cpu.shares 512
        </programlisting>
        will set the subsystem to the specified value.
      </para>
    </refsect2>
  </refsect1>

  <refsect1>
    <title>Bugs</title>
    <para><command>lxc</command> is still in development, so the
      command syntax and the API can change. Version 1.0.0 will be
      the frozen version.</para>
  </refsect1>

  &seealso;

  <refsect1>
    <title>Author</title>
    <para>Daniel Lezcano <email>daniel.lezcano@free.fr</email></para>
  </refsect1>

</refentry>

<!-- Keep this comment at the end of the file
Local variables:
mode: sgml
sgml-omittag:t
sgml-shorttag:t
sgml-minimize-attributes:nil
sgml-always-quote-attributes:t
sgml-indent-step:2
sgml-indent-data:t
sgml-parent-document:nil
sgml-default-dtd-file:nil
sgml-exposed-tags:nil
sgml-local-catalogs:nil
sgml-local-ecat-files:nil
End: -->