lxc: linux Container library

(C) Copyright IBM Corp. 2007, 2008

Daniel Lezcano <dlezcano at fr.ibm.com>

This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.

This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
<!DOCTYPE refentry PUBLIC "-//Davenport//DTD DocBook V3.0//EN" [

<!ENTITY seealso SYSTEM "@builddir@/see_also.sgml">

<date>@LXC_GENERATE_DATE@</date>

<refentrytitle>lxc</refentrytitle>
<manvolnum>7</manvolnum>

Version @PACKAGE_VERSION@

<refname>lxc</refname>
<title>Quick start</title>

You are in a hurry and you don't want to read this man page? Ok,
without warranty, here is the command to launch a shell inside
a container with a predefined configuration template; it may
just work:

@BINDIR@/lxc-execute -n foo -f @DOCDIR@/examples/lxc-macvlan.conf /bin/bash
<title>Overview</title>

Container technology is actively being pushed into the
mainstream linux kernel. It provides resource management
through the control groups (aka process containers) and
resource isolation through the namespaces.

The linux containers project, <command>lxc</command>, aims to
use these new functionalities to provide a userspace container
object which provides full resource isolation and resource
control for an application or a system.

The first objective of this project is to make life easier for
the kernel developers involved in the containers project,
especially to continue working on the new Checkpoint/Restart
features. <command>lxc</command> is small enough to easily
manage a container with simple command lines and complete enough
to be used for other purposes.
<title>Requirements</title>

<command>lxc</command> relies on a set of functionalities
provided by the kernel which need to be active. Depending on
the missing functionalities, <command>lxc</command> will
either work with a restricted set of features or simply fail.

The following list gives the kernel features that must be
enabled in the kernel to have a full-featured container:
* Control Group support
  -> Namespace cgroup subsystem
  -> Freezer cgroup subsystem
  -> Simple CPU accounting cgroup subsystem
  -> Memory resource controllers for Control Groups
* Group CPU scheduler
  -> Basis for grouping tasks (Control Groups)
* File POSIX Capabilities
A kernel version >= 2.6.27, as shipped with the distros, will
work with <command>lxc</command>; it will have fewer
functionalities, but enough to be interesting.

With kernel 2.6.29, <command>lxc</command> is fully functional.

Before using <command>lxc</command>, your system should be
configured with file capabilities, otherwise you will need
to run the <command>lxc</command> commands as root.
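As an illustrative sketch only, file capabilities can be granted to a
binary with <command>setcap</command> from the libcap tools; the exact
capability set shown here is an assumption, not a documented
requirement of <command>lxc</command>:

```shell
# Illustrative: grant capabilities so lxc-execute does not need root
# (the capability list below is an assumption for this sketch).
setcap cap_sys_admin,cap_net_admin,cap_sys_chroot+ep @BINDIR@/lxc-execute

# Verify which capabilities were set on the file.
getcap @BINDIR@/lxc-execute
```

Note that setting file capabilities itself requires root privileges
and a filesystem with extended attribute support.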
The control group can be mounted anywhere, eg:
<command>mount -t cgroup cgroup /cgroup</command>.

If you want to dedicate a specific cgroup mount point
for <command>lxc</command>, that is, to have different cgroups
mounted at different places with different options but
let <command>lxc</command> use one location, you can bind
the mount point to the <option>lxc</option> name, eg:
<command>mount -t cgroup lxc /cgroup4lxc</command> or
<command>mount -t cgroup -o ns,cpuset,freezer,devices
lxc /cgroup4lxc</command>
<title>Functional specification</title>

A container is an object where the configuration is
persistent. The application will be launched inside this
container and it will use the configuration which was
previously defined.
<para>How to run an application in a container ?</para>

Before running an application, you should know which resources
you want to isolate. The default configuration is to isolate
the pids, the sysv ipc and the mount points. If you want to
run a simple shell inside a container, a basic configuration
is needed, especially if you want to share the rootfs. If you
want to run an application like <command>sshd</command>, you
should provide a new network stack and a new hostname. If you
want to avoid conflicts with some files,
eg. <filename>/var/run/httpd.pid</filename>, you should
remount <filename>/var/run</filename> with an empty
directory. If you want to avoid conflicts in all cases,
you can specify a rootfs for the container. The rootfs can be a
directory tree, previously bind mounted with the initial rootfs,
so you can still use your distro but with your
own <filename>/etc</filename> and <filename>/home</filename>
directories.

Here is an example of directory tree
for <command>sshd</command>:

[root@lxc sshd]$ tree -d rootfs
and the mount points file associated with it:

[root@lxc sshd]$ cat fstab

/lib /home/root/sshd/rootfs/lib none ro,bind 0 0
/bin /home/root/sshd/rootfs/bin none ro,bind 0 0
/usr /home/root/sshd/rootfs/usr none ro,bind 0 0
/sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
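For illustration, a minimal configuration tying such an fstab to a
container might look like the following sketch; the utsname, the file
paths and the tty count are example assumptions, see
<command>lxc.conf</command>(5) for the authoritative key list:

```
# illustrative lxc.conf sketch (names and paths are examples)
lxc.utsname = sshd
lxc.mount = /home/root/sshd/fstab
lxc.rootfs = /home/root/sshd/rootfs
lxc.tty = 1
```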
<para>How to run a system in a container ?</para>

<para>Running a system inside a container is paradoxically easier
than running an application. Why ? Because you don't have to care
about which resources are to be isolated: everything needs to be
isolated. The other resources are specified as being isolated but
without configuration, because the container will set them up,
eg. the ipv4 address will be set up by the system container
init scripts. Here is an example of the mount points file:

[root@lxc debian]$ cat fstab

/dev /home/root/debian/rootfs/dev none bind 0 0
/dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0

More information can be added to the container to facilitate the
configuration. For example, you can make the resolv.conf file
belonging to the host accessible from the container:

/etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0
<title>Container life cycle</title>

When the container is created, it contains the configuration
information. When a process is launched, the container will be
starting and running. When the last process running inside the
container exits, the container is stopped.

In case of failure when the container is initialized, it will
pass through the aborting state.
         ---------
        | STOPPED |<---------------
         ---------                 |
             |                     |
           start                   |
             |                     |
             V                     |
         ----------                |
        | STARTING |--error-       |
         ----------         |      |
             |              |      |
             V              V      |
         ---------    ----------   |
        | RUNNING |  | ABORTING |  |
         ---------    ----------   |
             |              |      |
          no process        |      |
             |              |      |
             V              |      |
         ----------         |      |
        | STOPPING |<-------       |
         ----------                |
             |                     |
              ---------------------
<title>Configuration</title>
<para>The container is configured through a configuration
file; the format of the configuration file is described in
<refentrytitle><filename>lxc.conf</filename></refentrytitle>
<manvolnum>5</manvolnum>
<title>Creating / Destroying the containers</title>

The container is created via the <command>lxc-create</command>
command. It takes a container name as a parameter and an
optional configuration file. The name is used by the different
commands to refer to this
container. The <command>lxc-destroy</command> command will
destroy the container object.
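For instance, a typical create/destroy sequence might look like this
sketch, where 'foo' and the configuration file path are placeholders:

```shell
# create the container object 'foo' from an example configuration file
lxc-create -n foo -f @DOCDIR@/examples/lxc-macvlan.conf

# ... run applications in the container ...

# destroy the container object when it is no longer needed
lxc-destroy -n foo
```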
<title>Starting / Stopping a container</title>
<para>When the container has been created, it is ready to run an
application / system. When the application has to be destroyed,
the container can be stopped; that will kill all the processes
of the container.</para>

Running an application inside a container is not exactly the
same thing as running a system. For this reason, there are two
different commands to run an application in a container:
lxc-execute -n foo [-f config] /bin/bash
lxc-start -n foo [/bin/bash]
The <command>lxc-execute</command> command will run the
specified command in a container, but it will mount /proc
and autocreate/autodestroy the container if it does not
exist. It will furthermore create an intermediate
process, <command>lxc-init</command>, which is in charge of
launching the specified command; that makes it possible to
support daemons in the container. In other words, in the
container <command>lxc-init</command> has pid 1 and the
first process of the application has pid 2.
The <command>lxc-start</command> command will run the specified
command in the container, doing nothing else than using the
configuration specified by <command>lxc-create</command>.
The pid of the first process is 1. If no command is
specified, <command>lxc-start</command> will
run <filename>/sbin/init</filename>.

To summarize, <command>lxc-execute</command> is for running
an application and <command>lxc-start</command> is for
running a system.
If the application is no longer responding, is inaccessible or is
not able to finish by itself, a
wild <command>lxc-stop</command> command will kill all the
processes in the container without pity.
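For example, to forcibly stop the container named 'foo':

```shell
# kill all the processes of the container 'foo'
lxc-stop -n foo
```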
<title>Connect to an available tty</title>

If the container is configured with ttys, it is possible to
access it through them. It is up to the container to provide a
set of available ttys to be used by the following command. When
the tty is lost, it is possible to reconnect to it:

lxc-console -n foo -t 3
<title>Freeze / Unfreeze a container</title>

Sometimes, it is useful to stop all the processes belonging to
a container, eg. for job scheduling. The command:

lxc-freeze -n foo

will put all the processes in an uninterruptible state and

lxc-unfreeze -n foo

will resume all the tasks.
This feature is enabled if the cgroup freezer is enabled in the
kernel.
<title>Getting information about the container</title>
<para>When there are a lot of containers, it is hard to follow
what has been created or destroyed, what is running or which
pids are running in a specific container. For this reason, the
following commands give this information:
<command>lxc-ls</command> lists the containers of the
system. The command is a script built on top
of <command>ls</command>, so it accepts the options of the
<command>ls</command> command, eg:

lxc-ls -1

will display the containers list in one column, or:

lxc-ls -l

will display the containers list and their permissions.
<command>lxc-ps</command> will display the pids for a specific
container. Like <command>lxc-ls</command>, <command>lxc-ps</command>
is built on top of <command>ps</command> and accepts the same
options, eg:

<programlisting>lxc-ps --name foo --forest</programlisting>

will display the process hierarchy for the processes
belonging to the 'foo' container.

<programlisting>lxc-ps --lxc</programlisting>

will display all the containers and their processes.
<command>lxc-info</command> gives information for a specific
container; at present, only the state of the container is
displayed.

Here is an example of how the combination of these commands
allows one to list all the containers and retrieve their state:

for i in $(lxc-ls -1); do
  lxc-info -n $i
done

And displaying all the pids of all the containers:

for i in $(lxc-ls -1); do
  lxc-ps -n $i --forest
done
<command>lxc-netstat</command> displays network information for
a specific container. This command is built on top of
the <command>netstat</command> command and will accept its
options.

The following command will display the socket information for
the container 'foo':

lxc-netstat -n foo -tano
<title>Monitoring the containers</title>
<para>It is sometimes useful to track the states of a container,
for example to monitor it or just to wait for a specific
state change.

The <command>lxc-monitor</command> command will monitor one or
several containers. The parameter of this command accepts a
regular expression, for example:

lxc-monitor -n "foo|bar"

will monitor the states of containers named 'foo' and 'bar', and:

lxc-monitor -n ".*"

will monitor all the containers.
For a container 'foo' starting, doing some work and exiting,
the output will be in the form:

'foo' changed state to [STARTING]
'foo' changed state to [RUNNING]
'foo' changed state to [STOPPING]
'foo' changed state to [STOPPED]
The <command>lxc-wait</command> command will wait for a specific
state change and then exit. This is useful for scripting, to
synchronize the launch or the end of a container. The
parameter is an ORed combination of different states. The
following example shows how to wait for a container to reach
the STOPPED state:

# launch lxc-wait in background
lxc-wait -n foo -s STOPPED &
pid=$!

# this command goes in background
lxc-execute -n foo mydaemon &

# block until the lxc-wait exits
# and lxc-wait exits when the container
# is stopped
wait $pid

echo "'foo' is finished"
<title>Setting the control group for a container</title>
<para>The container is tied to the control groups: when a
container is started, a control group is created and associated
with it. The control group properties can be read and modified
while the container is running by using the lxc-cgroup command.

The <command>lxc-cgroup</command> command is used to get or set a
control group subsystem which is associated with a
container. The subsystem name is handled by the user; the
command won't do any syntax checking on the subsystem name, and
if the subsystem name does not exist, the command will fail.

lxc-cgroup -n foo cpuset.cpus

will display the content of this subsystem.

lxc-cgroup -n foo cpu.shares 512

will set the subsystem to the specified value.
<para><command>lxc</command> is still in development, so the
command syntax and the API may change. Version 1.0.0 will be
the frozen version.</para>
<title>Author</title>
<para>Daniel Lezcano <email>daniel.lezcano@free.fr</email></para>
<!-- Keep this comment at the end of the file Local variables: mode:
sgml sgml-omittag:t sgml-shorttag:t sgml-minimize-attributes:nil
sgml-always-quote-attributes:t sgml-indent-step:2 sgml-indent-data:t
sgml-parent-document:nil sgml-default-dtd-file:nil
sgml-exposed-tags:nil sgml-local-catalogs:nil
sgml-local-ecat-files:nil End: -->