lxc: linux Container library

(C) Copyright IBM Corp. 2007, 2008

Daniel Lezcano <daniel.lezcano at free.fr>

This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.

This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
<!DOCTYPE refentry PUBLIC @docdtd@ [

<!ENTITY seealso SYSTEM "@builddir@/see_also.sgml">

<date>@LXC_GENERATE_DATE@</date>

<refentrytitle>lxc</refentrytitle>
<manvolnum>7</manvolnum>

Version @PACKAGE_VERSION@

<refname>lxc</refname>
<title>Quick start</title>

You are in a hurry, and you don't want to read this man page. Ok,
without warranty, here are the commands to launch a shell inside
a container with a predefined configuration template; it may
work:

<command>@BINDIR@/lxc-execute -n foo -f
@DOCDIR@/examples/lxc-macvlan.conf /bin/bash</command>
<title>Overview</title>

Container technology is actively being pushed into the
mainstream Linux kernel. It provides resource management
through control groups (aka process containers) and resource
isolation through namespaces.

Linux Containers, <command>lxc</command>, aims to use these
new functionalities to provide a userspace container object
which provides full resource isolation and resource control for
an application or a system.

The first objective of this project is to make life easier
for the kernel developers involved in the containers project,
and especially to continue working on the new Checkpoint/Restart
features. <command>lxc</command> is small enough to easily
manage a container with simple command lines and complete enough
to be used for other purposes.
<title>Requirements</title>

<command>lxc</command> relies on a set of functionalities
provided by the kernel which need to be active. Depending on
the missing functionalities, <command>lxc</command> will
work with a restricted set of features or will simply fail.

The following list gives the kernel features to be enabled in
the kernel to have a full-featured container:
  * Control Group support
    -> Namespace cgroup subsystem
    -> Freezer cgroup subsystem
    -> Simple CPU accounting cgroup subsystem
    -> Memory resource controllers for Control Groups
  * Group CPU scheduler
    -> Basis for grouping tasks (Control Groups)
  * Support multiple instances of devpts
  * Network device support
    -> Virtual ethernet pair device
    -> 802.1d Ethernet Bridging
  * File POSIX Capabilities
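For reference, the features above correspond roughly to the following
kernel configuration symbols. This is an illustrative sketch only: the
symbol names are taken from 2.6.32-era kernels and may be renamed or
absent in other kernel versions (e.g. the memory controller later
became CONFIG_MEMCG).

```
# Illustrative .config fragment (2.6.32-era symbol names, may vary)
CONFIG_CGROUPS=y
CONFIG_CGROUP_NS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_MEM_RES_CTLR=y
CONFIG_CGROUP_SCHED=y
CONFIG_DEVPTS_MULTIPLE_INSTANCES=y
CONFIG_VETH=y
CONFIG_BRIDGE=y
CONFIG_SECURITY_FILE_CAPABILITIES=y
```

The <command>lxc-checkconfig</command> script mentioned below checks
these options against the running kernel.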
Kernel versions >= 2.6.32, as shipped with the distros, will
work with <command>lxc</command>; some functionalities may be
missing, but there are enough to be useful.

The helper script <command>lxc-checkconfig</command> will give
you information about your kernel configuration.
The control group can be mounted anywhere, e.g.:
<command>mount -t cgroup cgroup /cgroup</command>.

It is however recommended to use cgmanager, cgroup-lite or systemd
to mount the cgroup hierarchy under /sys/fs/cgroup.
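If you mount the hierarchy manually rather than through one of those
helpers, the mount can be made persistent with an
<filename>/etc/fstab</filename> entry; a minimal sketch:

```
# hypothetical /etc/fstab entry mounting the cgroup hierarchy
cgroup  /sys/fs/cgroup  cgroup  defaults  0 0
```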
<title>Functional specification</title>

A container is an object isolating some resources of the host
for the application or system running inside it.

The application / system will be launched inside a
container specified by a configuration that is either
initially created or passed as a parameter of the starting commands.
<para>How to run an application in a container?</para>

Before running an application, you should know what
resources you want to isolate. The default configuration is to
isolate the pids, the sysv ipc and the mount points. If you want
to run a simple shell inside a container, a basic configuration
is needed, especially if you want to share the rootfs. If you
want to run an application like <command>sshd</command>, you
should provide a new network stack and a new hostname. If you
want to avoid conflicts with some files,
e.g. <filename>/var/run/httpd.pid</filename>, you should
remount <filename>/var/run</filename> with an empty
directory. If you want to avoid the conflicts in all cases,
you can specify a rootfs for the container. The rootfs can be a
directory tree, previously bind mounted with the initial rootfs,
so you can still use your distro but with your
own <filename>/etc</filename> and <filename>/home</filename>.

Here is an example of directory tree
for <command>sshd</command>:
[root@lxc sshd]$ tree -d rootfs
and the mount points file associated with it:

[root@lxc sshd]$ cat fstab

/lib /home/root/sshd/rootfs/lib none ro,bind 0 0
/bin /home/root/sshd/rootfs/bin none ro,bind 0 0
/usr /home/root/sshd/rootfs/usr none ro,bind 0 0
/sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
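A container configuration referencing this tree might look like the
following sketch. The key names use the legacy (pre-2.1) syntax
documented in lxc.conf(5); the paths simply reuse the example above.

```
# sketch of a container configuration for the sshd example
lxc.utsname = sshd
lxc.rootfs  = /home/root/sshd/rootfs
lxc.mount   = /home/root/sshd/fstab
```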
<para>How to run a system in a container?</para>

<para>Running a system inside a container is paradoxically easier
than running an application. Why? Because you don't have to care
about which resources are to be isolated: everything needs to be
isolated. The other resources are specified as being isolated but
without configuration, because the container will set them
up, e.g. the ipv4 address will be set up by the system container's
init scripts. Here is an example of the mount points file:
[root@lxc debian]$ cat fstab

/dev /home/root/debian/rootfs/dev none bind 0 0
/dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0

More information can be added to the container to facilitate the
configuration. For example, you can make the host's
<filename>resolv.conf</filename> file accessible from the container:

/etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0
<title>Container life cycle</title>

When the container is created, it contains the configuration
information. When a process is launched, the container will be
starting and running. When the last process running inside the
container exits, the container is stopped.

In case of a failure when the container is initialized, it will
pass through the aborting state.
         ---------
        | STOPPED |<---------------
         ---------                 |
             |                     |
           start                   |
             |                     |
             V                     |
         ----------                |
        | STARTING |--error-       |
         ----------         |      |
             |              |      |
             V              V      |
         ---------    ----------   |
        | RUNNING |  | ABORTING |  |
         ---------    ----------   |
             |              |      |
        no process          |      |
             |              |      |
             V              |      |
         ----------         |      |
        | STOPPING |<-------       |
         ----------                |
             |                     |
              ---------------------
<title>Configuration</title>
<para>The container is configured through a configuration
file; the format of the configuration file is described in
<refentrytitle><filename>lxc.conf</filename></refentrytitle>
<manvolnum>5</manvolnum>
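As a taste of the format, here is a minimal configuration sketch using
the legacy (pre-2.1) key names; see lxc.conf(5) for the authoritative
list of keys. The bridge name br0 is an assumption about the host setup.

```
# minimal sketch: hostname plus a veth interface bridged on the host
lxc.utsname       = foo
lxc.network.type  = veth
lxc.network.link  = br0
lxc.network.flags = up
```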
<title>Creating / Destroying container
(persistent container)</title>

A persistent container object can be
created via the <command>lxc-create</command>
command. It takes a container name as parameter and
an optional configuration file and template.
The name is used by the different
commands to refer to this
container. The <command>lxc-destroy</command> command will
destroy the container object.
<title>Volatile container</title>
<para>It is not mandatory to create a container object
before starting it.
The container can be directly started with a
configuration file as parameter.
<title>Starting / Stopping container</title>
<para>When the container has been created, it is ready to run an
application / system.
This is the purpose of the <command>lxc-execute</command> and
<command>lxc-start</command> commands.
If the container was not created before
starting the application, the container will use the
configuration file passed as parameter to the command,
and if there is no such parameter either,
it will use a default isolation.
If the application ends, the container will be stopped as well,
but if needed the <command>lxc-stop</command> command can
be used to kill the still-running application.
Running an application inside a container is not exactly the
same thing as running a system. For this reason, there are two
different commands to run an application in a container:

lxc-execute -n foo [-f config] /bin/bash
lxc-start -n foo [-f config] [/bin/bash]
The <command>lxc-execute</command> command will run the
specified command in the container via an intermediate
process, <command>lxc-init</command>.
After launching the specified command, lxc-init
will wait for its end and for all other reparented processes
(to support daemons in the container).
In other words, in the
container, <command>lxc-init</command> has pid 1 and the
first process of the application has pid 2.

The <command>lxc-start</command> command will run the specified
command directly in the container.
The pid of the first process is 1. If no command is
specified, <command>lxc-start</command> will
run the command defined in lxc.init.cmd or, if not set,
<filename>/sbin/init</filename>.
To summarize, <command>lxc-execute</command> is for running
an application and <command>lxc-start</command> is better suited for
running a system.

If the application is no longer responding, is inaccessible or is
not able to finish by itself, a
wild <command>lxc-stop</command> command will kill all the
processes in the container without pity.
<title>Connect to an available tty</title>

If the container is configured with ttys, it is possible
to access it through them. It is up to the container to
provide a set of available ttys to be used by the following
command. When the tty is lost, it is possible to reconnect to it
with the same command:

lxc-console -n foo -t 3
<title>Freeze / Unfreeze container</title>

Sometimes, it is useful to stop all the processes belonging to
a container, e.g. for job scheduling. The commands:

lxc-freeze -n foo

will put all the processes in an uninterruptible state and

lxc-unfreeze -n foo

will resume them.

This feature is enabled if the cgroup freezer is enabled in the
kernel.
<title>Getting information about container</title>
<para>When there are a lot of containers, it is hard to follow
what has been created or destroyed, what is running or what
pids are running in a specific container. For this reason, the
following commands may be useful:

<command>lxc-ls</command> lists the containers of the
system.

<command>lxc-info</command> gives information for a specific
container.

Here is an example of how the combination of these commands
allows one to list all the containers and retrieve their state.

for i in $(lxc-ls -1); do
  lxc-info -n $i
done
<title>Monitoring container</title>
<para>It is sometimes useful to track the states of a container,
for example to monitor it or just to wait for a specific
state.

The <command>lxc-monitor</command> command will monitor one or
several containers. The parameter of this command accepts a
regular expression, for example:

lxc-monitor -n "foo|bar"

will monitor the states of containers named 'foo' and 'bar', and:

lxc-monitor -n ".*"

will monitor all the containers.

For a container 'foo' starting, doing some work and exiting,
the output will be in the form:

'foo' changed state to [STARTING]
'foo' changed state to [RUNNING]
'foo' changed state to [STOPPING]
'foo' changed state to [STOPPED]
The <command>lxc-wait</command> command will wait for a specific
state change and exit. This is useful for scripting to
synchronize the launch of a container or its end. The
parameter is an ORed combination of different states. The
following example shows how to wait for a container to reach
the STOPPED state:

# launch lxc-wait in background
lxc-wait -n foo -s STOPPED &
LXC_WAIT_PID=$!

# this command goes in background
lxc-execute -n foo mydaemon &

# block until the lxc-wait exits
# and lxc-wait exits when the container
# is STOPPED
wait $LXC_WAIT_PID

echo "'foo' is finished"
<title>Setting the control group for container</title>
<para>The container is tied to the control groups: when a
container is started, a control group is created and associated
with it. The control group properties can be read and modified
when the container is running by using the lxc-cgroup command.

The <command>lxc-cgroup</command> command is used to set or get a
control group subsystem which is associated with a
container. The subsystem name is handled by the user; the
command won't do any syntax checking on the subsystem name. If
the subsystem name does not exist, the command will fail.

lxc-cgroup -n foo cpuset.cpus

will display the content of this subsystem.

lxc-cgroup -n foo cpu.shares 512

will set the subsystem to the specified value.
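The same properties can also be set persistently in the container
configuration file using the lxc.cgroup.* prefix described in
lxc.conf(5), so they are applied each time the container starts; the
cpu set 0,1 below is just an illustrative value.

```
# equivalent persistent settings in the container configuration file
lxc.cgroup.cpu.shares   = 512
lxc.cgroup.cpuset.cpus  = 0,1
```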
<para><command>lxc</command> is still in development, so the
command syntax and the API can change. Version 1.0.0 will be
the frozen version.</para>
<title>Author</title>
<para>Daniel Lezcano <email>daniel.lezcano@free.fr</email></para>
<!-- Keep this comment at the end of the file Local variables: mode:
sgml sgml-omittag:t sgml-shorttag:t sgml-minimize-attributes:nil
sgml-always-quote-attributes:t sgml-indent-step:2 sgml-indent-data:t
sgml-parent-document:nil sgml-default-dtd-file:nil
sgml-exposed-tags:nil sgml-local-catalogs:nil
sgml-local-ecat-files:nil End: -->