-<!--
+<!--
lxc: linux Container library
(C) Copyright IBM Corp. 2007, 2008
Authors:
-Daniel Lezcano <dlezcano at fr.ibm.com>
+Daniel Lezcano <daniel.lezcano at free.fr>
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
-Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
-->
-<!DOCTYPE refentry PUBLIC "-//Davenport//DTD DocBook V3.0//EN">
+<!DOCTYPE refentry PUBLIC @docdtd@ [
+
+<!ENTITY seealso SYSTEM "@builddir@/see_also.sgml">
+]>
<refentry>
<refentrytitle>lxc</refentrytitle>
<manvolnum>7</manvolnum>
<refmiscinfo>
- Version @LXC_MAJOR_VERSION@.@LXC_MINOR_VERSION@.@LXC_MICRO_VERSION@
+ Version @PACKAGE_VERSION@
</refmiscinfo>
</refmeta>
</refpurpose>
</refnamediv>
- <refsect1>
- <title>Quick start</title>
- <para>
- You are in a hurry, and you don't want to read this man page. Ok,
- without warranty, here are the commands to launch a shell inside
- a container with a predefined configuration template, it may
- work.
- <command>
- @BINDIR@/lxc-execute -n foo -f @SYSCONFDIR@/lxc/lxc-macvlan.conf /bin/bash
- </command>
- </para>
- </refsect1>
-
<refsect1>
<title>Overview</title>
<para>
- The container technology is actively being pushed into the
- mainstream linux kernel. It provides the resource management
- through the control groups aka process containers and resource
- isolation through the namespaces.
+ The container technology is actively being pushed into the mainstream
+ Linux kernel. It provides resource management through control groups and
+ resource isolation via namespaces.
</para>
<para>
- The linux containers, <command>lxc</command>, aims to use these
- new functionalities to provide an userspace container object
- which provides full resource isolation and resource control for
- an applications or a system.
+ <command>lxc</command> aims to use these new functionalities to provide a
+ userspace container object which provides full resource isolation and
+ resource control for an application or a full system.
</para>
<para>
- The first objective of this project is to make the life easier
- for the kernel developers involved in the containers project and
- especially to continue working on the Checkpoint/Restart new
- features. The <command>lxc</command> is small enough to easily
- manage a container with simple command lines and complete enough
- to be used for other purposes.
+ <command>lxc</command> is small enough to easily manage a container with
+ simple command lines and complete enough to be used for other purposes.
</para>
</refsect1>
<refsect1>
<title>Requirements</title>
<para>
- The <command>lxc</command> relies on a set of functionalies
- provided by the kernel which needs to be active. Depending of
- the missing functionalities the <command>lxc</command> will
- work with a restricted number of functionalities or will simply
- fails.
+ A kernel version >= 3.10, as shipped with the distros, will work with
+ <command>lxc</command>; it may provide fewer functionalities, but enough
+ to be useful.
</para>
-
+
<para>
- The following list gives the kernel features to be enabled in
- the kernel to have the full features container:
+ <command>lxc</command> relies on a set of functionalities provided by the
+ kernel. The helper script <command>lxc-checkconfig</command> will report
+ information about your kernel configuration, including the required and
+ missing features.
</para>
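+    <para>
+      For example, a quick sanity check of the running kernel can be done
+      with the following command; the exact output depends on your kernel
+      configuration:
+    </para>
+    <programlisting>
+      lxc-checkconfig
+    </programlisting>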
- <programlisting>
- * General
- * Control Group support
- -> namespace cgroup subsystem
- -> cpuset support
- -> Group CPU scheduler
- -> control group freeze subsystem
- -> Basis for grouping tasks (Control Groups)
- -> Simple CPU accounting
- -> Resource counters
- -> Memory resource controllers for Control Groups
- -> Namespace support
- -> UTS namespace
- -> IPC namespace
- -> User namespace
- -> Pid namespace
- * Network support
- -> Networking options
- -> Network namespace support
- </programlisting>
-
- <para>
- For the moment the easiest way to have all the features in the
- kernel is to use the git tree at:
- <systemitem>
- git://git.kernel.org/pub/scm/linux/kernel/git/daveh/linux-2.6-lxc.git
- </systemitem>
-
- But the kernel version >= 2.6.27 shipped with the distros, may
- work with <command>lxc</command>, this one will have less
- functionalities but enough to be interesting.
-
- The planned kernel version which <command>lxc</command> should
- be fully functionaly is 2.6.29.
- </para>
-
- <para>
- Before using the <command>lxc</command>, your system should be
- configured with the file capabilities, otherwise you will need
- to run the <command>lxc</command> commands as root. The
- control group should be mounted anywhere, eg:
- <command>mount -t cgroup cgroup /cgroup</command>
- </para>
</refsect1>
<refsect1>
<title>Functional specification</title>
<para>
- A container is an object where the configuration is
- persistent. The application will be launched inside this
- container and it will use the configuration which was previously
- created.
+ A container is an object isolating some resources of the host for the
+ application or system running in it.
+ </para>
+ <para>
+ The application / system will be launched inside a container defined by a
+ configuration that is either created beforehand or passed as a parameter
+ to the commands.
</para>
- <para>How to run an application in a container ?</para>
+ <para>How to run an application in a container</para>
<para>
- Before running an application, you should know what are the
- resources you want to isolate. The default configuration is to
- isolate the pids, the sysv ipc and the mount points. If you want
- to run a simple shell inside a container, a basic configuration
- is needed, especially if you want to share the rootfs. If you
- want to run an application like <command>sshd</command>, you
- should provide a new network stack and a new hostname. If you
- want to avoid conflicts with some files
- eg. <filename>/var/run/httpd.pid</filename>, you should
- remount <filename>/var/run</filename> with an empty
- directory. If you want to avoid the conflicts in all the cases,
- you can specify a rootfs for the container. The rootfs can be a
- directory tree, previously bind mounted with the initial rootfs,
- so you can still use your distro but with your
+ Before running an application, you should know which resources you want
+ to isolate. The default configuration is to isolate PIDs, the SysV IPC
+ and mount points. If you want to run a simple shell inside a
+ container, a basic configuration is needed, especially if you want to
+ share the rootfs. If you want to run an application like
+ <command>sshd</command>, you should provide a new network stack and a new
+ hostname. If you want to avoid conflicts with some files, e.g.
+ <filename>/var/run/httpd.pid</filename>, you should remount
+ <filename>/var/run</filename> with an empty directory. If you want to
+ avoid conflicts in all cases, you can specify a rootfs for the
+ container. The rootfs can be a directory tree, previously bind mounted
+ with the initial rootfs, so you can still use your distro but with your
own <filename>/etc</filename> and <filename>/home</filename>
</para>
<para>
/sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
</programlisting>
</para>
-
- <para>How to run a system in a container ?</para>
-
- <para>Running a system inside a container is paradoxically easier
- than running an application. Why ? Because you don't have to care
- about the resources to be isolated, everything need to be isolated
- except <filename>/dev</filename> which needs to be remounted in
- the container rootfs, the other resources are specified as being
- isolated but without configuration because the container will set
- them up. eg. the ipv4 address will be setup by the system
- container init scripts. Here is an example of the mount points
- file:
-
+
+ <para>How to run a system in a container</para>
+
+ <para>
+ Running a system inside a container is paradoxically easier
+ than running an application. Why? Because you don't have to care
+ about which resources need to be isolated: everything does. The
+ resources are specified as being isolated but without configuration,
+ because the container will set them up itself; e.g. the IPv4 address
+ will be set up by the system container's init scripts. Here is an
+ example of the mount points file:
+ </para>
+
<programlisting>
[root@lxc debian]$ cat fstab
/dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0
</programlisting>
- More information can be added to the container to facilitate the
- configuration. For example, make accessible from the container
- the resolv.conf file belonging to the host.
-
- <programlisting>
- /etc/resolv.conf /home/root/debian/rootfs/etc/resolv.conf none bind 0 0
- </programlisting>
- </para>
-
<refsect2>
<title>Container life cycle</title>
<para>
When the container is created, it contains the configuration
- information. When a process is launched, the container will be
- starting and running. When the last process running inside the
- container exits, the container is stopped.
+ information. When a process is launched, the container will be starting
+ and running. When the last process running inside the container exits,
+ the container is stopped.
</para>
<para>
- In case of failure when the container is initialized, it will
- pass through the aborting state.
+ In case of failure when the container is initialized, it will pass
+ through the aborting state.
</para>
<programlisting>
-
+<![CDATA[
---------
| STOPPED |<---------------
--------- |
---------- |
| |
---------------------
-
+]]>
</programlisting>
</refsect2>
<refsect2>
<title>Configuration</title>
<para>The container is configured through a configuration
- file, the format of the configuration file is described in
+ file; the format of the configuration file is described in
<citerefentry>
<refentrytitle><filename>lxc.conf</filename></refentrytitle>
<manvolnum>5</manvolnum>
</refsect2>
<refsect2>
- <title>Creating / Destroying the containers</title>
+ <title>Creating / Destroying containers</title>
<para>
- The container is created via the <command>lxc-create</command>
- command. It takes a container name as parameter and an
- optional configuration file. The name is used by the different
- commands to refer to this
- container. The <command>lxc-destroy</command> command will
- destroy the container object.
+ A persistent container object can be created via the
+ <command>lxc-create</command> command. It takes a container name as
+ parameter and an optional configuration file and template. The name is
+ used by the different commands to refer to this container. The
+ <command>lxc-destroy</command> command will destroy the container
+ object.
<programlisting>
lxc-create -n foo
lxc-destroy -n foo
</refsect2>
<refsect2>
- <title>Starting / Stopping a container</title>
- <para>When the container has been created, it is ready to run an
- application / system. When the application has to be destroyed
- the container can be stopped, that will kill all the processes
- of the container.</para>
-
+ <title>Volatile container</title>
+ <para>
+ It is not mandatory to create a container object before starting it.
+ The container can be directly started with a configuration file as
+ parameter.
+ </para>
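+      <para>
+        For example, assuming a configuration file at the hypothetical path
+        <filename>/tmp/foo.conf</filename>, a volatile container can be
+        started directly:
+      </para>
+      <programlisting>
+        lxc-execute -n foo -f /tmp/foo.conf /bin/bash
+      </programlisting>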
+ </refsect2>
+
+ <refsect2>
+    <title>Starting / Stopping a container</title>
<para>
- Running an application inside a container is not exactly the
- same thing as running a system. For this reason, there is two
- commands to run an application into a container:
+ When the container has been created, it is ready to run an application /
+ system. This is the purpose of the <command>lxc-execute</command> and
+ <command>lxc-start</command> commands. If the container was not created
+ before starting the application, the container will use the
+ configuration file passed as parameter to the command, and if there is
+ no such parameter either, it will use a default isolation. When the
+ application ends, the container is stopped, but if needed the
+ <command>lxc-stop</command> command can be used to stop the container.
+ </para>
+
+ <para>
+ Running an application inside a container is not exactly the same thing
+ as running a system. For this reason, there are two different commands
+ to run an application in a container:
<programlisting>
lxc-execute -n foo [-f config] /bin/bash
- lxc-start -n foo [/bin/bash]
+ lxc-start -n foo [-f config] [/bin/bash]
</programlisting>
</para>
<para>
- <command>lxc-execute</command> command will run the
- specified command into a container but it will mount /proc
- and autocreate/autodestroy the container if it does not
- exist. It will furthermore create an intermediate
- process, <command>lxc-init</command>, which is in charge to
- launch the specified command, that allows to support daemons
- in the container. In other words, in the
- container <command>lxc-init</command> has the pid 1 and the
- first process of the application has the pid 2.
+ The <command>lxc-execute</command> command will run the specified command
+ in a container via an intermediate process,
+ <command>lxc-init</command>. After launching the specified command,
+ <command>lxc-init</command> will wait for it to end, as well as for all
+ other reparented processes (this allows daemons to be supported in the
+ container). In other words, in the container,
+ <command>lxc-init</command> has PID 1 and the first process of the
+ application has PID 2.
</para>
<para>
- <command>lxc-start</command> command will run the specified
- command into the container doing nothing else than using the
- configuration specified by <command>lxc-create</command>.
- The pid of the first process is 1. If no command is
- specified <command>lxc-start</command> will
- run <filename>/sbin/init</filename>.
+ The <command>lxc-start</command> command will directly run the specified
+ command in the container. The PID of the first process is 1. If no
+ command is specified, <command>lxc-start</command> will run the command
+ defined in lxc.init.cmd or, if not set, <filename>/sbin/init</filename>.
</para>
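+      <para>
+        For example, the init command can be overridden from the container
+        configuration file (a minimal sketch; the path is hypothetical):
+      </para>
+      <programlisting>
+        lxc.init.cmd = /sbin/my-init
+      </programlisting>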
<para>
- To summarize, <command>lxc-execute</command> is for running
- an application and <command>lxc-start</command> is for
+ To summarize, <command>lxc-execute</command> is for running an
+ application and <command>lxc-start</command> is better suited for
running a system.
</para>
<para>
- If the application is no longer responding, inaccessible or is
- not able to finish by itself, a
- wild <command>lxc-stop</command> command will kill all the
- processes in the container without pity.
+ If the application is no longer responding, is inaccessible or is not
+ able to finish by itself, a wild <command>lxc-stop</command> command
+ will kill all the processes in the container without pity.
+ <programlisting>
+ lxc-stop -n foo -k
+ </programlisting>
+ </para>
+ </refsect2>
+
+ <refsect2>
+ <title>Connect to an available tty</title>
+ <para>
+ If the container is configured with ttys, it is possible to access it
+ through them. It is up to the container to provide a set of available
+ ttys to be used by the following command. When the tty is lost, it is
+ possible to reconnect to it without logging in again.
<programlisting>
- lxc-stop -n foo
+ lxc-console -n foo -t 3
</programlisting>
</para>
</refsect2>
<refsect2>
- <title>Freeze / Unfreeze a container</title>
+    <title>Freeze / Unfreeze a container</title>
<para>
Sometime, it is useful to stop all the processes belonging to
a container, eg. for job scheduling. The commands:
lxc-freeze -n foo
</programlisting>
- will put all the processes in an uninteruptible state and
+ will put all the processes in an uninterruptible state and
<programlisting>
lxc-unfreeze -n foo
</programlisting>
- will resume all the tasks.
+ will resume them.
</para>
<para>
- This feature is enabled if the cgroup freezer is enabled in the
- kernel.
+ This feature is enabled if the freezer cgroup v1 controller is enabled
+ in the kernel.
</para>
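+      <para>
+        Whether the freezer controller is available can be checked on the
+        host, for example with:
+      </para>
+      <programlisting>
+        grep freezer /proc/cgroups
+      </programlisting>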
</refsect2>
<refsect2>
- <title>Getting information about the container</title>
- <para>When there are a lot of containers, it is hard to follow
- what has been created or destroyed, what is running or what are
- the pids running into a specific container. For this reason, the
- following commands give this information:
- <programlisting>
- lxc-ls
- lxc-ps -n foo
- lxc-info -n foo
- </programlisting>
- </para>
+    <title>Getting information about a container</title>
<para>
- <command>lxc-ls</command> lists the containers of the
- system. The command is a script built on top
- of <command>ls</command>, so it accepts the options of the ls
- commands, eg:
+ When there are a lot of containers, it is hard to follow what has been
+ created or destroyed, what is running or what are the PIDs running in a
+ specific container. For this reason, the following commands may be useful:
<programlisting>
- lxc-ls -C1
- </programlisting>
- will display the containers list in one column or:
- <programlisting>
- lxc-ls -l
+ lxc-ls -f
+ lxc-info -n foo
</programlisting>
- will display the containers list and their permissions.
</para>
-
<para>
- <command>lxc-ps</command> will display the pids for a specific
- container. Like <command>lxc-ls</command>, <command>lxc-ps</command>
- is built on top of <command>ps</command> and accepts the same
- options, eg:
- <programlisting>
- lxc-ps -n foo --forest
- </programlisting>
-
- will display the process hierarchy for the container 'foo'.
+ <command>lxc-ls</command> lists containers.
</para>
<para>
- <command>lxc-info</command> gives informations for a specific
- container, at present time, only the state of the container is
- displayed.
+ <command>lxc-info</command> gives information for a specific container.
</para>
<para>
Here is an example on how the combination of these commands
- allow to list all the containers and retrieve their state.
+ allows one to list all the containers and retrieve their state.
<programlisting>
for i in $(lxc-ls -1); do
lxc-info -n $i
done
</programlisting>
-
- And displaying all the pids of all the containers:
-
- <programlisting>
- for i in $(lxc-ls -1); do
- lxc-ps -n $i --forest
- done
- </programlisting>
-
</para>
-
- <para>
- <command>lxc-netstat</command> display network information for
- a specific container. This command is built on top of
- the <command>netstat</command> command and will accept its
- options
- </para>
-
- <para>
- The following command will display the socket informations for
- the container 'foo'.
- <programlisting>
- lxc-netstat -n foo -tano
- </programlisting>
- </para>
-
</refsect2>
<refsect2>
- <title>Monitoring the containers</title>
- <para>It is sometime useful to track the states of a container,
- for example to monitor it or just to wait for a specific
- state in a script.
+    <title>Monitoring containers</title>
+ <para>
+ It is sometimes useful to track the states of a container, for example to
+ monitor it or just to wait for a specific state in a script.
</para>
<para>
- <command>lxc-monitor</command> command will monitor one or
- several containers. The parameter of this command accept a
- regular expression for example:
+ The <command>lxc-monitor</command> command will monitor one or several
+ containers. The parameter of this command accepts a regular expression,
+ for example:
<programlisting>
lxc-monitor -n "foo|bar"
</programlisting>
state change and exit. This is useful for scripting to
synchronize the launch of a container or the end. The
parameter is an ORed combination of different states. The
- following example shows how to wait for a container if he went
- to the background.
+ following example shows how to wait for a container, started as a daemon,
+ to finish.
<programlisting>
-
+<![CDATA[
# launch lxc-wait in background
lxc-wait -n foo -s STOPPED &
LXC_WAIT_PID=$!
# is STOPPED
wait $LXC_WAIT_PID
echo "'foo' is finished"
-
+]]>
</programlisting>
</para>
</refsect2>
<refsect2>
- <title>Setting the control group for a container</title>
- <para>The container is tied with the control groups, when a
- container is started a control group is created and associated
- with it. The control group properties can be read and modified
- when the container is running by using the lxc-cgroup command.
+ <title>cgroup settings for containers</title>
+ <para>
+ The container is tied to control groups: when a container is started, a
+ control group is created and associated with it. The control group
+ properties can be read and modified while the container is running by
+ using the <command>lxc-cgroup</command> command.
</para>
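+      <para>
+        For example, assuming a running container named foo and the cgroup
+        v1 cpuset controller, a property can first be read and then
+        modified:
+      </para>
+      <programlisting>
+        lxc-cgroup -n foo cpuset.cpus
+        lxc-cgroup -n foo cpuset.cpus 0,3
+      </programlisting>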
<para>
<command>lxc-cgroup</command> command is used to set or get a
</refsect2>
</refsect1>
- <refsect1>
- <title>Bugs</title>
- <para>The <command>lxc</command> is still in development, so the
- command syntax and the API can change. The version 1.0.0 will be
- the frozen version.</para>
- </refsect1>
-
- <refsect1>
- <title>See Also</title>
- <simpara>
- <citerefentry>
- <refentrytitle><command>lxc-create</command></refentrytitle>
- <manvolnum>1</manvolnum>
- </citerefentry>,
-
- <citerefentry>
- <refentrytitle><command>lxc-destroy</command></refentrytitle>
- <manvolnum>1</manvolnum>
- </citerefentry>,
-
- <citerefentry>
- <refentrytitle><command>lxc-start</command></refentrytitle>
- <manvolnum>1</manvolnum>
- </citerefentry>,
-
- <citerefentry>
- <refentrytitle><command>lxc-execute</command></refentrytitle>
- <manvolnum>1</manvolnum>
- </citerefentry>,
-
- <citerefentry>
- <refentrytitle><command>lxc-stop</command></refentrytitle>
- <manvolnum>1</manvolnum>
- </citerefentry>,
-
- <citerefentry>
- <refentrytitle><command>lxc-monitor</command></refentrytitle>
- <manvolnum>1</manvolnum>
- </citerefentry>,
-
- <citerefentry>
- <refentrytitle><command>lxc-wait</command></refentrytitle>
- <manvolnum>1</manvolnum>
- </citerefentry>,
-
- <citerefentry>
- <refentrytitle><command>lxc-cgroup</command></refentrytitle>
- <manvolnum>1</manvolnum>
- </citerefentry>,
-
- <citerefentry>
- <refentrytitle><command>lxc-ls</command></refentrytitle>
- <manvolnum>1</manvolnum>
- </citerefentry>,
-
- <citerefentry>
- <refentrytitle><command>lxc-ps</command></refentrytitle>
- <manvolnum>1</manvolnum>
- </citerefentry>,
-
- <citerefentry>
- <refentrytitle><command>lxc-info</command></refentrytitle>
- <manvolnum>1</manvolnum>
- </citerefentry>,
-
- <citerefentry>
- <refentrytitle><command>lxc-freeze</command></refentrytitle>
- <manvolnum>1</manvolnum>
- </citerefentry>,
-
- <citerefentry>
- <refentrytitle><command>lxc-unfreeze</command></refentrytitle>
- <manvolnum>1</manvolnum>
- </citerefentry>,
-
- <citerefentry>
- <refentrytitle><command>lxc.conf</command></refentrytitle>
- <manvolnum>5</manvolnum>
- </citerefentry>,
-
- </simpara>
- </refsect1>
+ &seealso;
<refsect1>
<title>Author</title>
<para>Daniel Lezcano <email>daniel.lezcano@free.fr</email></para>
+ <para>Christian Brauner <email>christian.brauner@ubuntu.com</email></para>
+ <para>Serge Hallyn <email>serge@hallyn.com</email></para>
+ <para>Stéphane Graber <email>stgraber@ubuntu.com</email></para>
</refsect1>
</refentry>