<!--

lxc: linux Container library

(C) Copyright IBM Corp. 2007, 2008

Authors:
Daniel Lezcano <daniel.lezcano at free.fr>

This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.

This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

-->

<!DOCTYPE refentry PUBLIC @docdtd@ [

<!ENTITY seealso SYSTEM "@builddir@/see_also.sgml">
]>
<refentry>

  <docinfo>
    <date>@LXC_GENERATE_DATE@</date>
  </docinfo>

  <refmeta>
    <refentrytitle>lxc</refentrytitle>
    <manvolnum>7</manvolnum>
    <refmiscinfo>
      Version @PACKAGE_VERSION@
    </refmiscinfo>
  </refmeta>

  <refnamediv>
    <refname>lxc</refname>

    <refpurpose>
      linux containers
    </refpurpose>
  </refnamediv>
  <refsect1>
    <title>Overview</title>
    <para>
      Container technology is actively being pushed into the mainstream
      Linux kernel. It provides resource management through control groups
      and resource isolation via namespaces.
    </para>

    <para>
      <command>lxc</command> aims to use these kernel features to provide a
      userspace container object offering full resource isolation and
      resource control for an application or a full system.
    </para>

    <para>
      <command>lxc</command> is small enough to easily manage a container
      with simple command lines and complete enough to be used for other
      purposes.
    </para>
  </refsect1>

  <refsect1>
    <title>Requirements</title>
    <para>
      Any kernel of version 3.10 or later, as shipped by the distros, will
      work with <command>lxc</command>; older kernels may offer fewer
      features, but still enough to be useful.
    </para>

    <para>
      <command>lxc</command> relies on a set of features provided by the
      kernel. The helper script <command>lxc-checkconfig</command> will
      report which of the required and optional kernel features are present
      or missing in your configuration.
    </para>
  </refsect1>

  <refsect1>
    <title>Functional specification</title>
    <para>
      A container is an object isolating some resources of the host for
      the application or system running inside it.
    </para>
    <para>
      The application / system is launched inside a container specified by
      a configuration that is either created beforehand or passed as a
      parameter of the commands.
    </para>

    <para>How to run an application in a container</para>
    <para>
      Before running an application, you should decide which resources you
      want to isolate. The default configuration is to isolate PIDs, the
      System V IPC and mount points. If you want to run a simple shell
      inside a container, a basic configuration is needed, especially if
      you want to share the rootfs. If you want to run an application like
      <command>sshd</command>, you should provide a new network stack and a
      new hostname. If you want to avoid conflicts with some files,
      e.g. <filename>/var/run/httpd.pid</filename>, you should remount
      <filename>/var/run</filename> with an empty directory. If you want to
      avoid such conflicts in all cases, you can specify a rootfs for the
      container. The rootfs can be a directory tree whose subdirectories
      have previously been bind mounted from the initial rootfs, so you can
      still use your distro but with your own <filename>/etc</filename>
      and <filename>/home</filename>.
    </para>
    <para>
      Here is an example of a directory tree
      for <command>sshd</command>:
      <programlisting>
[root@lxc sshd]$ tree -d rootfs

rootfs
|-- bin
|-- dev
|   |-- pts
|   `-- shm
|       `-- network
|-- etc
|   `-- ssh
|-- lib
|-- proc
|-- root
|-- sbin
|-- sys
|-- usr
`-- var
    |-- empty
    |   `-- sshd
    |-- lib
    |   `-- empty
    |       `-- sshd
    `-- run
        `-- sshd
      </programlisting>

      and the mount points file associated with it:
      <programlisting>
[root@lxc sshd]$ cat fstab

/lib /home/root/sshd/rootfs/lib none ro,bind 0 0
/bin /home/root/sshd/rootfs/bin none ro,bind 0 0
/usr /home/root/sshd/rootfs/usr none ro,bind 0 0
/sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
      </programlisting>
    </para>
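    <para>
      As an illustration, a minimal configuration file tying this rootfs
      and fstab together could look like the following sketch. The paths
      and network values here are assumptions made for this example; the
      exact configuration keys are described in
      <citerefentry>
        <refentrytitle><filename>lxc.conf</filename></refentrytitle>
        <manvolnum>5</manvolnum>
      </citerefentry>.
      <programlisting>
# hypothetical sshd container configuration (values are examples)
lxc.uts.name = sshd
lxc.rootfs.path = /home/root/sshd/rootfs
lxc.mount.fstab = /home/root/sshd/fstab
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.ipv4.address = 10.0.3.10/24
      </programlisting>
    </para>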

    <para>How to run a system in a container</para>

    <para>
      Running a system inside a container is paradoxically easier than
      running an application. Why? Because you don't have to choose which
      resources to isolate: everything needs to be isolated. The remaining
      resources are simply declared as isolated but left unconfigured,
      because the container will set them up itself; e.g. the ipv4 address
      will be set up by the system container's init scripts. Here is an
      example of the mount points file:
    </para>

    <programlisting>
[root@lxc debian]$ cat fstab

/dev /home/root/debian/rootfs/dev none bind 0 0
/dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0
    </programlisting>

    <refsect2>
      <title>Container life cycle</title>
      <para>
        When the container is created, it holds the configuration
        information. When a process is launched, the container goes
        through the starting and running states. When the last process
        running inside the container exits, the container is stopped.
      </para>
      <para>
        If initialization fails, the container passes through the
        aborting state.
      </para>

      <programlisting>
<![CDATA[
   ---------
  | STOPPED |<---------------
   ---------                 |
       |                     |
     start                   |
       |                     |
       V                     |
   ----------                |
  | STARTING |--error-       |
   ----------        |       |
       |             |       |
       V             V       |
   ---------    ----------   |
  | RUNNING |  | ABORTING |  |
   ---------    ----------   |
       |             |       |
  no process         |       |
       |             |       |
       V             |       |
   ----------        |       |
  | STOPPING |<------        |
   ----------                |
       |                     |
       -----------------------
]]>
      </programlisting>
    </refsect2>

    <refsect2>
      <title>Configuration</title>
      <para>The container is configured through a configuration file;
      the format of the configuration file is described in
      <citerefentry>
        <refentrytitle><filename>lxc.conf</filename></refentrytitle>
        <manvolnum>5</manvolnum>
      </citerefentry>.
      </para>
    </refsect2>
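      <para>
        As a sketch of the format, a configuration file holds one
        "key = value" pair per line; the keys and values below are
        illustrative assumptions, not a complete configuration:
        <programlisting>
# '#' starts a comment
lxc.uts.name = foo
lxc.net.0.type = empty
        </programlisting>
      </para>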

    <refsect2>
      <title>Creating / Destroying containers</title>
      <para>
        A persistent container object can be created via the
        <command>lxc-create</command> command. It takes a container name
        as parameter, and optionally a configuration file and a template.
        The name is used by the different commands to refer to this
        container. The <command>lxc-destroy</command> command will destroy
        the container object.
        <programlisting>
lxc-create -n foo
lxc-destroy -n foo
        </programlisting>
      </para>
    </refsect2>

    <refsect2>
      <title>Volatile container</title>
      <para>
        It is not mandatory to create a container object before starting
        it. The container can be directly started with a configuration
        file as parameter.
      </para>
    </refsect2>
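      <para>
        For instance, assuming a configuration file
        <filename>foo.conf</filename> has been written (the file name is
        an assumption for this example), a one-off container can be
        started without creating it first:
        <programlisting>
lxc-execute -n foo -f foo.conf /bin/bash
        </programlisting>
      </para>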

    <refsect2>
      <title>Starting / Stopping container</title>
      <para>
        When the container has been created, it is ready to run an
        application / system. This is the purpose of the
        <command>lxc-execute</command> and <command>lxc-start</command>
        commands. If the container was not created before starting the
        application, the container will use the configuration file passed
        as parameter to the command, and if there is no such parameter
        either, it will use a default isolation. When the application
        ends, the container is stopped, but if needed the
        <command>lxc-stop</command> command can be used to stop the
        container.
      </para>

      <para>
        Running an application inside a container is not exactly the same
        thing as running a system. For this reason, there are two
        different commands to run an application in a container:
        <programlisting>
lxc-execute -n foo [-f config] /bin/bash
lxc-start -n foo [-f config] [/bin/bash]
        </programlisting>
      </para>

      <para>
        The <command>lxc-execute</command> command will run the specified
        command in a container via an intermediate process,
        <command>lxc-init</command>. After launching the specified
        command, lxc-init waits for it and for any other processes
        reparented to it to exit (this supports daemons running in the
        container). In other words, in the container,
        <command>lxc-init</command> has PID 1 and the first process of
        the application has PID 2.
      </para>

      <para>
        The <command>lxc-start</command> command will directly run the
        specified command in the container. The PID of the first process
        is 1. If no command is specified, <command>lxc-start</command>
        will run the command defined in lxc.init.cmd or, if not set,
        <filename>/sbin/init</filename>.
      </para>

      <para>
        To summarize, <command>lxc-execute</command> is for running an
        application and <command>lxc-start</command> is better suited for
        running a system.
      </para>

      <para>
        If the application is no longer responding, is inaccessible or is
        not able to finish by itself, the <command>lxc-stop</command>
        command with the -k option will kill all the processes in the
        container without mercy.
        <programlisting>
lxc-stop -n foo -k
        </programlisting>
      </para>
    </refsect2>

    <refsect2>
      <title>Connect to an available tty</title>
      <para>
        If the container is configured with ttys, it is possible to access
        it through them. It is up to the container to provide a set of
        available ttys to be used by the following command. When the tty
        is lost, it is possible to reconnect to it without logging in
        again.
        <programlisting>
lxc-console -n foo -t 3
        </programlisting>
      </para>
    </refsect2>

    <refsect2>
      <title>Freeze / Unfreeze container</title>
      <para>
        Sometimes, it is useful to stop all the processes belonging to a
        container, e.g. for job scheduling. The commands:
        <programlisting>
lxc-freeze -n foo
        </programlisting>

        will put all the processes in an uninterruptible state and

        <programlisting>
lxc-unfreeze -n foo
        </programlisting>

        will resume them.
      </para>

      <para>
        This feature is available if the freezer cgroup v1 controller is
        enabled in the kernel.
      </para>
    </refsect2>

    <refsect2>
      <title>Getting information about container</title>
      <para>
        When there are a lot of containers, it is hard to keep track of
        what has been created or destroyed, what is running, or what the
        PIDs running in a specific container are. For this reason, the
        following commands may be useful:
        <programlisting>
lxc-ls -f
lxc-info -n foo
        </programlisting>
      </para>
      <para>
        <command>lxc-ls</command> lists containers.
      </para>

      <para>
        <command>lxc-info</command> gives information for a specific
        container.
      </para>

      <para>
        Here is an example of how the combination of these commands
        allows one to list all the containers and retrieve their state.
        <programlisting>
for i in $(lxc-ls -1); do
    lxc-info -n $i
done
        </programlisting>
      </para>
    </refsect2>

    <refsect2>
      <title>Monitoring container</title>
      <para>
        It is sometimes useful to track the states of a container, for
        example to monitor it or just to wait for a specific state in a
        script.
      </para>

      <para>
        The <command>lxc-monitor</command> command will monitor one or
        several containers. The parameter of this command accepts a
        regular expression, for example:
        <programlisting>
lxc-monitor -n "foo|bar"
        </programlisting>
        will monitor the states of containers named 'foo' and 'bar', and:
        <programlisting>
lxc-monitor -n ".*"
        </programlisting>
        will monitor all the containers.
      </para>
      <para>
        For a container 'foo' starting, doing some work and exiting, the
        output will be in the form:
        <programlisting>
'foo' changed state to [STARTING]
'foo' changed state to [RUNNING]
'foo' changed state to [STOPPING]
'foo' changed state to [STOPPED]
        </programlisting>
      </para>
      <para>
        The <command>lxc-wait</command> command will wait for a specific
        state change and then exit. This is useful for scripting, to
        synchronize the launch or the termination of a container. The
        parameter is an ORed combination of different states. The
        following example shows how to wait for the end of a container
        started as a daemon.

        <programlisting>
<![CDATA[
# launch lxc-wait in background
lxc-wait -n foo -s STOPPED &
LXC_WAIT_PID=$!

# this command goes in background
lxc-execute -n foo mydaemon &

# block until lxc-wait exits,
# which happens when the container
# is STOPPED
wait $LXC_WAIT_PID
echo "'foo' is finished"
]]>
        </programlisting>
      </para>
    </refsect2>

    <refsect2>
      <title>cgroup settings for containers</title>
      <para>
        The container is tied to the control groups: when a container is
        started, a control group is created and associated with it. The
        control group properties can be read and modified while the
        container is running by using the lxc-cgroup command.
      </para>
      <para>
        The <command>lxc-cgroup</command> command is used to get or set a
        control group subsystem value associated with a container. The
        subsystem name is handled by the user; the command won't do any
        syntax checking on it, and if the subsystem name does not exist,
        the command will fail.
      </para>
      <para>
        <programlisting>
lxc-cgroup -n foo cpuset.cpus
        </programlisting>
        will display the content of this subsystem.
        <programlisting>
lxc-cgroup -n foo cpu.shares 512
        </programlisting>
        will set the subsystem to the specified value.
      </para>
    </refsect2>
  </refsect1>
464 </refsect1>
465
466 &seealso;
467
468 <refsect1>
469 <title>Author</title>
470 <para>Daniel Lezcano <email>daniel.lezcano@free.fr</email></para>
471 <para>Christian Brauner <email>christian.brauner@ubuntu.com</email></para>
472 <para>Serge Hallyn <email>serge@hallyn.com</email></para>
473 <para>Stéphane Graber <email>stgraber@ubuntu.com</email></para>
474 </refsect1>
475
476 </refentry>
477
<!-- Keep this comment at the end of the file Local variables: mode:
sgml sgml-omittag:t sgml-shorttag:t sgml-minimize-attributes:nil
sgml-always-quote-attributes:t sgml-indent-step:2 sgml-indent-data:t
sgml-parent-document:nil sgml-default-dtd-file:nil
sgml-exposed-tags:nil sgml-local-catalogs:nil
sgml-local-ecat-files:nil End: -->