<HTML>
<!--
Copyright (c) Jeremy Siek 2000
Copyright (c) Daniel Trebbien 2010

Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt)
-->
<Head>
<Title>Boost Graph Library: Graph Theory Review</Title>
</Head>
<BODY BGCOLOR="#ffffff" LINK="#0000ee" TEXT="#000000" VLINK="#551a8b"
ALINK="#ff0000">
<IMG SRC="../../../boost.png"
ALT="C++ Boost" width="277" height="86">

<BR Clear>

<H1>Review of Elementary Graph Theory</H1>

<P>
This chapter is meant as a refresher on elementary graph theory. If
the reader has some previous acquaintance with graph algorithms, this
chapter should be enough to get started. If the reader has no
previous background in graph algorithms we suggest a more thorough
introduction such as <a
href="http://toc.lcs.mit.edu/~clr/">Introduction to Algorithms</a>
by Cormen, Leiserson, and Rivest.

<P>

<H1>The Graph Abstraction</H1>

<P>
A graph is a mathematical abstraction that is useful for solving many
kinds of problems. Fundamentally, a graph consists of a set of
vertices, and a set of edges, where an edge is something that connects
two vertices in the graph. More precisely, a <a
name="def:graph"><I>graph</I></a> is a pair <i>(V,E)</i>, where
<i>V</i> is a finite set and <i>E</i> is a binary relation on
<i>V</i>. <i>V</i> is called a <a name="def:vertex-set"><I>vertex
set</I></a> whose elements are called <I>vertices</I>. <i>E</i> is a
collection of edges, where an <a name="def:edge"><I>edge</I></a> is a
pair <i>(u,v)</i> with <i>u,v</i> in <i>V</i>. In a <a
name="def:directed-graph"><I>directed graph</I></a>, edges are ordered
pairs, connecting a <a name="def:source"><I>source</I></a> vertex to a
<a name="def:target"><I>target</I></a> vertex. In an <a
name="def:undirected-graph"><I>undirected graph</I></a> edges are
unordered pairs and connect the two vertices in both directions, hence
in an undirected graph <i>(u,v)</i> and <i>(v,u)</i> are two ways of
writing the same edge.

<P>
This definition of a graph is vague in certain respects; it does not
say what a vertex or edge represents. They could be cities with
connecting roads, or web pages with hyperlinks. These details are left
out of the definition of a graph for an important reason: they are not
a necessary part of the graph <I>abstraction</I>. By leaving out the
details we can construct a theory that is reusable and that can help
us solve many different kinds of problems.

<P>
Back to the definition: a graph is a set of vertices and edges. For
purposes of demonstration, let us consider a graph where we have
labeled the vertices with letters, and we write an edge simply as a
pair of letters. Now we can write down an example of a directed graph
as follows:

<P>
<BR>
<DIV ALIGN="center">
<table><tr><td><tt>
V = {v, b, x, z, a, y } <br>
E = { (b,y), (b,y), (y,v), (z,a), (x,x), (b,x), (x,v), (a,z) } <br>
G = (V, E)
</tt></td></tr></table>
</DIV>
<BR CLEAR="ALL"><P></P>

<P>
<A HREF="#fig:directed-graph">Figure 1</A> gives a pictorial view of
this graph. The edge <i>(x,x)</i> is called a <a
name="def:self-loop"><I>self-loop</I></a>. Edges <i>(b,y)</i> and
<i>(b,y)</i> are <I>parallel edges</I>, which are allowed in a <a
name="def:multigraph"><I>multigraph</I></a> (but are normally not
allowed in a directed or undirected graph).

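<p>
These properties can be checked mechanically from the edge list. The
following is a minimal plain-C++ sketch (illustrative helper names, not
BGL code) that detects the self-loop and the parallel edge in the
example edge set <i>E</i> above:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

using Edge = std::pair<char, char>;

// A self-loop is an edge whose source and target coincide.
int count_self_loops(const std::vector<Edge>& edges) {
    int n = 0;
    for (const Edge& e : edges)
        if (e.first == e.second) ++n;
    return n;
}

// Parallel edges are two occurrences of the same ordered pair.
bool has_parallel_edges(std::vector<Edge> edges) {
    std::sort(edges.begin(), edges.end());
    return std::adjacent_find(edges.begin(), edges.end()) != edges.end();
}

// The edge set E of the example directed graph.
const std::vector<Edge> example_edges = {
    {'b','y'}, {'b','y'}, {'y','v'}, {'z','a'},
    {'x','x'}, {'b','x'}, {'x','v'}, {'a','z'}};
```

On this edge set, <tt>count_self_loops</tt> finds the one self-loop
<i>(x,x)</i> and <tt>has_parallel_edges</tt> reports the duplicated
<i>(b,y)</i>.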
<P>

<P></P>
<DIV ALIGN="center"><A NAME="fig:directed-graph"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 1:</STRONG>
Example of a directed graph.</CAPTION>
<TR><TD><IMG SRC="./figs/digraph.gif" width="124" height="163"></TD></TR>
</TABLE>
</DIV><P></P>

<P>
Next we have a similar graph, though this time it is undirected. <A
HREF="#fig:undirected-graph">Figure 2</A> gives the pictorial view.
Self-loops are not allowed in undirected graphs. This graph is the <a
name="def:undirected-version"><I>undirected version</i></a> of the
previous graph (minus the parallel edge <i>(b,y)</i>), meaning it has
the same vertices and the same edges with their directions removed.
Also the self-loop has been removed, and edges such as <i>(a,z)</i>
and <i>(z,a)</i> are collapsed into one edge. One can go the other
way, and make a <a name="def:directed-version"><I>directed version</i></a>
of an undirected graph by replacing each edge with two edges, one
pointing in each direction.

<P>
<BR>
<DIV ALIGN="CENTER">
<table><tr><td><tt>
V = {v, b, x, z, a, y }<br>
E = { (b,y), (y,v), (z,a), (b,x), (x,v) }<br>
G = (V, E)
</tt></td></tr></table>
</DIV>
<BR CLEAR="ALL"><P></P>

<P>

<P></P>
<DIV ALIGN="CENTER"><A NAME="fig:undirected-graph"></A><A NAME="1524"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 2:</STRONG>
Example of an undirected graph.</CAPTION>
<TR><TD><IMG SRC="./figs/undigraph.gif" width="103" height="163"></TD></TR>
</TABLE>
</DIV><P></P>

<P>
Now for some more graph terminology. If some edge <i>(u,v)</i> is in a
graph, then vertex <i>v</i> is <a
name="def:adjacent"><I>adjacent</I></a> to vertex <i>u</i>. In a
directed graph, edge <i>(u,v)</i> is an <a
name="def:out-edge"><I>out-edge</I></a> of vertex <i>u</i> and an <a
name="def:in-edge"><I>in-edge</I></a> of vertex <i>v</i>. In an
undirected graph edge <i>(u,v)</i> is <a
name="def:incident-on"><I>incident on</I></a> vertices <i>u</i> and
<i>v</i>.

<P>
In <A HREF="#fig:directed-graph">Figure 1</A>,
vertex <i>y</i> is adjacent to vertex <i>b</i> (but <i>b</i> is not
adjacent to <i>y</i>). The edge <i>(b,y)</i> is an out-edge of
<i>b</i> and an in-edge of <i>y</i>. In <A
HREF="#fig:undirected-graph">Figure 2</A>,
<i>y</i> is adjacent to <i>b</i> and vice versa. The edge
<i>(y,b)</i> is incident on vertices <i>y</i> and <i>b</i>.

<P>
In a directed graph, the number of out-edges of a vertex is its <a
name="def:out-degree"><I>out-degree</I></a> and the number of in-edges
is its <a name="def:in-degree"><I>in-degree</I></a>. For an
undirected graph, the number of edges incident to a vertex is its <a
name="def:degree"><I>degree</I></a>. In <A
HREF="#fig:directed-graph">Figure 1</A>, vertex <i>b</i> has an
out-degree of 3 and an in-degree of zero. In <A
HREF="#fig:undirected-graph">Figure 2</A>, vertex <i>b</i> simply has
a degree of 2.

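<p>
In code, the in- and out-degree definitions reduce to counting sources
and targets in the edge list. A small plain-C++ sketch (not the BGL
interface, which provides <tt>out_degree()</tt> and <tt>in_degree()</tt>
functions of its own):

```cpp
#include <utility>
#include <vector>

using Edge = std::pair<char, char>;

// Out-degree of u: number of edges with source u.
int out_degree(char u, const std::vector<Edge>& edges) {
    int n = 0;
    for (const Edge& e : edges)
        if (e.first == u) ++n;
    return n;
}

// In-degree of v: number of edges with target v.
int in_degree(char v, const std::vector<Edge>& edges) {
    int n = 0;
    for (const Edge& e : edges)
        if (e.second == v) ++n;
    return n;
}

// The edge set of Figure 1 (including the parallel edge (b,y)).
const std::vector<Edge> fig1_edges = {
    {'b','y'}, {'b','y'}, {'y','v'}, {'z','a'},
    {'x','x'}, {'b','x'}, {'x','v'}, {'a','z'}};
```

As stated above, vertex <i>b</i> comes out with an out-degree of 3 and
an in-degree of zero.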
<P>
Now a <a name="def:path"><i>path</i></a> is a sequence of edges in a
graph such that the target vertex of each edge is the source vertex of
the next edge in the sequence. If there is a path starting at vertex
<i>u</i> and ending at vertex <i>v</i> we say that <i>v</i> is <a
name="def:reachable"><i>reachable</i></a> from <i>u</i>. A path is <a
name="def:simple-path"><I>simple</I></a> if none of the vertices in
the sequence are repeated. The path &lt;(b,x), (x,v)&gt; is simple,
while the path &lt;(a,z), (z,a)&gt; is not. Also, the path &lt;(a,z),
(z,a)&gt; is called a <a name="def:cycle"><I>cycle</I></a> because the
first and last vertex in the path are the same. A graph with no cycles
is <a name="def:acyclic"><I>acyclic</I></a>.

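<p>
These path definitions translate directly into code. A minimal
plain-C++ sketch (function names are illustrative, not BGL):

```cpp
#include <set>
#include <utility>
#include <vector>

using Edge = std::pair<char, char>;

// A path: each edge's target is the next edge's source.
bool is_path(const std::vector<Edge>& p) {
    if (p.empty()) return false;
    for (std::size_t i = 0; i + 1 < p.size(); ++i)
        if (p[i].second != p[i + 1].first) return false;
    return true;
}

// A cycle: a path whose first and last vertex are the same.
bool is_cycle(const std::vector<Edge>& p) {
    return is_path(p) && p.front().first == p.back().second;
}

// Simple: no vertex in the sequence is repeated.
bool is_simple(const std::vector<Edge>& p) {
    if (!is_path(p)) return false;
    std::set<char> seen = {p.front().first};
    for (const Edge& e : p)
        if (!seen.insert(e.second).second) return false;
    return true;
}
```

With these, &lt;(b,x), (x,v)&gt; tests as simple while &lt;(a,z),
(z,a)&gt; tests as a (non-simple) cycle, matching the text.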
<P>
A <a name="def:planar-graph"><I>planar graph</I></a> is a graph that
can be drawn in the plane without any of the edges crossing each
other. Such a drawing is called a <I>plane graph</I>. A <a
name="def:face"><I>face</I></a> of a plane graph is a connected region
of the plane surrounded by edges. An important property of connected
planar graphs is that the numbers of faces, edges, and vertices are
related through Euler's formula: <i>|F| - |E| + |V| = 2</i>. One
consequence is that a simple planar graph has at most <i>O(|V|)</i>
edges.

<P>

<H1>Graph Data Structures</H1>

<P>
The primary property of a graph to consider when deciding which data
structure to use is <I>sparsity</I>, the number of edges relative to
the number of vertices in the graph. A graph where <i>E</i> is close
to <i>V<sup>2</sup></i> is a <I>dense</I> graph, whereas a graph
where <i>E = alpha V</i> and <i>alpha</i> is much smaller than
<i>V</i> is a <I>sparse</I> graph. For dense graphs, the
<I>adjacency-matrix representation</I> is usually the best choice,
whereas for sparse graphs the <I>adjacency-list representation</I> is
a better choice. Also the <I>edge-list representation</I> is a
space-efficient choice for sparse graphs that is appropriate in some
situations.

<P>

<H2>Adjacency Matrix Representation</H2>

<P>
An adjacency-matrix representation of a graph is a 2-dimensional <i>V
x V</i> array. Each element in the array <i>a<sub>uv</sub></i> stores
a Boolean value saying whether the edge <i>(u,v)</i> is in the graph.
<A HREF="#fig:adj-matrix">Figure 3</A> depicts an adjacency matrix for
the graph in <A HREF="#fig:directed-graph">Figure 1</A> (minus the
parallel edge <i>(b,y)</i>). The amount of space required to store
an adjacency-matrix is <i>O(V<sup>2</sup>)</i>. Any edge can be
accessed, added, or removed in <i>O(1)</i> time. To add or remove a
vertex requires reallocating and copying the whole graph, an
<i>O(V<sup>2</sup>)</i> operation. The <a
href="./adjacency_matrix.html"><tt>adjacency_matrix</tt></a> class
implements the BGL graph interface in terms of the adjacency-matrix
data structure.

<P>

<P></P>
<DIV ALIGN="CENTER"><A NAME="fig:adj-matrix"></A><A NAME="1701"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 3:</STRONG>
The Adjacency Matrix Graph Representation.</CAPTION>
<TR><TD><IMG SRC="./figs/adj_matrix.gif" width="136" height="135"></TD></TR>
</TABLE>
</DIV><P></P>

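<p>
A minimal plain-C++ sketch of the adjacency-matrix idea (a hand-rolled
illustration, not the <tt>adjacency_matrix</tt> class itself), showing
the <i>O(1)</i> edge operations:

```cpp
#include <vector>

// A V x V Boolean array: entry (u,v) says whether edge (u,v) exists.
class AdjMatrix {
public:
    explicit AdjMatrix(int num_vertices)
        : m_(num_vertices, std::vector<bool>(num_vertices, false)) {}
    void add_edge(int u, int v)       { m_[u][v] = true;  }  // O(1)
    void remove_edge(int u, int v)    { m_[u][v] = false; }  // O(1)
    bool has_edge(int u, int v) const { return m_[u][v];  }  // O(1)
private:
    std::vector<std::vector<bool>> m_;  // O(V^2) space
};
```
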
<P>

<H2><A NAME="sec:adjacency-list-representation"></A>
Adjacency List Representation
</H2>

<P>
An adjacency-list representation of a graph stores an out-edge
sequence for each vertex. For sparse graphs this saves space since
only <i>O(V + E)</i> memory is required. In addition, the out-edges
for each vertex can be accessed more efficiently. Edge insertion is
<i>O(1)</i>, though accessing any given edge is <i>O(alpha)</i>, where
<i>alpha</i> is the sparsity factor of the graph (which is equal to
the maximum number of out-edges for any vertex in the graph). <A
HREF="#fig:adj-list">Figure 4</A> depicts an
adjacency-list representation of the graph in <A
HREF="#fig:directed-graph">Figure 1</A>. The
<a href="./adjacency_list.html"><TT>adjacency_list</TT></a> class is
an implementation of the adjacency-list representation.

<P>

<P></P>
<DIV ALIGN="CENTER"><A NAME="fig:adj-list"></A><A NAME="1713"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 4:</STRONG>
The Adjacency List Graph Representation.</CAPTION>
<TR><TD><IMG SRC="./figs/adj_list.gif" width="108" height="122"></TD></TR>
</TABLE>
</DIV><P></P>

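<p>
The corresponding plain-C++ sketch of an adjacency list (illustrative
only; the BGL class offers a much richer interface):

```cpp
#include <vector>

// One out-edge sequence per vertex: O(V + E) space in total.
using AdjList = std::vector<std::vector<int>>;

// Edge insertion: O(1) append to the source vertex's sequence.
void add_edge(AdjList& g, int u, int v) { g[u].push_back(v); }

// The out-edges of u are just g[u]; its out-degree is the length.
std::size_t out_degree(const AdjList& g, int u) { return g[u].size(); }
```
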
<P>

<H2>Edge List Representation</H2>

<P>
An edge-list representation of a graph is simply a sequence of edges,
where each edge is represented as a pair of vertex IDs. The memory
required is only <i>O(E)</i>. Edge insertion is typically <i>O(1)</i>,
though accessing a particular edge is <i>O(E)</i> (not efficient). <A
HREF="#fig:edge-list">Figure 5</A> shows an
edge-list representation of the graph in <A
HREF="#fig:directed-graph">Figure 1</A>. The
<a href="./edge_list.html"><TT>edge_list</TT></a> adaptor class can be
used to create implementations of the edge-list representation.

<P>

<P></P>
<DIV ALIGN="CENTER"><A NAME="fig:edge-list"></A><A NAME="1724"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 5:</STRONG>
The Edge List Graph Representation.</CAPTION>
<TR><TD><IMG SRC="./figs/edge_list.gif" width="322" height="22"></TD></TR>
</TABLE>
</DIV><P></P>

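<p>
An edge list is the simplest of the three; a plain-C++ sketch
(illustrative, not the <tt>edge_list</tt> adaptor):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Just a sequence of (source, target) vertex IDs: O(E) space.
using EdgeList = std::vector<std::pair<int, int>>;

// Finding a particular edge requires a linear scan: O(E).
bool has_edge(const EdgeList& g, int u, int v) {
    return std::find(g.begin(), g.end(), std::make_pair(u, v)) != g.end();
}
```
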

<P>

<H1>Graph Algorithms</H1>

<P>

<H2>Graph Search Algorithms</H2>

<P>
<I>Tree edges</I> are edges in the search tree (or forest) constructed
(implicitly or explicitly) by running a graph search algorithm over a
graph. An edge <i>(u,v)</i> is a tree edge if <i>v</i> was first
discovered while exploring (corresponding to the <a
href="./visitor_concepts.html">visitor</a> <TT>explore()</TT> method)
edge <i>(u,v)</i>. <I>Back edges</I> connect vertices to their
ancestors in a search tree. So for edge <i>(u,v)</i> the vertex
<i>v</i> must be an ancestor of vertex <i>u</i>. Self-loops are
considered to be back edges. <I>Forward edges</I> are non-tree edges
<i>(u,v)</i> that connect a vertex <i>u</i> to a descendant <i>v</i>
in a search tree. <I>Cross edges</I> are edges that do not fall into
the above three categories.

<P>

<H2><A NAME="sec:bfs-algorithm"></A>
Breadth-First Search
</H2>

<P>
Breadth-first search is a traversal through a graph that touches all
of the vertices reachable from a particular source vertex. In
addition, the order of the traversal is such that the algorithm will
explore all of the neighbors of a vertex before proceeding on to the
neighbors of its neighbors. One way to think of breadth-first search
is that it expands like a wave emanating from a stone dropped into a
pool of water. Vertices in the same &quot;wave&quot; are the same distance
from the source vertex. A vertex is <I>discovered</I> the first time
it is encountered by the algorithm. A vertex is <I>finished</I> after
all of its neighbors are explored. Here is an example to help make this
clear. A graph is shown in <A
HREF="#fig:bfs-example">Figure 6</A> and the
BFS discovery and finish order for the vertices is shown below.

<P>

<P></P>
<DIV ALIGN="CENTER"><A NAME="fig:bfs-example"></A><A NAME="1826"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 6:</STRONG>
Breadth-first search spreading through a graph.</CAPTION>
<TR><TD><IMG SRC="./figs/bfs_example.gif" width="242" height="143"></TD></TR>
</TABLE>
</DIV><P></P>

<P>
<PRE>
order of discovery: s r w v t x u y
order of finish: s r w v t x u y
</PRE>

<P>
We start at vertex <i>s</i>, and first visit <i>r</i> and <i>w</i> (the two
neighbors of <i>s</i>). Once both neighbors of <i>s</i> are visited, we
visit the neighbor of <i>r</i> (vertex <i>v</i>), then the neighbors of
<i>w</i>, which are <i>t</i> and <i>x</i> (the discovery order between
<i>r</i> and <i>w</i> does not matter). Finally we visit the neighbors
of <i>t</i> and <i>x</i>, which are <i>u</i> and <i>y</i>.

<P>
For the algorithm to keep track of where it is in the graph, and which
vertex to visit next, BFS needs to color the vertices (see the section
on <a href="./using_property_maps.html">Property Maps</a>
for more details about attaching properties to graphs). The place to
put the color can either be inside the graph, or it can be passed into
the algorithm as an argument.

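<p>
The wave-like expansion falls out of using a FIFO queue together with
the vertex colors. A compact plain-C++ sketch (two colors only; the
BGL algorithm, <TT>breadth_first_search()</TT>, uses a visitor and a
full three-color scheme):

```cpp
#include <queue>
#include <vector>

// BFS from source s over an adjacency list; returns discovery order.
std::vector<int> bfs_order(const std::vector<std::vector<int>>& g, int s) {
    std::vector<bool> discovered(g.size(), false);
    std::vector<int> order;
    std::queue<int> q;
    discovered[s] = true;
    q.push(s);
    while (!q.empty()) {
        int u = q.front();
        q.pop();
        order.push_back(u);            // examine u
        for (int v : g[u])
            if (!discovered[v]) {      // first encounter: discover v
                discovered[v] = true;
                q.push(v);
            }
    }
    return order;
}
```

Because the queue is first-in first-out, all vertices at distance
<i>d</i> from the source are discovered before any vertex at distance
<i>d+1</i>.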
<P>

<H2><A NAME="sec:dfs-algorithm"></A>
Depth-First Search
</H2>

<P>
A depth-first search (DFS) visits all the vertices in a graph. When
choosing which edge to explore next, this algorithm always chooses to
go &quot;deeper&quot; into the graph. That is, it will pick the next adjacent
unvisited vertex until reaching a vertex that has no unvisited
adjacent vertices. The algorithm will then backtrack to the previous
vertex and continue along any as-yet unexplored edges from that
vertex. After DFS has visited all the reachable vertices from a
particular source vertex, it chooses one of the remaining undiscovered
vertices and continues the search. This process creates a set of
<I>depth-first trees</I> which together form the <I>depth-first
forest</I>. A depth-first search categorizes the edges in the graph
into three categories: tree-edges, back-edges, and forward or
cross-edges (it does not specify which). There are typically many
valid depth-first forests for a given graph, and therefore many
different (and equally valid) ways to categorize the edges.

<P>
One interesting property of depth-first search is that the discover
and finish times for each vertex form a parenthesis structure. If we
use an open parenthesis when a vertex is discovered, and a
close parenthesis when a vertex is finished, then the result is a
properly nested set of parentheses. <A
HREF="#fig:dfs-example">Figure 7</A> shows
DFS applied to an undirected graph, with the edges labeled in the
order they were explored. Below we list the vertices of the graph
ordered by discover and finish time, as well as show the parenthesis
structure. DFS is used as the kernel for several other graph
algorithms, including topological sort and two of the connected
component algorithms. It can also be used to detect cycles (see the <A
HREF="file_dependency_example.html#sec:cycles">Cyclic Dependencies</a>
section of the File Dependency Example).

<P>

<P></P>
<DIV ALIGN="CENTER"><A NAME="fig:dfs-example"></A><A NAME="1841"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 7:</STRONG>
Depth-first search on an undirected graph.</CAPTION>
<TR><TD><IMG SRC="./figs/dfs.gif" width="141" height="204"></TD></TR>
</TABLE>
</DIV><P></P>

<P>
<PRE>
order of discovery: a b e d c f g h i
order of finish: d f c e b a i h g
parenthesis: (a (b (e (d d) (c (f f) c) e) b) a) (g (h (i i) h) g)
</PRE>

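<p>
The parenthesis structure can be generated directly by a recursive
DFS that emits an open parenthesis on discovery and a close
parenthesis on finish. A plain-C++ sketch (the BGL version,
<TT>depth_first_search()</TT>, reports these moments through visitor
events instead):

```cpp
#include <string>
#include <vector>

using Graph = std::vector<std::vector<int>>;  // adjacency list

void dfs_visit(const Graph& g, int u, std::vector<bool>& visited,
               std::string& out) {
    visited[u] = true;
    out += '(';                                  // discover u
    for (int v : g[u])
        if (!visited[v]) dfs_visit(g, v, visited, out);
    out += ')';                                  // finish u
}

// Restarting from each undiscovered vertex yields the depth-first
// forest; each tree contributes one balanced group of parentheses.
std::string dfs_parens(const Graph& g) {
    std::vector<bool> visited(g.size(), false);
    std::string out;
    for (int u = 0; u < static_cast<int>(g.size()); ++u)
        if (!visited[u]) dfs_visit(g, u, visited, out);
    return out;
}
```
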
<P>

<H2><a name="sec:minimum-spanning-tree">Minimum Spanning Tree Problem</a></H2>

<P>
The <I>minimum-spanning-tree problem</I> is defined as follows: find
an acyclic subset <i>T</i> of <i>E</i> that connects all of the vertices
in the graph and whose total weight is minimized, where the
total weight is given by
<P></P>
<DIV ALIGN="left">
<i>w(T)</i> = sum of <i>w(u,v)</i> over all <i>(u,v)</i> in <i>T</i>,
where <i>w(u,v)</i> is the weight on the edge <i>(u,v)</i>
</DIV>
<BR CLEAR="ALL">
<P></P>
Such a <i>T</i> is called a <I>minimum spanning tree</I>.

<!--
<P>
Kruskal's algorithm&nbsp;[<A
HREF="bibliography.html#kruskal56">18</A>]
for solving the minimum spanning tree problem. This is a greedy
algorithm to calculate the minimum spanning tree for an undirected
graph with weighted edges.

<P>
This is Prim's algorithm&nbsp;[<A
HREF="bibliography.html#prim57:_short">25</A>] for solving the
minimum spanning tree problem for an undirected graph with weighted
edges. The implementation is simply a call to <a
href="./dijkstra_shortest_paths.html"><TT>dijkstra_shortest_paths()</TT></a>
with the appropriate choice of comparison and combine functors.
-->

<P>

<H2><a name="sec:shortest-paths-algorithms">Shortest-Paths Algorithms</a></H2>

<P>
One of the classic problems in graph theory is to find the shortest
path between two vertices in a graph. Formally, a <I>path</I> is a
sequence of vertices <i>&lt;v<sub>0</sub>,v<sub>1</sub>,...,v<sub>k</sub>&gt;</i>
in a graph <i>G = (V, E)</i> such that each vertex is connected to the
next vertex in the sequence (the edges
<i>(v<sub>i</sub>,v<sub>i+1</sub>)</i> for <i>i=0,1,...,k-1</i> are in
the edge set <i>E</i>). In the shortest-path problem, each edge is
given a real-valued weight. We can therefore talk about the <I>weight
of a path</I><BR>

<p></p>
<DIV ALIGN="left">
<i>w(p) = sum from i=1..k of w(v<sub>i-1</sub>,v<sub>i</sub>)</i>
</DIV>
<BR CLEAR="ALL"><P></P>

The <I>shortest path weight</I> from vertex <i>u</i> to <i>v</i> is then

<BR>
<p></p>
<DIV ALIGN="left">
<i>delta (u,v) = min { w(p) : u --> v }</i> if there is a path from
<i>u</i> to <i>v</i> <br>
<i>delta (u,v) = infinity</i> otherwise.
</DIV>
<BR CLEAR="ALL"><P></P>

A <I>shortest path</I> is any path whose weight is equal to the
<I>shortest path weight</I>.

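<p>
The path-weight definition is a one-line sum. A plain-C++ sketch, with
the weight function stored as a map from edges to weights (an
illustrative choice; BGL uses property maps for this):

```cpp
#include <map>
#include <utility>
#include <vector>

using WeightMap = std::map<std::pair<int, int>, double>;

// w(p) = sum over i = 1..k of w(v_{i-1}, v_i).
double path_weight(const std::vector<int>& p, const WeightMap& w) {
    double total = 0;
    for (std::size_t i = 1; i < p.size(); ++i)
        total += w.at({p[i - 1], p[i]});
    return total;
}
```
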
<P>
There are several variants of the shortest path problem. Above we
defined the <I>single-pair</I> problem, but there is also the
<I>single-source problem</I> (all shortest paths from one vertex to
every other vertex in the graph), the equivalent
<I>single-destination problem</I>, and the <I>all-pairs problem</I>.
It turns out that there are no algorithms for solving the single-pair
problem that are asymptotically faster than algorithms that solve the
single-source problem.

<P>
A <I>shortest-paths tree</I> rooted at vertex <i>s</i> in graph
<i>G=(V,E)</i> is a directed subgraph <i>G'=(V',E')</i> where
<i>V'</i> is a subset of <i>V</i> and <i>E'</i> is a subset of
<i>E</i>, <i>V'</i> is the set of vertices reachable from <i>s</i>,
<i>G'</i> forms a rooted tree with root <i>s</i>, and for all
<i>v</i> in <i>V'</i> the unique simple path from <i>s</i> to
<i>v</i> in <i>G'</i> is a shortest path from <i>s</i> to <i>v</i>
in <i>G</i>. The result of a single-source algorithm is a
shortest-paths tree.

<P>

<H2><a name="sec:network-flow-algorithms">Network Flow Algorithms</a></H2>

<p>
A flow network is a directed graph <i>G=(V,E)</i> with a
<b><i>source</i></b> vertex <i>s</i> and a <b><i>sink</i></b> vertex
<i>t</i>. Each edge has a positive real-valued <b><i>capacity</i></b>
function <i>c</i> and there is a <b><i>flow</i></b> function <i>f</i>
defined over every vertex pair. The flow function must satisfy three
constraints:

<p>
<i>f(u,v) &lt;= c(u,v) for all (u,v) in V x V</i> (Capacity constraint) <br>
<i>f(u,v) = - f(v,u) for all (u,v) in V x V</i> (Skew symmetry)<br>
<i>sum<sub>v in V</sub> f(u,v) = 0 for all u in V - {s,t}</i> (Flow conservation)

<p>
The <b><i>flow</i></b> of the network is the net flow entering the
sink vertex <i>t</i> (which is equal to the net flow leaving the
source vertex <i>s</i>).

<p>
<i>|f| = sum<sub>u in V</sub> f(u,t) = sum<sub>v in V</sub> f(s,v)</i>

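<p>
The three constraints are easy to verify mechanically. A plain-C++
sketch that checks a candidate flow against a capacity function, both
given as <i>V x V</i> matrices (an illustration, not a BGL interface):

```cpp
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// True iff f satisfies the capacity, skew-symmetry, and flow
// conservation constraints for source s and sink t.
bool is_valid_flow(const Matrix& c, const Matrix& f, int s, int t) {
    const int n = static_cast<int>(c.size());
    for (int u = 0; u < n; ++u)
        for (int v = 0; v < n; ++v) {
            if (f[u][v] > c[u][v]) return false;    // capacity constraint
            if (f[u][v] != -f[v][u]) return false;  // skew symmetry
        }
    for (int u = 0; u < n; ++u) {                   // flow conservation
        if (u == s || u == t) continue;
        double net = 0;
        for (int v = 0; v < n; ++v) net += f[u][v];
        if (net != 0) return false;
    }
    return true;
}
```
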
<p>
The <b><i>residual capacity</i></b> of an edge is <i>r(u,v) = c(u,v) -
f(u,v)</i>. The edges with <i>r(u,v) &gt; 0</i> are residual edges
<i>E<sub>f</sub></i> which induce the residual graph <i>G<sub>f</sub>
= (V, E<sub>f</sub>)</i>. An edge with <i>r(u,v) = 0</i> is
<b><i>saturated</i></b>.

<p>
The <b><i>maximum flow problem</i></b> is to determine the maximum
possible value for <i>|f|</i> and the corresponding flow values for
every vertex pair in the graph.

<p>
The <b><i>minimum cost maximum flow problem</i></b> is to determine
the maximum flow which minimizes <i>sum<sub>(u,v) in E</sub>
cost(u,v) * f(u,v)</i>.

<p>
A flow network is shown in <a href="#fig:max-flow">Figure
8</a>. Vertex <i>A</i> is the source vertex and <i>H</i> is the sink
vertex.

<P></P>
<DIV ALIGN="center"><A NAME="fig:max-flow"></A>
<TABLE>
<CAPTION ALIGN="BOTTOM"><STRONG>Figure 8:</STRONG> A Maximum Flow
Network.<br> Edges are labeled with the flow and capacity
values.</CAPTION>
<TR><TD><IMG SRC="./figs/max-flow.gif" width="578" height="240"></TD></TR>
</TABLE>
</DIV><P></P>

<p>
There is a long history of algorithms for solving the maximum flow
problem, with the first algorithm due to <a
href="./bibliography.html#ford56:_maxim">Ford and Fulkerson</a>. The
best general-purpose algorithm to date is the push-relabel algorithm
of <a
href="./bibliography.html#goldberg85:_new_max_flow_algor">Goldberg</a>,
which is based on the notion of a <b><i>preflow</i></b> introduced by
<a href="./bibliography.html#karzanov74:_deter">Karzanov</a>.


<h2><a name="sec:min-cut-algorithms">Minimum Cut Algorithms</a></h2>

<h3>Undirected Graphs</h3>
<p>Given an undirected graph <i>G</i> = (<i>V</i>, <i>E</i>), a <em>cut</em> of <i>G</i> is a partition of the vertices into two non-empty sets <i>X</i> and <img src="stoer_wagner_imgs/6e4.gif" alt="\overline{X} = V - X" style="vertical-align: middle; padding-bottom: 2px">. The <i>weight</i> of a cut is defined as the number of edges between sets <i>X</i> and <img src="stoer_wagner_imgs/f79.gif" alt="\overline{X}" style="vertical-align: middle; padding-bottom: 3px"> if <i>G</i> is unweighted, or the sum of the weights of all edges between sets <i>X</i> and <img src="stoer_wagner_imgs/f79.gif" alt="\overline{X}" style="vertical-align: middle; padding-bottom: 3px"> if <i>G</i> is weighted (each edge has an associated, non-negative weight).

<p>When the weight of a cut <img src="stoer_wagner_imgs/8b7.gif" alt="C = \{X, \overline{X}\}" style="vertical-align: middle"> is minimal for a graph <i>G</i> (that is, no other cut of <i>G</i> has a lesser weight), then the cut is known as a <em>minimum cut</em> or a <em>min-cut</em>. For example, given this weighted graph:

<p><a href="stoer_wagner_imgs/stoer_wagner-example.dot"><img src="stoer_wagner_imgs/stoer_wagner-example.gif"></a>

<p>The cut {{0, 6}, {3, 2, 7, 1, 5, 4}} has weight 13:

<p><a href="stoer_wagner_imgs/stoer_wagner-example-c1.dot"><img src="stoer_wagner_imgs/stoer_wagner-example-c1.gif"></a>

<p>And the min-cut is {{0, 1, 4, 5}, {2, 3, 6, 7}} (weight 4):

<p><a href="stoer_wagner_imgs/stoer_wagner-example-min-cut.dot"><img src="stoer_wagner_imgs/stoer_wagner-example-min-cut.gif"></a>

<p>Unlike this example, a graph will sometimes have multiple min-cuts, all of equal weight. A minimum cut algorithm determines one of them as well as the min-cut weight.

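<p>
Computing the weight of a given cut is straightforward; finding the
minimum over all cuts is the hard part (solved in BGL by
<tt>stoer_wagner_min_cut()</tt>). A plain-C++ sketch of the cut-weight
definition:

```cpp
#include <set>
#include <tuple>
#include <vector>

// Weighted undirected edges: (u, v, weight).
using WEdge = std::tuple<int, int, int>;

// Weight of the cut {X, V - X}: total weight of edges with exactly
// one endpoint inside X.
int cut_weight(const std::vector<WEdge>& edges, const std::set<int>& x) {
    int total = 0;
    for (const auto& [u, v, w] : edges)
        if (x.count(u) != x.count(v))  // endpoints on opposite sides
            total += w;
    return total;
}
```
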
<h3>Directed Graphs</h3>

<p>Given a directed graph <i>G</i> = (<i>V</i>, <i>E</i>), a <em>cut</em> of <i>G</i> is a partition of the vertices into two non-empty sets <i>S</i> and <i>T</i>, where <i>S</i> is known as the set of <em>source vertices</em> and <i>T</i> is known as the set of <em>sink vertices</em>. The <em>capacity</em> of a cut <i>C</i> = (<i>S</i>, <i>T</i>) is the number of edges from a vertex in <i>S</i> to a vertex in <i>T</i> if <i>G</i> is unweighted, or the sum of the weights of edges from a vertex in <i>S</i> to a vertex in <i>T</i> if <i>G</i> is weighted.

<p>When the capacity of a cut <i>C</i> = (<i>S</i>, <i>T</i>) of a directed graph is minimal (that is, no other cut of <i>G</i> has a lesser capacity), then <i>C</i> is known as a <em>minimum cut</em> or <em>min-cut</em>.

<p>For example, given this directed graph:

<p><a href="stoer_wagner_imgs/digraph1.dot"><img src="stoer_wagner_imgs/digraph1.gif"></a>

<p>A min-cut is:

<p><a href="stoer_wagner_imgs/digraph1-min-cut.dot"><img src="stoer_wagner_imgs/digraph1-min-cut.gif"></a>

<p>where <i>S</i> = {0}, <i>T</i> = {1, 2, 3}, and the min-cut capacity is 1.

<br>
<HR>
<TABLE>
<TR valign=top>
<TD nowrap>Copyright &copy; 2000-2001</TD><TD>
<A HREF="http://www.boost.org/people/jeremy_siek.htm">Jeremy Siek</A>, Indiana University (<A HREF="mailto:jsiek@osl.iu.edu">jsiek@osl.iu.edu</A>)
</TD></TR></TABLE>

</BODY>
</HTML>