1<HTML>
2<!--
3 Copyright (c) Jeremy Siek 2000
4 Copyright (c) Daniel Trebbien 2010
5
6 Distributed under the Boost Software License, Version 1.0.
7 (See accompanying file LICENSE_1_0.txt or copy at
8 http://www.boost.org/LICENSE_1_0.txt)
9 -->
<Head>
<Title>Boost Graph Library: Graph Theory Review</Title>
</Head>
<BODY BGCOLOR="#ffffff" LINK="#0000ee" TEXT="#000000" VLINK="#551a8b"
 ALINK="#ff0000">
14<IMG SRC="../../../boost.png"
15 ALT="C++ Boost" width="277" height="86">
16
17<BR Clear>
18
19<H1>Review of Elementary Graph Theory</H1>
20
21<P>
22
23<P>
This chapter is meant as a refresher on elementary graph theory. If
the reader has some previous acquaintance with graph algorithms, this
chapter should be enough to get started. If the reader has no
previous background in graph algorithms, we suggest a more thorough
introduction such as <a
href="http://toc.lcs.mit.edu/~clr/">Introduction to Algorithms</a>
by Cormen, Leiserson, and Rivest.
31
32<P>
33
34<H1>The Graph Abstraction</H1>
35
36<P>
37A graph is a mathematical abstraction that is useful for solving many
38kinds of problems. Fundamentally, a graph consists of a set of
39vertices, and a set of edges, where an edge is something that connects
40two vertices in the graph. More precisely, a <a
41name="def:graph"><I>graph</I></a> is a pair <i>(V,E)</i>, where
42<i>V</i> is a finite set and <i>E</i> is a binary relation on
43<i>V</i>. <i>V</i> is called a <a name="def:vertex-set"><I>vertex
44set</I></a> whose elements are called <I>vertices</I>. <i>E</i> is a
45collection of edges, where an <a name="def:edge"><I>edge</I></a> is a
46pair <i>(u,v)</i> with <i>u,v</i> in <i>V</i>. In a <a
47name="def:directed-graph"><I>directed graph</I></a>, edges are ordered
48pairs, connecting a <a name="def:source"><I>source</I></a> vertex to a
49<a name="def:target"><I>target</I></a> vertex. In an <a
50name="def:undirected-graph"><I>undirected graph</I></a> edges are
51unordered pairs and connect the two vertices in both directions, hence
52in an undirected graph <i>(u,v)</i> and <i>(v,u)</i> are two ways of
53writing the same edge.
54
55<P>
This definition of a graph is vague in certain respects; it does not
say what a vertex or edge represents. They could be cities with
connecting roads, or web pages with hyperlinks. These details are left
out of the definition of a graph for an important reason: they are not
a necessary part of the graph <I>abstraction</I>. By leaving out the
details we can construct a theory that is reusable and that can help
us solve many different kinds of problems.
63
64<P>
65Back to the definition: a graph is a set of vertices and edges. For
66purposes of demonstration, let us consider a graph where we have
67labeled the vertices with letters, and we write an edge simply as a
68pair of letters. Now we can write down an example of a directed graph
69as follows:
70
71<P>
72<BR>
73<DIV ALIGN="center">
74<table><tr><td><tt>
75V = {v, b, x, z, a, y } <br>
76E = { (b,y), (b,y), (y,v), (z,a), (x,x), (b,x), (x,v), (a,z) } <br>
77G = (V, E)
78</tt></td></tr></table>
79</DIV>
80<BR CLEAR="ALL"><P></P>
81
82
83<P>
84<A HREF="#fig:directed-graph">Figure 1</A> gives a pictorial view of
85this graph. The edge <i>(x,x)</i> is called a <a
86name="def:self-loop"><I>self-loop</I></a>. Edges <i>(b,y)</i> and
87<i>(b,y)</i> are <I>parallel edges</I>, which are allowed in a <a
88name="def:multigraph"><I>multigraph</I></a> (but are normally not
89allowed in a directed or undirected graph).
90
91<P>
92
93<P></P>
94<DIV ALIGN="center"><A NAME="fig:directed-graph"></A>
95<TABLE>
96<CAPTION ALIGN="BOTTOM"><STRONG>Figure 1:</STRONG>
97Example of a directed graph.</CAPTION>
98<TR><TD><IMG SRC="./figs/digraph.gif" width="124" height="163"></TD></TR>
99</TABLE>
100</DIV><P></P>
101
102<P>
Next we have a similar graph, though this time it is undirected. <A
HREF="#fig:undirected-graph">Figure 2</A> gives the pictorial view.
Self-loops are not allowed in undirected graphs. This graph is the <a
name="def:undirected-version"><I>undirected version</i></a> of the
previous graph (minus the parallel edge <i>(b,y)</i>), meaning it has
the same vertices and the same edges with their directions removed.
Also the self-loop has been removed, and edges such as <i>(a,z)</i>
and <i>(z,a)</i> are collapsed into one edge. One can go the other
way, and make a <a name="def:directed-version"><I>directed
version</i></a> of an undirected graph by replacing each edge with two
edges, one pointing in each direction.
114
115<P>
116<BR>
117<DIV ALIGN="CENTER">
118<table><tr><td><tt>
119V = {v, b, x, z, a, y }<br>
120E = { (b,y), (y,v), (z,a), (b,x), (x,v) }<br>
121G = (V, E)
122</tt></td></tr></table>
123</DIV>
124<BR CLEAR="ALL"><P></P>
125
126<P>
127
128<P></P>
129<DIV ALIGN="CENTER"><A NAME="fig:undirected-graph"></A><A NAME="1524"></A>
130<TABLE>
131<CAPTION ALIGN="BOTTOM"><STRONG>Figure 2:</STRONG>
132Example of an undirected graph.</CAPTION>
133<TR><TD><IMG SRC="./figs/undigraph.gif" width="103" height="163"></TD></TR>
134</TABLE>
135</DIV><P></P>
136
137<P>
Now for some more graph terminology. If some edge <i>(u,v)</i> is in a
graph, then vertex <i>v</i> is <a
name="def:adjacent"><I>adjacent</I></a> to vertex <i>u</i>. In a
directed graph, edge <i>(u,v)</i> is an <a
name="def:out-edge"><I>out-edge</I></a> of vertex <i>u</i> and an <a
name="def:in-edge"><I>in-edge</I></a> of vertex <i>v</i>. In an
undirected graph edge <i>(u,v)</i> is <a
name="def:incident-on"><I>incident on</I></a> vertices <i>u</i> and
<i>v</i>.
147
148<P>
149In <A HREF="#fig:directed-graph">Figure 1</A>,
150 vertex <i>y</i> is adjacent to vertex <i>b</i> (but <i>b</i> is not
151 adjacent to <i>y</i>). The edge <i>(b,y)</i> is an out-edge of
152 <i>b</i> and an in-edge of <i>y</i>. In <A
153 HREF="#fig:undirected-graph">Figure 2</A>,
154 <i>y</i> is adjacent to <i>b</i> and vice-versa. The edge
155 <i>(y,b)</i> is incident on vertices <i>y</i> and <i>b</i>.
156
157<P>
158In a directed graph, the number of out-edges of a vertex is its <a
159name="def:out-degree"><I>out-degree</I></a> and the number of in-edges
160is its <a name="def:in-degree"><I>in-degree</I></a>. For an
161undirected graph, the number of edges incident to a vertex is its <a
162name="def:degree"><I>degree</I></a>. In <A
163HREF="#fig:directed-graph">Figure 1</A>, vertex <i>b</i> has an
164out-degree of 3 and an in-degree of zero. In <A
165HREF="#fig:undirected-graph">Figure 2</A>, vertex <i>b</i> simply has
166a degree of 2.
167
168<P>
169Now a <a name="def:path"><i>path</i></a> is a sequence of edges in a
170graph such that the target vertex of each edge is the source vertex of
171the next edge in the sequence. If there is a path starting at vertex
172<i>u</i> and ending at vertex <i>v</i> we say that <i>v</i> is <a
173name="def:reachable"><i>reachable</i></a> from <i>u</i>. A path is <a
174name="def:simple-path"><I>simple</I></a> if none of the vertices in
175the sequence are repeated. The path &lt;(b,x), (x,v)&gt; is simple,
176while the path &lt;(a,z), (z,a)&gt; is not. Also, the path &lt;(a,z),
177(z,a)&gt; is called a <a name="def:cycle"><I>cycle</I></a> because the
178first and last vertex in the path are the same. A graph with no cycles
179is <a name="def:acyclic"><I>acyclic</I></a>.
180
181<P>
182A <a name="def:planar-graph"><I>planar graph</I></a> is a graph that
183can be drawn on a plane without any of the edges crossing over each
184other. Such a drawing is called a <I>plane graph</I>. A <a
185name="def:face"><I>face</I></a> of a plane graph is a connected region
186of the plane surrounded by edges. An important property of planar
187graphs is that the number of faces, edges, and vertices are related
188through Euler's formula: <i>|F| - |E| + |V| = 2</i>. This means that a
189simple planar graph has at most <i>O(|V|)</i> edges.
190
191<P>
192
193<P>
194
195<H1>Graph Data Structures</H1>
196
197<P>
The primary property of a graph to consider when deciding which data
structure to use is <I>sparsity</I>, the number of edges relative to
the number of vertices in the graph. A graph where <i>E</i> is close
to <i>V<sup>2</sup></i> is a <I>dense</I> graph, whereas a graph
where <i>E = alpha V</i> and <i>alpha</i> is much smaller than
<i>V</i> is a <I>sparse</I> graph. For dense graphs, the
<I>adjacency-matrix representation</I> is usually the best choice,
whereas for sparse graphs the <I>adjacency-list representation</I> is
a better choice. The <I>edge-list representation</I> is also a
space-efficient choice for sparse graphs that is appropriate in some
situations.
209
210<P>
211
212<H2>Adjacency Matrix Representation</H2>
213
214<P>
An adjacency-matrix representation of a graph is a 2-dimensional <i>V
x V</i> array. Each element in the array <i>a<sub>uv</sub></i> stores
a Boolean value saying whether the edge <i>(u,v)</i> is in the graph.
<A HREF="#fig:adj-matrix">Figure 3</A> depicts an adjacency matrix for
the graph in <A HREF="#fig:directed-graph">Figure 1</A> (minus the
parallel edge <i>(b,y)</i>). The amount of space required to store
an adjacency matrix is <i>O(V<sup>2</sup>)</i>. Any edge can be
accessed, added, or removed in <i>O(1)</i> time. Adding or removing a
vertex requires reallocating and copying the whole graph, an
<i>O(V<sup>2</sup>)</i> operation. The <a
href="./adjacency_matrix.html"><tt>adjacency_matrix</tt></a> class
implements the BGL graph interface in terms of the adjacency-matrix
data structure.
228
229
230<P>
231
232<P></P>
233<DIV ALIGN="CENTER"><A NAME="fig:adj-matrix"></A><A NAME="1701"></A>
234<TABLE>
235<CAPTION ALIGN="BOTTOM"><STRONG>Figure 3:</STRONG>
236The Adjacency Matrix Graph Representation.</CAPTION>
237<TR><TD><IMG SRC="./figs/adj_matrix.gif" width="136" height="135"></TD></TR>
238</TABLE>
239</DIV><P></P>
240
241<P>
242
243<H2><A NAME="sec:adjacency-list-representation"></A>
244Adjacency List Representation
245</H2>
246
247<P>
An adjacency-list representation of a graph stores an out-edge
sequence for each vertex. For sparse graphs this saves space, since
only <i>O(V + E)</i> memory is required. In addition, the out-edges
for each vertex can be accessed more efficiently. Edge insertion is
<i>O(1)</i>, though accessing any given edge is <i>O(alpha)</i>, where
<i>alpha</i> is the sparsity factor of the graph (which is equal to
the maximum number of out-edges for any vertex in the graph). <A
HREF="#fig:adj-list">Figure 4</A> depicts an
adjacency-list representation of the graph in <A
HREF="#fig:directed-graph">Figure 1</A>. The
<a href="./adjacency_list.html"><TT>adjacency_list</TT></a> class is
an implementation of the adjacency-list representation.
260
261<P>
262
263<P></P>
264<DIV ALIGN="CENTER"><A NAME="fig:adj-list"></A><A NAME="1713"></A>
265<TABLE>
266<CAPTION ALIGN="BOTTOM"><STRONG>Figure 4:</STRONG>
267The Adjacency List Graph Representation.</CAPTION>
268<TR><TD><IMG SRC="./figs/adj_list.gif" width="108" height="122"></TD></TR>
269</TABLE>
270</DIV><P></P>
271
272<P>
273
274<H2>Edge List Representation</H2>
275
276<P>
An edge-list representation of a graph is simply a sequence of edges,
where each edge is represented as a pair of vertex IDs. The memory
required is only <i>O(E)</i>. Edge insertion is typically <i>O(1)</i>,
though accessing a particular edge is <i>O(E)</i> (not efficient). <A
HREF="#fig:edge-list">Figure 5</A> shows an
edge-list representation of the graph in <A
HREF="#fig:directed-graph">Figure 1</A>. The
<a href="./edge_list.html"><TT>edge_list</TT></a> adaptor class can be
used to create implementations of the edge-list representation.
286
287<P>
288
289<P></P>
290<DIV ALIGN="CENTER"><A NAME="fig:edge-list"></A><A NAME="1724"></A>
291<TABLE>
292<CAPTION ALIGN="BOTTOM"><STRONG>Figure 5:</STRONG>
293The Edge List Graph Representation.</CAPTION>
294<TR><TD><IMG SRC="./figs/edge_list.gif" width="322" height="22"></TD></TR>
295</TABLE>
296</DIV><P></P>
297
298
299<P>
300
301<H1>Graph Algorithms</H1>
302
303<P>
304
305<H2>Graph Search Algorithms</H2>
306
307<P>
<I>Tree edges</I> are edges in the search tree (or forest) constructed
(implicitly or explicitly) by running a graph search algorithm over a
graph. An edge <i>(u,v)</i> is a tree edge if <i>v</i> was first
discovered while exploring (corresponding to the <a
href="./visitor_concepts.html">visitor</a> <TT>explore()</TT> method)
edge <i>(u,v)</i>. <I>Back edges</I> connect vertices to their
ancestors in a search tree, so for edge <i>(u,v)</i> the vertex
<i>v</i> must be an ancestor of vertex <i>u</i>. Self-loops are
considered to be back edges. <I>Forward edges</I> are non-tree edges
<i>(u,v)</i> that connect a vertex <i>u</i> to a descendant <i>v</i>
in a search tree. <I>Cross edges</I> are edges that do not fall into
the above three categories.
320
321<P>
322
323<H2><A NAME="sec:bfs-algorithm"></A>
324Breadth-First Search
325</H2>
326
327<P>
328Breadth-first search is a traversal through a graph that touches all
329of the vertices reachable from a particular source vertex. In
330addition, the order of the traversal is such that the algorithm will
331explore all of the neighbors of a vertex before proceeding on to the
332neighbors of its neighbors. One way to think of breadth-first search
333is that it expands like a wave emanating from a stone dropped into a
334pool of water. Vertices in the same ``wave'' are the same distance
335from the source vertex. A vertex is <I>discovered</I> the first time
336it is encountered by the algorithm. A vertex is <I>finished</I> after
337all of its neighbors are explored. Here's an example to help make this
338clear. A graph is shown in <A
339HREF="#fig:bfs-example">Figure 6</A> and the
340BFS discovery and finish order for the vertices is shown below.
341
342<P>
343
344<P></P>
345<DIV ALIGN="CENTER"><A NAME="fig:bfs-example"></A><A NAME="1826"></A>
346<TABLE>
347<CAPTION ALIGN="BOTTOM"><STRONG>Figure 6:</STRONG>
348Breadth-first search spreading through a graph.</CAPTION>
349<TR><TD><IMG SRC="./figs/bfs_example.gif" width="242" height="143"></TD></TR>
350</TABLE>
351</DIV><P></P>
352
353<P>
354<PRE>
355 order of discovery: s r w v t x u y
356 order of finish: s r w v t x u y
357</PRE>
358
359<P>
We start at vertex <i>s</i>, and first visit <i>r</i> and <i>w</i> (the two
neighbors of <i>s</i>). Once both neighbors of <i>s</i> are visited, we
visit the neighbor of <i>r</i> (vertex <i>v</i>), then the neighbors of
<i>w</i> (the discovery order between <i>r</i> and <i>w</i> does not
matter), which are <i>t</i> and <i>x</i>. Finally we visit the neighbors
of <i>t</i> and <i>x</i>, which are <i>u</i> and <i>y</i>.
366
367<P>
368For the algorithm to keep track of where it is in the graph, and which
369vertex to visit next, BFS needs to color the vertices (see the section
370on <a href="./using_property_maps.html">Property Maps</a>
371for more details about attaching properties to graphs). The place to
372put the color can either be inside the graph, or it can be passed into
373the algorithm as an argument.
374
375<P>
376
377<H2><A NAME="sec:dfs-algorithm"></A>
378Depth-First Search
379</H2>
380
381<P>
382A depth first search (DFS) visits all the vertices in a graph. When
383choosing which edge to explore next, this algorithm always chooses to
384go ``deeper'' into the graph. That is, it will pick the next adjacent
385unvisited vertex until reaching a vertex that has no unvisited
386adjacent vertices. The algorithm will then backtrack to the previous
387vertex and continue along any as-yet unexplored edges from that
388vertex. After DFS has visited all the reachable vertices from a
389particular source vertex, it chooses one of the remaining undiscovered
390vertices and continues the search. This process creates a set of
391<I>depth-first trees</I> which together form the <I>depth-first
392 forest</I>. A depth-first search categorizes the edges in the graph
393into three categories: tree-edges, back-edges, and forward or
394cross-edges (it does not specify which). There are typically many
395valid depth-first forests for a given graph, and therefore many
396different (and equally valid) ways to categorize the edges.
397
398<P>
One interesting property of depth-first search is that the discover
and finish times for each vertex form a parenthesis structure. If we
use an open parenthesis when a vertex is discovered, and a
close parenthesis when a vertex is finished, then the result is a
properly nested set of parentheses. <A
HREF="#fig:dfs-example">Figure 7</A> shows
DFS applied to an undirected graph, with the edges labeled in the
order they were explored. Below we list the vertices of the graph
ordered by discover and finish time, and show the parenthesis
structure. DFS is used as the kernel for several other graph
algorithms, including topological sort and two of the connected
component algorithms. It can also be used to detect cycles (see the <A
HREF="file_dependency_example.html#sec:cycles">Cyclic Dependencies</a>
section of the File Dependency Example).
412
413<P>
414
415<P></P>
416<DIV ALIGN="CENTER"><A NAME="fig:dfs-example"></A><A NAME="1841"></A>
417<TABLE>
418<CAPTION ALIGN="BOTTOM"><STRONG>Figure 7:</STRONG>
419Depth-first search on an undirected graph.</CAPTION>
420<TR><TD><IMG SRC="./figs/dfs.gif" width="141" height="204"></TD></TR>
421</TABLE>
422</DIV><P></P>
423
424<P>
<PRE>
 order of discovery: a b e d c f g h i
 order of finish: d f c e b a i h g
 parenthesis: (a (b (e (d d) (c (f f) c) e) b) a) (g (h (i i) h) g)
</PRE>
430
431<P>
432
433<H2><a name="sec:minimum-spanning-tree">Minimum Spanning Tree Problem</a></H2>
434
435<P>
436The <I>minimum-spanning-tree problem</I> is defined as follows: find
437an acyclic subset <i>T</i> of <i>E</i> that connects all of the vertices
438in the graph and whose total weight is minimized, where the
439total weight is given by
440<P></P>
441<DIV ALIGN="left">
442<i>w(T)</i> = sum of <i>w(u,v)</i> over all <i>(u,v)</i> in <i>T</i>,
443where <i>w(u,v)</i> is the weight on the edge <i>(u,v)</i>
444</DIV>
445<BR CLEAR="ALL">
446<P></P>
<i>T</i> is called a <I>minimum spanning tree</I>.
448
449<!--
450<P>
451Kruskal's algorithm&nbsp;[<A
452 HREF="bibliography.html#kruskal56">18</A>]
453for solving the minimum spanning tree problem. This is a greedy
454algorithm to calculate the minimum spanning tree for an undirected
455graph with weighted edges.
456
457<P>
458This is Prim's algorithm&nbsp;[<A
459 HREF="bibliography.html#prim57:_short">25</A>] for solving the
460 minimum spanning tree problem for an undirected graph with weighted
461 edges. The implementation is simply a call to <a
462 href="./dijkstra_shortest_paths.html"><TT>dijkstra_shortest_paths()</TT></a>
463 with the appropriate choice of comparison and combine functors.
464-->
465
466<P>
467
468<H2><a name="sec:shortest-paths-algorithms">Shortest-Paths Algorithms</a></H2>
469
470<P>
471One of the classic problems in graph theory is to find the shortest
472path between two vertices in a graph. Formally, a <I>path</I> is a
473sequence of vertices <i>&lt;v<sub>0</sub>,v<sub>1</sub>,...,v<sub>k</sub>&gt;</i>
474in a graph <i>G = (V, E)</i> such that each vertex is connected to the
475next vertex in the sequence (the edges
476<i>(v<sub>i</sub>,v<sub>i+1</sub>)</i> for <i>i=0,1,...,k-1</i> are in
477the edge set <i>E</i>). In the shortest-path problem, each edge is
478given a real-valued weight. We can therefore talk about the <I>weight
479of a path</I><BR>
480
481<p></p>
482<DIV ALIGN="left">
483<i>w(p) = sum from i=1..k of w(v<sub>i-1</sub>,v<sub>i</sub>)</i>
484</DIV>
485<BR CLEAR="ALL"><P></P>
486
487The <I>shortest path weight</I> from vertex <i>u</i> to <i>v</i> is then
488
489<BR>
490<p></p>
491<DIV ALIGN="left">
492<i>delta (u,v) = min { w(p) : u --> v }</i> if there is a path from
493<i>u</i> to <i>v</i> <br>
494<i>delta (u,v) = infinity</i> otherwise.
495</DIV>
496<BR CLEAR="ALL"><P></P>
497
A <I>shortest path</I> is any path whose path weight is equal to the
<I>shortest path weight</I>.
500
501<P>
502There are several variants of the shortest path problem. Above we
503defined the <I>single-pair</I> problem, but there is also the
504<I>single-source problem</I> (all shortest paths from one vertex to
505every other vertex in the graph), the equivalent
506<I>single-destination problem</I>, and the <I>all-pairs problem</I>.
507It turns out that there are no algorithms for solving the single-pair
508problem that are asymptotically faster than algorithms that solve the
509single-source problem.
510
511<P>
A <I>shortest-paths tree</I> rooted at vertex <i>s</i> in graph
<i>G=(V,E)</i> is a directed subgraph <i>G'=(V',E')</i> where <i>V'</i>
is a subset of <i>V</i> and <i>E'</i> is a subset of <i>E</i>, <i>V'</i>
is the set of vertices reachable from <i>s</i>, <i>G'</i> forms a rooted
tree with root <i>s</i>, and for all <i>v</i> in <i>V'</i> the unique
simple path from <i>s</i> to <i>v</i> in <i>G'</i> is a shortest path
from <i>s</i> to <i>v</i> in <i>G</i>. The result of a single-source
algorithm is a shortest-paths tree.
519
520<P>
521
<H2><a name="sec:network-flow-algorithms">Network Flow Algorithms</a></H2>
523
524<p>
A flow network is a directed graph <i>G=(V,E)</i> with a
<b><i>source</i></b> vertex <i>s</i> and a <b><i>sink</i></b> vertex
<i>t</i>. Each edge has a positive real-valued <b><i>capacity</i></b>
function <i>c</i>, and there is a <b><i>flow</i></b> function <i>f</i>
defined over every vertex pair. The flow function must satisfy three
constraints:
531
532<p>
533<i>f(u,v) <= c(u,v) for all (u,v) in V x V</i> (Capacity constraint) <br>
534<i>f(u,v) = - f(v,u) for all (u,v) in V x V</i> (Skew symmetry)<br>
535<i>sum<sub>v in V</sub> f(u,v) = 0 for all u in V - {s,t}</i> (Flow conservation)
536
537<p>
538The <b><i>flow</i></b> of the network is the net flow entering the
539sink vertex <i>t</i> (which is equal to the net flow leaving the
540source vertex <i>s</i>).<p>
541
542<i>|f| = sum<sub>u in V</sub> f(u,t) = sum<sub>v in V</sub> f(s,v)</i>
543
544<p>
545The <b><i>residual capacity</i></b> of an edge is <i>r(u,v) = c(u,v) -
546f(u,v)</i>. The edges with <i>r(u,v) > 0</i> are residual edges
547<i>E<sub>f</sub></i> which induce the residual graph <i>G<sub>f</sub>
548= (V, E<sub>f</sub>)</i>. An edge with <i>r(u,v) = 0</i> is
549<b><i>saturated</i></b>.
550
551<p>
552The <b><i>maximum flow problem</i></b> is to determine the maximum
553possible value for <i>|f|</i> and the corresponding flow values for
554every vertex pair in the graph.
555<p>
556The <b><i>minimum cost maximum flow problem</i></b> is to determine the maximum flow which minimizes <i> sum<sub>(u,v) in E</sub>
557cost(u,v) * f(u,v) </i>.
558
559<p>
A flow network is shown in <a href="#fig:max-flow">Figure
8</a>. Vertex <i>A</i> is the source vertex and <i>H</i> is the sink
vertex.
563
564<P></P>
565<DIV ALIGN="center"><A NAME="fig:max-flow"></A>
566<TABLE>
567<CAPTION ALIGN="BOTTOM"><STRONG>Figure 8:</STRONG> A Maximum Flow
568Network.<br> Edges are labeled with the flow and capacity
569values.</CAPTION>
570<TR><TD><IMG SRC="./figs/max-flow.gif" width="578" height="240"></TD></TR>
571</TABLE>
572</DIV><P></P>
573
574<p>
575There is a long history of algorithms for solving the maximum flow
576problem, with the first algorithm due to <a
577href="./bibliography.html#ford56:_maxim">Ford and Fulkerson</a>. The
578best general purpose algorithm to date is the push-relabel algorithm
579of <a
580href="./bibliography.html#goldberg85:_new_max_flow_algor">Goldberg</a>
581which is based on the notion of a <b><i>preflow</i></b> introduced by
582<a href="./bibliography.html#karzanov74:_deter">Karzanov</a>.
583
584
585<h2><a name="sec:min-cut-algorithms">Minimum Cut Algorithms</a></h2>
586
587<h3>Undirected Graphs</h3>
<p>Given an undirected graph <i>G</i> = (<i>V</i>, <i>E</i>), a <em>cut</em> of <i>G</i> is a partition of the vertices into two non-empty sets <i>X</i> and <img src="stoer_wagner_imgs/6e4.gif" alt="\overline{X} = V - X" style="vertical-align: middle; padding-bottom: 2px">. The <i>weight</i> of a cut is defined as the number of edges between sets <i>X</i> and <img src="stoer_wagner_imgs/f79.gif" alt="\overline{X}" style="vertical-align: middle; padding-bottom: 3px"> if <i>G</i> is unweighted, or the sum of the weights of all edges between sets <i>X</i> and <img src="stoer_wagner_imgs/f79.gif" alt="\overline{X}" style="vertical-align: middle; padding-bottom: 3px"> if <i>G</i> is weighted (each edge has an associated, non-negative weight).
589
590<p>When the weight of a cut <img src="stoer_wagner_imgs/8b7.gif" alt="C = \{X, \overline{X}\}" style="vertical-align: middle"> is minimal for a graph <i>G</i> (that is, no other cut of <i>G</i> has a lesser weight), then the cut is known as a <em>minimum cut</em> or a <em>min-cut</em>. For example, given this weighted graph:
591
592<p><a href="stoer_wagner_imgs/stoer_wagner-example.dot"><img src="stoer_wagner_imgs/stoer_wagner-example.gif"></a>
593
594<p>The cut {{0, 6}, {3, 2, 7, 1, 5, 4}} has weight 13:
595
596<p><a href="stoer_wagner_imgs/stoer_wagner-example-c1.dot"><img src="stoer_wagner_imgs/stoer_wagner-example-c1.gif"></a>
597
598<p>And the min-cut is {{0, 1, 4, 5}, {2, 3, 6, 7}} (weight 4):
599
600<p><a href="stoer_wagner_imgs/stoer_wagner-example-min-cut.dot"><img src="stoer_wagner_imgs/stoer_wagner-example-min-cut.gif"></a>
601
602<p>Unlike this example, a graph will sometimes have multiple min-cuts, all of equal weight. A minimum cut algorithm determines one of them as well as the min-cut weight.
603
604<h3>Directed Graphs</h3>
605
<p>Given a directed graph <i>G</i> = (<i>V</i>, <i>E</i>), a <em>cut</em> of <i>G</i> is a partition of the vertices into two non-empty sets <i>S</i> and <i>T</i>, where <i>S</i> is known as the set of <em>source vertices</em> and <i>T</i> is known as the set of <em>sink vertices</em>. The <em>capacity</em> of a cut <i>C</i> = (<i>S</i>, <i>T</i>) is the number of edges from a vertex in <i>S</i> to a vertex in <i>T</i> if <i>G</i> is unweighted, or the sum of weights of edges from a vertex in <i>S</i> to a vertex in <i>T</i> if <i>G</i> is weighted.
607
608<p>When the capacity of a cut <i>C</i> = (<i>S</i>, <i>T</i>) of a directed graph is minimal (that is, no other cut of <i>G</i> has lesser capacity), then <i>C</i> is known as a <em>minimum cut</em> or <em>min-cut</em>.
609
610<p>For example, given this directed graph:
611
612<p><a href="stoer_wagner_imgs/digraph1.dot"><img src="stoer_wagner_imgs/digraph1.gif"></a>
613
614<p>A min-cut is:
615
616<p><a href="stoer_wagner_imgs/digraph1-min-cut.dot"><img src="stoer_wagner_imgs/digraph1-min-cut.gif"></a>
617
618<p>where <i>S</i> = {0}, <i>T</i> = {1, 2, 3}, and the min-cut capacity is 1.
619
620<br>
621<HR>
622<TABLE>
623<TR valign=top>
624<TD nowrap>Copyright &copy; 2000-2001</TD><TD>
625<A HREF="http://www.boost.org/people/jeremy_siek.htm">Jeremy Siek</A>, Indiana University (<A HREF="mailto:jsiek@osl.iu.edu">jsiek@osl.iu.edu</A>)
626</TD></TR></TABLE>
627
628</BODY>
629</HTML>