# -*- coding: latin-1 -*-

"""Heap queue algorithm (a.k.a. priority queue).

Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for
all k, counting elements from 0.  For the sake of comparison,
non-existing elements are considered to be infinite.  The interesting
property of a heap is that a[0] is always its smallest element.

Usage:

heap = []            # creates an empty heap
heappush(heap, item) # pushes a new item on the heap
item = heappop(heap) # pops the smallest item from the heap
item = heap[0]       # smallest item on the heap without popping it
heapify(x)           # transforms list into a heap, in-place, in linear time
item = heapreplace(heap, item) # pops and returns smallest item, and adds
                               # new item; the heap size is unchanged

Our API differs from textbook heap algorithms as follows:

- We use 0-based indexing.  This makes the relationship between the
  index for a node and the indexes for its children slightly less
  obvious, but is more suitable since Python uses 0-based indexing.

- Our heappop() method returns the smallest item, not the largest.

These two make it possible to view the heap as a regular Python list
without surprises: heap[0] is the smallest item, and heap.sort()
maintains the heap invariant!
"""

# Original code by Kevin O'Connor, augmented by Tim Peters and Raymond Hettinger

__about__ = """Heap queues

[explanation by François Pinard]

Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for
all k, counting elements from 0.  For the sake of comparison,
non-existing elements are considered to be infinite.  The interesting
property of a heap is that a[0] is always its smallest element.

The strange invariant above is meant to be an efficient memory
representation for a tournament.  The numbers below are `k', not a[k]:

                                   0

                  1                                 2

          3               4                5               6

      7       8       9       10      11      12      13      14

    15 16   17 18   19 20   21 22   23 24   25 26   27 28   29 30


In the tree above, each cell `k' is topping `2*k+1' and `2*k+2'.  In
a usual binary tournament we see in sports, each cell is the winner
over the two cells it tops, and we can trace the winner down the tree
to see all opponents s/he had.  However, in many computer applications
of such tournaments, we do not need to trace the history of a winner.
To be more memory efficient, when a winner is promoted, we try to
replace it by something else at a lower level, and the rule becomes
that a cell and the two cells it tops contain three different items,
but the top cell "wins" over the two topped cells.

If this heap invariant is protected at all times, index 0 is clearly
the overall winner.  The simplest algorithmic way to remove it and
find the "next" winner is to move some loser (let's say cell 30 in the
diagram above) into the 0 position, and then percolate this new 0 down
the tree, exchanging values, until the invariant is re-established.
This is clearly logarithmic on the total number of items in the tree.
By iterating over all items, you get an O(n log n) sort.

A nice feature of this sort is that you can efficiently insert new
items while the sort is going on, provided that the inserted items are
not "better" than the last 0'th element you extracted.  This is
especially useful in simulation contexts, where the tree holds all
incoming events, and the "win" condition means the smallest scheduled
time.  When an event schedules other events for execution, they are
scheduled into the future, so they can easily go into the heap.  So, a
heap is a good structure for implementing schedulers (this is what I
used for my MIDI sequencer :-).

Various structures for implementing schedulers have been extensively
studied, and heaps are good for this, as they are reasonably speedy,
the speed is almost constant, and the worst case is not much different
from the average case.  However, there are other representations which
are more efficient overall, yet the worst cases might be terrible.

Heaps are also very useful in big disk sorts.  You most probably all
know that a big sort implies producing "runs" (which are pre-sorted
sequences, whose size is usually related to the amount of CPU memory),
followed by merging passes for these runs, and the merging is often
very cleverly organised [1].  It is very important that the initial
sort produces the longest runs possible.  Tournaments are a good way
to achieve that.  If, using all the memory available to hold a
tournament, you replace and percolate items that happen to fit the
current run, you'll produce runs which are twice the size of the
memory for random input, and much longer for input that is already
fuzzily ordered.

Moreover, if you output the 0'th item on disk and get an input which
may not fit in the current tournament (because the value "wins" over
the last output value), it cannot fit in the heap, so the size of the
heap decreases.  The freed memory could be cleverly reused immediately
for progressively building a second heap, which grows at exactly the
same rate the first heap is melting.  When the first heap completely
vanishes, you switch heaps and start a new run.  Clever and quite
effective!

In a word, heaps are useful memory structures to know.  I use them in
a few applications, and I think it is good to keep a `heap' module
around. :-)

--------------------
[1] The disk balancing algorithms which are current, nowadays, are
more annoying than clever, and this is a consequence of the seeking
capabilities of the disks.  On devices which cannot seek, like big
tape drives, the story was quite different, and one had to be very
clever to ensure (far in advance) that each tape movement would be the
most effective possible (that is, would best contribute to
"progressing" the merge).  Some tapes were even able to read
backwards, and this was also used to avoid the rewinding time.
Believe me, real good tape sorts were quite spectacular to watch!
Throughout the ages, sorting has always been a Great Art! :-)
"""

__all__ = ['heappush', 'heappop', 'heapify', 'heapreplace', 'merge',
           'nlargest', 'nsmallest', 'heappushpop']

from itertools import islice, repeat, count, imap, izip, tee, chain
from operator import itemgetter
import bisect

def cmp_lt(x, y):
    # Use __lt__ if available; otherwise, try __le__.
    # In Py3.x, only __lt__ will be called.
    return (x < y) if hasattr(x, '__lt__') else (not y <= x)
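
# An illustrative sketch (not part of the module) of why cmp_lt() falls
# back on __le__: in Py2.x a class may define __le__ without __lt__, and
# hasattr() then routes the comparison through "not y <= x".  The Interval
# class below is a hypothetical example.
def _demo_cmp_lt():
    class Interval(object):
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __le__(self, other):
            return self.lo <= other.lo
    a, b = Interval(1, 2), Interval(3, 4)
    # No __lt__ is defined, so cmp_lt() computes "not (b <= a)" instead.
    assert cmp_lt(a, b)
    assert not cmp_lt(b, a)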

def heappush(heap, item):
    """Push item onto heap, maintaining the heap invariant."""
    heap.append(item)
    _siftdown(heap, 0, len(heap)-1)

def heappop(heap):
    """Pop the smallest item off the heap, maintaining the heap invariant."""
    lastelt = heap.pop()    # raises appropriate IndexError if heap is empty
    if heap:
        returnitem = heap[0]
        heap[0] = lastelt
        _siftup(heap, 0)
    else:
        returnitem = lastelt
    return returnitem

def heapreplace(heap, item):
    """Pop and return the current smallest value, and add the new item.

    This is more efficient than heappop() followed by heappush(), and can be
    more appropriate when using a fixed-size heap.  Note that the value
    returned may be larger than item!  That constrains reasonable uses of
    this routine unless written as part of a conditional replacement:

        if item > heap[0]:
            item = heapreplace(heap, item)
    """
    returnitem = heap[0]    # raises appropriate IndexError if heap is empty
    heap[0] = item
    _siftup(heap, 0)
    return returnitem
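
# A sketch of the conditional-replacement pattern from the docstring above
# (illustrative only): keep the 3 largest values seen so far in a fixed-size
# heap whose root is the smallest of the keepers.  heappushpop(), defined
# just below, expresses the same guarded replacement in a single call.
def _demo_fixed_size_heap():
    keep = [0, 1, 2]                # already a heap: the 3 largest seen
    for item in [5, -1, 7]:
        if item > keep[0]:
            heapreplace(keep, item)
    assert sorted(keep) == [2, 5, 7]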

def heappushpop(heap, item):
    """Fast version of a heappush followed by a heappop."""
    if heap and cmp_lt(heap[0], item):
        item, heap[0] = heap[0], item
        _siftup(heap, 0)
    return item

def heapify(x):
    """Transform list into a heap, in-place, in O(len(x)) time."""
    n = len(x)
    # Transform bottom-up.  The largest index there's any point to looking at
    # is the largest with a child index in-range, so must have 2*i + 1 < n,
    # or i < (n-1)/2.  If n is even = 2*j, this is (2*j-1)/2 = j - 1/2, so
    # j-1 is the largest, which is n//2 - 1.  If n is odd = 2*j+1, this is
    # (2*j+1-1)/2 = j, so j-1 is the largest, and that's again n//2 - 1.
    for i in reversed(xrange(n//2)):
        _siftup(x, i)
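
# An illustrative check (not part of the module) of the invariant heapify()
# establishes: afterwards a[k] <= a[2*k+1] and a[k] <= a[2*k+2] wherever
# those child positions exist.
def _demo_heapify_invariant():
    a = [9, 4, 7, 1, 0, 8, 2]
    heapify(a)
    n = len(a)
    for k in xrange(n):
        for child in (2*k + 1, 2*k + 2):
            assert child >= n or a[k] <= a[child]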

def nlargest(n, iterable):
    """Find the n largest elements in a dataset.

    Equivalent to:  sorted(iterable, reverse=True)[:n]
    """
    it = iter(iterable)
    result = list(islice(it, n))
    if not result:
        return result
    heapify(result)
    _heappushpop = heappushpop
    for elem in it:
        _heappushpop(result, elem)
    result.sort(reverse=True)
    return result
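
# Illustrative use of nlargest() (not part of the module): the result
# matches sorted(data, reverse=True)[:n], but only n items are held in the
# heap while the input is scanned.
def _demo_nlargest():
    data = [3, 1, 4, 1, 5, 9, 2, 6]
    assert nlargest(3, data) == [9, 6, 5]
    assert nlargest(3, data) == sorted(data, reverse=True)[:3]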

def nsmallest(n, iterable):
    """Find the n smallest elements in a dataset.

    Equivalent to:  sorted(iterable)[:n]
    """
    if hasattr(iterable, '__len__') and n * 10 <= len(iterable):
        # For smaller values of n, the bisect method is faster than a minheap.
        # It is also memory efficient, consuming only n elements of space.
        it = iter(iterable)
        result = sorted(islice(it, 0, n))
        if not result:
            return result
        insort = bisect.insort
        pop = result.pop
        los = result[-1]    # los --> Largest of the nsmallest
        for elem in it:
            if cmp_lt(elem, los):
                insort(result, elem)
                pop()
                los = result[-1]
        return result
    # An alternative approach manifests the whole iterable in memory but
    # saves comparisons by heapifying all at once.  Also, saves time
    # over bisect.insort() which has O(n) data movement time for every
    # insertion.  Finding the n smallest of an m length iterable requires
    # O(m) + O(n log m) comparisons.
    h = list(iterable)
    heapify(h)
    return map(heappop, repeat(h, min(n, len(h))))
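
# Illustrative use of nsmallest() (not part of the module): whichever
# strategy is chosen above, the result always matches sorted(data)[:n].
def _demo_nsmallest():
    data = range(100, 0, -1)        # the list 100, 99, ..., 1
    assert nsmallest(3, data) == [1, 2, 3]
    assert nsmallest(60, data) == sorted(data)[:60]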

# 'heap' is a heap at all indices >= startpos, except possibly for pos.  pos
# is the index of a leaf with a possibly out-of-order value.  Restore the
# heap invariant.
def _siftdown(heap, startpos, pos):
    newitem = heap[pos]
    # Follow the path to the root, moving parents down until finding a place
    # newitem fits.
    while pos > startpos:
        parentpos = (pos - 1) >> 1
        parent = heap[parentpos]
        if cmp_lt(newitem, parent):
            heap[pos] = parent
            pos = parentpos
            continue
        break
    heap[pos] = newitem

# The subtrees rooted at the children of heap index pos are already heaps,
# and we want to make the subtree rooted at pos a heap too.  We do this by
# bubbling the smaller child of pos up (and so on with that child's children,
# etc) until hitting a leaf, then using _siftdown to move the oddball
# originally at index pos into place.
#
# We *could* break out of the loop as soon as we find a pos where newitem <=
# both its children, but it turns out that's not a good idea, even though
# many books write the algorithm that way.  During a heap pop, the last array
# element is sifted in, and that tends to be large, so that comparing it
# against values starting from the root usually doesn't pay (= usually doesn't
# get us out of the loop early).  See Knuth, Volume 3, where this is
# explained and quantified in an exercise.
#
# Cutting the # of comparisons is important, since these routines have no
# way to extract "the priority" from an array element, so that intelligence
# is likely to be hiding in custom __cmp__ methods, or in array elements
# storing (priority, record) tuples.  Comparisons are thus potentially
# expensive.
#
# On random arrays of length 1000, making this change cut the number of
# comparisons made by heapify() a little, and those made by exhaustive
# heappop() a lot, in accord with theory.  Here are typical results from 3
# runs (3 just to demonstrate how small the variance is):
#
# Compares needed by heapify     Compares needed by 1000 heappops
# --------------------------     --------------------------------
#         1837 cut to 1663               14996 cut to 8680
#         1855 cut to 1659               14966 cut to 8678
#         1847 cut to 1660               15024 cut to 8703
#
# Building the heap by using heappush() 1000 times instead required
# 2198, 2148, and 2219 compares:  heapify() is more efficient, when
# you can use it.
#
# The total compares needed by list.sort() on the same lists were 8627,
# 8627, and 8632 (this should be compared to the sum of heapify() and
# heappop() compares):  list.sort() is (unsurprisingly!) more efficient
# for sorting.

def _siftup(heap, pos):
    endpos = len(heap)
    startpos = pos
    newitem = heap[pos]
    # Bubble up the smaller child until hitting a leaf.
    childpos = 2*pos + 1    # leftmost child position
    while childpos < endpos:
        # Set childpos to index of smaller child.
        rightpos = childpos + 1
        if rightpos < endpos and not cmp_lt(heap[childpos], heap[rightpos]):
            childpos = rightpos
        # Move the smaller child up.
        heap[pos] = heap[childpos]
        pos = childpos
        childpos = 2*pos + 1
    # The leaf at pos is empty now.  Put newitem there, and bubble it up
    # to its final resting place (by sifting its parents down).
    heap[pos] = newitem
    _siftdown(heap, startpos, pos)
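
# A sketch of the (priority, record) pattern mentioned in the comments above
# (illustrative only): tuples compare on their first element first, so the
# heap orders records by priority without any custom comparison method.
def _demo_priority_records():
    tasks = []
    heappush(tasks, (2, 'write body'))
    heappush(tasks, (1, 'write header'))
    heappush(tasks, (3, 'write footer'))
    assert heappop(tasks) == (1, 'write header')
    # Note: if two priorities tie, the tuples fall back to comparing the
    # records themselves, so records should be comparable (or made unique).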

# If available, use C implementation
try:
    from _heapq import *
except ImportError:
    pass

def merge(*iterables):
    '''Merge multiple sorted inputs into a single sorted output.

    Similar to sorted(itertools.chain(*iterables)) but returns a generator,
    does not pull the data into memory all at once, and assumes that each of
    the input streams is already sorted (smallest to largest).

    >>> list(merge([1,3,5,7], [0,2,4,8], [5,10,15,20], [], [25]))
    [0, 1, 2, 3, 4, 5, 5, 7, 8, 10, 15, 20, 25]

    '''
    _heappop, _heapreplace, _StopIteration = heappop, heapreplace, StopIteration

    h = []
    h_append = h.append
    for itnum, it in enumerate(map(iter, iterables)):
        try:
            next = it.next
            h_append([next(), itnum, next])
        except _StopIteration:
            pass
    heapify(h)

    while 1:
        try:
            while 1:
                v, itnum, next = s = h[0]   # raises IndexError when h is empty
                yield v
                s[0] = next()               # raises StopIteration when exhausted
                _heapreplace(h, s)          # restore heap condition
        except _StopIteration:
            _heappop(h)                     # remove empty iterator
        except IndexError:
            return
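
# Illustrative use of merge() (not part of the module): the inputs are
# consumed lazily, one item per input at a time, so long sorted streams can
# be merged without materialising them.
def _demo_merge():
    evens = iter([0, 2, 4, 6])
    odds = iter([1, 3, 5])
    assert list(merge(evens, odds)) == [0, 1, 2, 3, 4, 5, 6]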

# Extend the implementations of nsmallest and nlargest to use a key= argument
_nsmallest = nsmallest
def nsmallest(n, iterable, key=None):
    """Find the n smallest elements in a dataset.

    Equivalent to:  sorted(iterable, key=key)[:n]
    """
    # Short-cut for n==1 is to use min() when len(iterable)>0
    if n == 1:
        it = iter(iterable)
        head = list(islice(it, 1))
        if not head:
            return []
        if key is None:
            return [min(chain(head, it))]
        return [min(chain(head, it), key=key)]

    # When n>=size, it's faster to use sorted()
    try:
        size = len(iterable)
    except (TypeError, AttributeError):
        pass
    else:
        if n >= size:
            return sorted(iterable, key=key)[:n]

    # When key is None, use a simpler decoration: pairing each value with a
    # unique, increasing index breaks ties without comparing further.
    if key is None:
        it = izip(iterable, count())                        # decorate
        result = _nsmallest(n, it)
        return map(itemgetter(0), result)                   # undecorate

    # General case, slowest method: decorate with the computed key plus a
    # unique index, so the underlying records are never compared directly.
    in1, in2 = tee(iterable)
    it = izip(imap(key, in1), count(), in2)                 # decorate
    result = _nsmallest(n, it)
    return map(itemgetter(2), result)                       # undecorate

_nlargest = nlargest
def nlargest(n, iterable, key=None):
    """Find the n largest elements in a dataset.

    Equivalent to:  sorted(iterable, key=key, reverse=True)[:n]
    """

    # Short-cut for n==1 is to use max() when len(iterable)>0
    if n == 1:
        it = iter(iterable)
        head = list(islice(it, 1))
        if not head:
            return []
        if key is None:
            return [max(chain(head, it))]
        return [max(chain(head, it), key=key)]

    # When n>=size, it's faster to use sorted()
    try:
        size = len(iterable)
    except (TypeError, AttributeError):
        pass
    else:
        if n >= size:
            return sorted(iterable, key=key, reverse=True)[:n]

    # When key is None, use a simpler decoration.  The decreasing count(0, -1)
    # index breaks ties in favor of the earlier element, as sorted() would.
    if key is None:
        it = izip(iterable, count(0, -1))                   # decorate
        result = _nlargest(n, it)
        return map(itemgetter(0), result)                   # undecorate

    # General case, slowest method: the decreasing index again breaks ties
    # and keeps the underlying records from ever being compared directly.
    in1, in2 = tee(iterable)
    it = izip(imap(key, in1), count(0, -1), in2)            # decorate
    result = _nlargest(n, it)
    return map(itemgetter(2), result)                       # undecorate
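
# Illustrative use of the key= argument added above (not part of the
# module): the decorate/undecorate wrappers let nsmallest()/nlargest() rank
# records by a computed key, here the second field of each pair.
def _demo_key_argument():
    portfolio = [('shares', 75), ('bonds', 30), ('cash', 10)]
    assert nsmallest(2, portfolio, key=itemgetter(1)) == [('cash', 10),
                                                          ('bonds', 30)]
    assert nlargest(1, portfolio, key=itemgetter(1)) == [('shares', 75)]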

if __name__ == "__main__":
    # Simple sanity test
    heap = []
    data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]
    for item in data:
        heappush(heap, item)
    sort = []
    while heap:
        sort.append(heappop(heap))
    print sort

    import doctest
    doctest.testmod()