Data structure with C 10CS35 unit-7
UNIT - 7 PRIORITY QUEUES
7.1 Single- and Double-Ended Priority Queues
Need for priority queue:
In a multi-user environment, the operating system scheduler must decide which of several processes
to run, each only for a fixed period of time.
For that we can use a simple QUEUE, where jobs are initially placed at the end of the queue.
The scheduler will repeatedly take the first job on the queue, run it until either it finishes or its time
limit is up, and place it back at the end of the queue if it doesn't finish.
This strategy is generally not appropriate, because very short jobs can still take a long time to
complete owing to the wait involved before they run.
Generally, it is important that short jobs finish as fast as possible, so these jobs should have
precedence over jobs that have already been running.
Furthermore, some jobs that are not short are still very important and should also have precedence.
This particular application seems to require a special kind of queue, known as a PRIORITY QUEUE.
It is a collection of ordered elements that provides fast access to the minimum or maximum element.
Basic Operations performed by priority queue are:
1. Insert operation
2. Deletemin operation
Insert operation is the equivalent of the queue's Enqueue operation.
Deletemin operation is the priority queue equivalent of the queue's Dequeue operation.
7.2 Leftist Trees
A (single-ended) priority queue is a data type supporting the following operations on an ordered set of values:
1) find the maximum value (FindMax);
2) delete the maximum value (DeleteMax);
3) add a new value x (Insert(x)).
Obviously, the priority queue can be redefined by substituting operations 1) and 2) with FindMin and
DeleteMin, respectively. Several structures, some implicitly stored in an array and some using more complex
data structures, have been presented for implementing this data type, including max-heaps (and min-heaps).
Conceptually, a max-heap is a binary tree having the following properties:
a) heap-shape: all leaves lie on at most two adjacent levels, and the leaves on the last level occupy the
leftmost positions; all other levels are complete.
b) max-ordering: the value stored at a node is greater than or equal to the values stored at its children.
A max-heap of size n can be constructed in linear time and can be stored in an n-element array; hence it is referred to as an implicit data structure.
When a max-heap implements a priority queue, FindMax can be performed in constant time, while both
DeleteMax and Insert(x) take logarithmic time. We shall consider a more powerful data type, the
double-ended priority queue, which allows both FindMin and FindMax, as well as DeleteMin, DeleteMax,
and Insert(x) operations. An important application of this data type is in external quicksort.
A traditional heap does not allow efficient implementation of all the above operations; for example, FindMin requires linear (instead of constant) time in a max-heap. One approach to overcoming this intrinsic limitation of heaps is to place a max-heap "back-to-back" with a min-heap.

Definition: A double-ended priority queue (DEPQ) is a collection of zero or more elements. Each element has a priority or value. The operations performed on a double-ended priority queue are:
1. isEmpty() ... return true iff the DEPQ is empty
2. size() ... return the number of elements in the DEPQ
3. getMin() ... return element with minimum priority
4. getMax() ... return element with maximum priority
5. put(x) ... insert the element x into the DEPQ
6. removeMin() ... remove an element with minimum priority and return this element
7. removeMax() ... remove an element with maximum priority and return this element
7.3 Binomial Heaps
A binomial heap is a heap similar to a binary heap that also supports quickly merging two heaps. This is achieved by using a special tree structure. It is important as an implementation of the mergeable heap abstract data type (also called a meldable heap), which is a priority queue supporting a merge operation. A binomial heap is implemented as a collection of binomial trees (compare with a binary heap, which has the shape of a single binary tree). A binomial tree is defined recursively:
A binomial tree of order 0 is a single node
A binomial tree of order k has a root node whose children are roots of binomial trees of orders k−1,
k−2, ..., 2, 1, 0 (in this order).
Binomial trees of orders 0 to 3: each tree has a root node with subtrees of all lower-order binomial trees.
For example, the children of the root of the order-3 binomial tree are an order-2, an order-1, and an
order-0 binomial tree.
A binomial tree of order k has 2^k nodes and height k. Because of this structure, a binomial tree of order k can be constructed trivially from two trees of order k−1 by attaching one of them as the leftmost child of the root of the other. This feature is central to the merge operation of a binomial heap, which is its major advantage over other conventional heaps.
Structure of a binomial heap
A binomial heap is implemented as a set of binomial trees that satisfies the binomial heap properties:
1. Each binomial tree in the heap obeys the minimum-heap property: the key of a node is greater than or equal to the key of its parent.
2. There can be at most one binomial tree of each order, including order zero.
The first property ensures that the root of each binomial tree contains the smallest key in that tree, and hence the smallest key in the heap is at one of the roots. The second property implies that a binomial heap with n nodes consists of at most log2(n) + 1 binomial trees. In fact, the number and orders of these trees are uniquely determined by the number of nodes n: each binomial tree corresponds to one set bit in the binary representation of n. For example, 13 is 1101 in binary, so a binomial heap with 13 nodes consists of three binomial trees of orders 3, 2, and 0.
7.4 Fibonacci Heaps
A Fibonacci heap is a heap data structure consisting of a collection of trees. It has a better amortized running time than a binomial heap. Fibonacci heaps were developed by Michael L. Fredman and Robert E. Tarjan in 1984 and first published in a scientific journal in 1987. The name comes from the Fibonacci numbers, which are used in the running time analysis.

Find-minimum takes O(1) amortized time. The operations insert, decrease key, and merge (union) work in constant amortized time, while delete and delete minimum work in O(log n) amortized time. This means that, starting from an empty data structure, any sequence of a operations from the first group and b operations from the second group takes O(a + b log n) time. In a binomial heap such a sequence of operations would take O((a + b) log n) time. A Fibonacci heap is thus better than a binomial heap when b is asymptotically smaller than a. Using Fibonacci heaps for priority queues improves the asymptotic running time of important algorithms, such as Dijkstra's algorithm for computing the shortest path between two nodes in a graph.
A Fibonacci heap is a collection of trees satisfying the minimum-heap property: the key of a child is always greater than or equal to the key of its parent. This implies that the minimum key is always at the root of one of the trees.

Compared with binomial heaps, the structure of a Fibonacci heap is more flexible. The trees do not have a prescribed shape, and in the extreme case the heap can have every element in a separate tree. This flexibility allows some operations to be executed in a "lazy" manner, postponing the work for later operations. For example, merging heaps is done simply by concatenating the two lists of trees, and the decrease key operation sometimes cuts a node from its parent and forms a new tree.

However, at some point some order needs to be introduced to the heap to achieve the desired running time. In particular, degrees of nodes (here degree means the number of children) are kept quite low: every node has degree at most O(log n), and the size of a subtree rooted in a node of degree k is at least F(k+2), where F(k) is the kth Fibonacci number. This is achieved by the rule that we can cut at most one child of each non-root node; when a second child is cut, the node itself needs to be cut from its parent and becomes a root.

As a result of this relaxed structure, some operations can take a long time while others are done very quickly. In the amortized running time analysis we pretend that very fast operations take a little bit longer than they actually do. This additional time is then later subtracted from the actual running time of slow operations. The amount of time saved for later use is measured at any given moment by a potential function. The potential of a Fibonacci heap is given by

Potential = t + 2m

where t is the number of trees in the Fibonacci heap and m is the number of marked nodes. A node is marked if at least one of its children has been cut since the node was made a child of another node (all roots are unmarked).
Thus, the root of each tree in a heap has one unit of time stored. This unit of time can be used later to link this tree with another tree at amortized time 0. Also, each marked node has two units of time stored. One can be used to cut the node from its parent. If this happens, the node becomes a root and the second unit of time will remain stored in it as in any other root.
Implementation of operations
To allow fast deletion and concatenation, the roots of all trees are linked using a circular, doubly linked list. The children of each node are also linked using such a list. For each node, we maintain its number of children and whether the node is marked. Moreover, we maintain a pointer to the root containing the minimum key. The find minimum operation is now trivial because we keep a pointer to the node containing it. It does not change the potential of the heap, therefore both the actual and amortized costs are constant. As mentioned above,
merge is implemented simply by concatenating the lists of tree roots of the two heaps. This can be done in constant time and the potential does not change, leading again to constant amortized time. Operation insert works by creating a new heap with one element and doing merge. This takes constant time, and the potential increases by one, because the number of trees increases. The amortized cost is thus still constant.
7.5 Pairing Heaps.
Pairing heaps are a type of heap data structure with a relatively simple implementation and excellent practical amortized performance. However, it has proven very difficult to determine the precise asymptotic running time of pairing heaps. Pairing heaps are heap-ordered multiway trees. Describing the various heap operations is relatively simple (in the following we assume a min-heap):
find-min: simply return the top element of the heap.
merge: compare the two root elements; the smaller remains the root of the result, and the larger element
and its subtree are appended as a child of this root.
insert: create a new heap for the inserted element and merge into the original heap.
decrease-key (optional): remove the subtree rooted at the key to be decreased, decrease the key, then merge it back into the heap.
delete-min: remove the root and merge its subtrees. Various strategies are employed.
The amortized time per delete-min is O(log n). The operations find-min, merge, and insert take O(1) amortized time, and decrease-key takes O(log n) amortized time. Fredman proved that the amortized time per decrease-key is at least Ω(log log n); that is, pairing heaps are less efficient than Fibonacci heaps, which perform decrease-key in O(1) amortized time.
A pairing heap is either an empty heap, or a pair consisting of a root element and a possibly empty list of pairing heaps. The heap ordering property requires that all the root elements of the subheaps in the list are not smaller than the root element of the heap. The following description assumes a purely functional heap that does not support the decrease-key operation.

type PairingHeap[Elem] = Empty
                       | Heap(elem: Elem, subheaps: List[PairingHeap[Elem]])
The function find-min simply returns the root element of the heap:
function find-min(heap)
  if heap == Empty
    error
  else
    return heap.elem
Merging with an empty heap returns the other heap, otherwise a new heap is returned that has the minimum
of the two root elements as its root element and just adds the heap with the larger root to the list of subheaps:
function merge(heap1, heap2)
  if heap1 == Empty
    return heap2
  elsif heap2 == Empty
    return heap1
  elsif heap1.elem < heap2.elem
    return Heap(heap1.elem, heap2 :: heap1.subheaps)
  else
    return Heap(heap2.elem, heap1 :: heap2.subheaps)
The easiest way to insert an element into a heap is to merge the heap with a new heap containing just this
element and an empty list of subheaps:
function insert(elem, heap)
  return merge(Heap(elem, []), heap)
The only non-trivial fundamental operation is the deletion of the minimum element from the heap. The
standard strategy first merges the subheaps in pairs (this is the step that gave this datastructure its name)
from left to right and then merges the resulting list of heaps from right to left:
function delete-min(heap)
  if heap == Empty
    error
  elsif length(heap.subheaps) == 0
    return Empty
  elsif length(heap.subheaps) == 1
    return heap.subheaps[0]
  else
    return merge-pairs(heap.subheaps)
This uses the auxiliary function merge-pairs:
function merge-pairs(l)
  if length(l) == 0
    return Empty
  elsif length(l) == 1
    return l[0]
  else
    return merge(merge(l[0], l[1]), merge-pairs(l[2..]))