
Members listed in a constructor initializer list should appear in the same order as they are declared in the class

2012-06-01 09:45
1.  The quicksort algorithm has a worst-case running time of θ(n^2) on an input array of n numbers. Despite this slow worst-case running time, quicksort is often the best practical choice for sorting because it is remarkably efficient on the average: its expected running time is θ(n lg n), and the constant factors hidden in the θ(n lg n) notation are quite small. It also has the advantage of sorting in place, and it works well even in virtual-memory environments.
 
2.  Quicksort, like merge sort, applies the divide-and-conquer paradigm. The following is the three-step divide-and-conquer process for sorting a typical subarray A[p...r]:

    1)  Divide: Partition (rearrange) the array A[p...r] into two (possibly empty) subarrays A[p...q-1] and A[q+1...r] such that each element of A[p...q-1] is less than or equal to A[q], which is, in turn, less than or equal to each element of A[q+1...r]. Compute the index q as part of this partitioning method.

    2)  Conquer: Sort the two subarrays A[p...q-1] and A[q+1...r] by recursive calls to quicksort.

    3)  Combine: Because the subarrays are already sorted, no work is needed to combine them: the entire array A[p...r] is now sorted.

 

3.  Implementation of quicksort:

// sort arr[start...end-1]
void quickSort(int[] arr, int start, int end) {
    if (end - start <= 1) return;      // base case: 0 or 1 element is already sorted
    int q = partition(arr, start, end);
    quickSort(arr, start, q);          // sort arr[start...q-1]
    quickSort(arr, q + 1, end);        // sort arr[q+1...end-1]
}

To sort an entire array A, the initial call is quickSort(A, 0, A.length).

 

4.  Implementation of the partition method:

// use the last element to partition the array arr[start...end-1]
int partition(int[] arr, int start, int end) {
    int pivot = arr[--end];            // pivot is the last element; end now indexes it
    int i = start - 1;                 // arr[start...i] holds the elements smaller than pivot
    for (int j = start; j < end; j++) {
        if (arr[j] < pivot) {
            // grow the "smaller" region: swap arr[i+1] with arr[j]
            int temp = arr[++i];
            arr[i] = arr[j];
            arr[j] = temp;
        }
    }
    // place the pivot between the two regions: swap arr[i+1] with arr[end]
    int temp = arr[++i];
    arr[i] = arr[end];
    arr[end] = temp;
    return i;
}

partition always selects the last element as the pivot around which to partition arr[start...end-1]. During the loop, j traverses the array while i marks the boundary, among the elements examined so far, between those smaller than the pivot and those greater than or equal to it. The running time is θ(n), where n = end - start.
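As a quick sanity check of this invariant, here is a small self-contained demo of the partition scheme above on a sample array (the array values and the class name PartitionDemo are illustrative, not from the original post):

```java
// Minimal demo of the Lomuto-style partition described above.
public class PartitionDemo {
    // use the last element to partition arr[start...end-1]
    static int partition(int[] arr, int start, int end) {
        int pivot = arr[--end];
        int i = start - 1;
        for (int j = start; j < end; j++) {
            if (arr[j] < pivot) {
                int temp = arr[++i]; arr[i] = arr[j]; arr[j] = temp;
            }
        }
        int temp = arr[++i]; arr[i] = arr[end]; arr[end] = temp;
        return i;
    }

    public static void main(String[] args) {
        int[] arr = {2, 8, 7, 1, 3, 5, 6, 4};   // pivot will be 4
        int q = partition(arr, 0, arr.length);
        // after partitioning: {2, 1, 3, 4, 7, 5, 6, 8}, pivot at index q = 3
        System.out.println("q = " + q + ", arr = " + java.util.Arrays.toString(arr));
    }
}
```

Everything left of index q is smaller than the pivot, everything right of it is greater or equal, exactly as the loop invariant promises.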

 

5.  The running time of quicksort depends on whether the partitioning is balanced or unbalanced, which in turn depends on which elements are used for partitioning.

 

6.  The worst-case behavior for quicksort occurs when the partitioning routine produces one subproblem with n-1 elements and one with 0 elements. Let us assume that this unbalanced partitioning arises in each recursive call. Then the running time satisfies T(n) = T(n-1) + T(0) + θ(n) = T(n-1) + θ(n), which unrolls to θ(1 + 2 + ... + n) = θ(n^2). This worst case occurs, for example, when the input array is already completely sorted: the last element is then the maximum, so every split is (n-1, 0).
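The quadratic behavior can be observed directly by counting comparisons on an already-sorted input: with (n-1, 0) splits at every level, the partition calls perform exactly (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons. A small instrumented version of the code above (the class name and the comparisons counter are illustrative additions):

```java
// Count element comparisons made by quicksort on a sorted input.
public class WorstCaseDemo {
    static long comparisons = 0;

    static int partition(int[] arr, int start, int end) {
        int pivot = arr[--end];
        int i = start - 1;
        for (int j = start; j < end; j++) {
            comparisons++;                       // one comparison per loop iteration
            if (arr[j] < pivot) {
                int temp = arr[++i]; arr[i] = arr[j]; arr[j] = temp;
            }
        }
        int temp = arr[++i]; arr[i] = arr[end]; arr[end] = temp;
        return i;
    }

    static void quickSort(int[] arr, int start, int end) {
        if (end - start <= 1) return;
        int q = partition(arr, start, end);
        quickSort(arr, start, q);
        quickSort(arr, q + 1, end);
    }

    public static void main(String[] args) {
        int n = 100;
        int[] arr = new int[n];
        for (int k = 0; k < n; k++) arr[k] = k;  // already sorted: the worst case
        quickSort(arr, 0, n);
        // every split is (n-1, 0), so comparisons = n(n-1)/2 = 4950
        System.out.println(comparisons);
    }
}
```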

 

7.  By equally balancing the two sides of the partition at every level of the recursion, we get an asymptotically faster algorithm. In the most even possible split, partition produces two subproblems, each of size no more than n/2, since one is of size ⌊n/2⌋ and one of size ⌈n/2⌉-1. The recurrence for the running time is then T(n) = 2T(n/2) + θ(n) = θ(n lg n).

 

8.  Any split of constant proportionality yields a recursion tree of depth θ(lg n), where the cost at each level is O(n). The running time is therefore O(n lg n) whenever the split has constant proportionality.
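To get a feel for why even a lopsided constant-proportion split stays logarithmic, the following small check (an illustrative addition, not from the original text) counts how many times n must shrink by a factor of 9/10, i.e. how deep the longer branch of a 9-to-1 recursion tree goes, which is ⌈log_{10/9} n⌉:

```java
// Depth of the longest path in the recursion tree for a 9-to-1 split:
// the larger side shrinks by a factor of 9/10 at every level.
public class SplitDepth {
    static int depth(double n) {
        int levels = 0;
        while (n > 1) {
            n *= 0.9;      // always follow the larger (9/10) side
            levels++;
        }
        return levels;
    }

    public static void main(String[] args) {
        // log_{10/9}(1e6) = ln(1e6) / ln(10/9) ≈ 131.1, so 132 levels
        System.out.println(depth(1e6));
    }
}
```

A million elements give only about 132 levels, each costing O(n), which is where the O(n lg n) bound comes from.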



 

9.  Suppose, for the sake of intuition, that good and bad splits alternate levels in the recursion tree, and that the good splits are best-case splits while the bad splits are worst-case splits. The combination of a bad split followed by a good split produces three subarrays of sizes 0, (n-1)/2-1 and (n-1)/2, at a combined partitioning cost of θ(n) + θ(n-1) = θ(n). Certainly, this situation is no worse than a single level of partitioning that produces two subarrays of size (n-1)/2 at a cost of θ(n). Intuitively, the θ(n-1) cost of the bad split can be absorbed into the θ(n) cost of the good split, and the resulting split is good. Thus, the running time of quicksort, when levels alternate between good and bad splits, is like the running time for good splits alone: still O(n lg n), but with a slightly larger constant hidden by the O-notation.

 

10.  For the randomized version of quicksort, instead of always using the last element as the pivot, we select a randomly chosen element from the subarray. We do so by first exchanging the last element with an element chosen at random from the subarray. By randomly sampling the subarray, we ensure that the pivot element is equally likely to be any of the elements in the subarray. Because we choose the pivot element randomly, we expect the split of the input array to be reasonably well balanced on average.

 

11.  The running time of quicksort is dominated by the time spent in the partition method. Each time partition is called, it selects a pivot element, and this element is never included in any future recursive calls to quickSort and partition. Thus, there can be at most n calls to partition over the entire execution of the quicksort algorithm. One call to partition takes O(1) time plus an amount of time proportional to the number of comparisons it performs. We will not attempt to analyze how many comparisons are made in each call to partition. Rather, we will derive an overall bound on the total number of comparisons. For ease of analysis, we rename the elements of the array A as z1, z2, ..., zn, with zi being the ith smallest element. We also define the set Zij = {zi, zi+1, ..., zj} to be the set of elements between zi and zj, inclusive. Each pair of elements is compared at most once, because elements are compared only to the pivot element, and after a particular call of partition finishes, the pivot used in that call is never again compared to any other element. We define the indicator random variable:

    Xij = I {zi is compared to zj }

So the total number of comparisons performed by the algorithm is :

   X = ∑(i=1 to n-1) { ∑(j=i+1 to n) { Xij } }

By linearity of expectation :

  E(X) =  ∑(i=1 to n-1) { ∑(j=i+1 to n) { E(Xij) } } = ∑(i=1 to n-1) { ∑(j=i+1 to n) { Pr{zi is compared to zj } } }

Once a pivot x is chosen with zi < x < zj, we know that zi and zj cannot be compared at any subsequent time. If, on the other hand, zi is chosen as a pivot before any other item in Zij, then zi will be compared to each item in Zij except itself. Similarly, if zj is chosen as a pivot before any other item in Zij, then zj will be compared to each item in Zij except itself. Since each of the j - i + 1 elements of Zij is equally likely to be the first one chosen as a pivot, and the two events are mutually exclusive, we have:

  Pr{ zi is compared to zj }

= Pr{ zi or zj is the first pivot chosen from Zij }

= Pr{ zi is the first pivot chosen from Zij } + Pr{ zj is the first pivot chosen from Zij }

= 1/(j - i + 1) + 1/(j - i + 1) = 2/(j - i + 1)

So, we have :

  E(X) = ∑(i=1 to n-1) { ∑(j=i+1 to n) { 2/(j - i + 1) } }

       = ∑(i=1 to n-1) { ∑(z=1 to n-i) { 2/(z + 1) } }     (substituting z = j - i)

       < ∑(i=1 to n-1) { ∑(z=1 to n) { 2/z } }

       = ∑(i=1 to n-1) O(lg n) = O(n lg n)
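The bound can also be checked numerically: each inner sum is 2(H_{n-i+1} - 1) ≤ 2 ln n, so the whole double sum is at most 2(n-1) ln n. The following small check program (an illustrative addition) evaluates the exact double sum and compares it against that bound:

```java
// Numerically check E(X) = sum over i<j of 2/(j-i+1) against the 2(n-1) ln n bound.
public class ComparisonBound {
    static double expectedComparisons(int n) {
        double sum = 0;
        for (int i = 1; i <= n - 1; i++)
            for (int j = i + 1; j <= n; j++)
                sum += 2.0 / (j - i + 1);       // Pr{zi is compared to zj}
        return sum;
    }

    public static void main(String[] args) {
        int n = 1000;
        double exact = expectedComparisons(n);
        double bound = 2.0 * (n - 1) * Math.log(n);    // 2(n-1) ln n
        System.out.println(exact + " <= " + bound);    // the exact sum stays below the bound
    }
}
```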

 