
Optimal Binary Search Trees

1.  Problem Definition

    --  Input: Frequencies p1, p2, ... , pn for items 1, 2, ... , n. [Assume items in sorted order, 1 < 2 < ... < n]

    --  Goal: Compute a valid search tree that minimizes the weighted (average) search time:

        C(T) = Sum(items i) { pi * [search time for i in T] }, where [search time for i in T] = (depth of i in T) + 1

        Example: If T is a red-black tree, then C(T) = O(log n). (Assuming Sum(i){ pi }= 1.)
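
    --  (Illustration, not from the original notes.) A minimal Python sketch that evaluates C(T) for a hard-coded tree; the nested-tuple representation and the example frequencies are assumptions made for this sketch:

        def weighted_search_cost(tree, p, level=1):
            """C(T) contribution of this subtree: Sum { p[key] * (depth + 1) }, where level = depth + 1."""
            if tree is None:               # empty subtree contributes nothing
                return 0.0
            key, left, right = tree        # tree = (key, left subtree, right subtree)
            return (p[key] * level
                    + weighted_search_cost(left, p, level + 1)
                    + weighted_search_cost(right, p, level + 1))

        p = {1: 0.2, 2: 0.3, 3: 0.5}                       # assumed example frequencies, summing to 1
        balanced = (2, (1, None, None), (3, None, None))   # key 2 at the root
        print(weighted_search_cost(balanced, p))           # 0.3*1 + 0.2*2 + 0.5*2 = 1.7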

 

2.  Comparison with Huffman Codes

    --  Similarities:

        --  Output = a binary tree

        --  Goal is (essentially) to minimize average depth with respect to given probabilities

    --  Differences:

        --  With Huffman codes, constraint was prefix-freeness [i.e., symbols only at leaves]

        --  Here, constraint = search tree property

 

3.  Greedy Doesn't Work

    --  Intuition: Want the most (respectively, least) frequently accessed items closest to (respectively, furthest from) the root.

    --  Bottom-up [populate lowest level with least frequently accessed keys]

    --  Top-down [put most frequently accessed item at root, recurse]

    --  Counterexamples exist (one small instance for the top-down rule is sketched below).
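
    --  (Illustration, not from the original notes, whose counterexamples are not reproduced here.) The following sketch uses frequencies chosen for this example and checks, by brute force over all BSTs on three keys, that the top-down rule is strictly worse than optimal:

        p = {1: 0.3, 2: 0.3, 3: 0.4}   # assumed frequencies for this illustration

        def best_cost(i, j, level=1):
            """Exhaustive minimum of Sum(k) { p[k] * (depth of k + 1) } over all BSTs on keys i..j."""
            if i > j:
                return 0.0
            return min(p[r] * level + best_cost(i, r - 1, level + 1)
                       + best_cost(r + 1, j, level + 1) for r in range(i, j + 1))

        def greedy_cost(i, j, level=1):
            """Cost of the tree built top-down by always rooting at the most frequent remaining key."""
            if i > j:
                return 0.0
            r = max(range(i, j + 1), key=lambda k: p[k])
            return (p[r] * level + greedy_cost(i, r - 1, level + 1)
                    + greedy_cost(r + 1, j, level + 1))

        print(greedy_cost(1, 3))  # 1.9 -- key 3 at the root, keys 1 and 2 stacked below
        print(best_cost(1, 3))    # 1.7 -- the balanced tree with key 2 at the root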



 

4.  Optimal Substructure

    --  Suppose an optimal BST for keys {1, 2, ... , n} has root r, left subtree T1, and right subtree T2. Then T1 is optimal for the keys {1, 2, ... , r - 1} and T2 is optimal for the keys {r + 1, r + 2, ... , n}.

    --  Proof :

        a)  Let T be an optimal BST for keys {1, 2, ... , n} with frequencies p1, ... , pn, and suppose T has root r. Suppose for contradiction that T1 is not optimal for {1, 2, ... , r - 1} [the other case is similar], i.e., there is a BST T1* for these keys with C(T1*) < C(T1). Obtain T* from T by "cutting and pasting" T1* in place of T1.

        b)  C(T) = Sum(i = 1 to n) { pi * [search time for i in T] }

                    = pr * 1 + Sum(i = 1 to r-1) { pi * [search time for i in T] } + Sum(i = r+1 to n) { pi * [search time for i in T] }

                    = Sum(i = 1 to n) { pi } + Sum(i = 1 to r-1) { pi * [search time for i in T1] } + Sum(i = r+1 to n) { pi * [search time for i in T2] }

                      [since a search for i != r in T takes exactly one step more than the corresponding search in T1 or T2, and the search for r itself takes one step]

                    = Sum(i = 1 to n) { pi } + C(T1) + C(T2)

        c)  C(T*) = Sum(i = 1 to n) { pi } + C(T1*) + C(T2) < C(T), contradicting the optimality of T.

 

5.  The Subproblems

    -- After splitting at the root r, the two subproblems are a prefix {1, ... , r - 1} and a suffix {r + 1, ... , n} of the original problem; recursing further yields arbitrary contiguous intervals.

    -- Let {1, 2, ... , n} = original items. We need to compute the optimal BST for the subset {i, i + 1, ... , j - 1, j} for every i <= j (contiguous intervals).

 

6.  The Recurrence

    --  Notation: For 1 <= i <= j <= n, Let C(i , j) = weighted search cost of an optimal BST for the items {i , i + 1, ... , j - 1, j} [with probabilities pi , pi+1, ... , pj-1, pj ]

    --  Recurrence: For every 1 <= i <= j <= n:

        C(i , j) = min(r = i to j) { Sum(k = i to j) { pk } + C(i, r-1) + C(r+1, j) }

        (Recall the formula C(T) = Sum(k) { pk } + C(T1) + C(T2).)  Interpret C(x , y) = 0 if x > y.
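
    --  (Illustration, not from the original notes.) A direct memoized Python rendering of this recurrence; the 1-based frequency list with a dummy p[0] is just a device to match the notes' indexing:

        from functools import lru_cache

        def optimal_bst_cost(p):
            """p[1..n] are the frequencies; p[0] is an unused placeholder for 1-based indexing."""
            n = len(p) - 1

            @lru_cache(maxsize=None)
            def C(i, j):
                if i > j:                  # C(x, y) = 0 if x > y
                    return 0.0
                total = sum(p[i:j + 1])    # Sum(k = i to j) { pk }
                return min(total + C(i, r - 1) + C(r + 1, j) for r in range(i, j + 1))

            return C(1, n)

        print(optimal_bst_cost([0.0, 0.3, 0.3, 0.4]))  # 1.7 for the earlier three-key example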

 

7.  The Algorithm

    --  Important: Solve smallest subproblems (with fewest number (j - i + 1) of items) first.

    --  Let A = a 2-D array ( A[i , j ] holds the optimal BST cost of items {i, ... , j} )

        For s = 0 to n - 1 [s represents j - i ]

            For i = 1 to n - s [so that j := i + s plays the role of j and stays within 1, ... , n ]

                A[i , i + s] = min(r=i to i+s) { Sum(k=i to i+s) {pk} + A[i , r-1] + A[r + 1 , i + s] }

        Return A[1 , n]

    --  Interpret A[x , y] as 0 if the 1st index exceeds the 2nd index. All smaller subproblem values needed in the min have already been computed and are available for O(1)-time lookup.

    --  Running Time

        a)  O(n^2) subproblems

        b)  O(j - i + 1) time to compute A[i , j ] (one candidate root r per item in the interval)

        c)  O(n^3) time overall
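
    --  (Illustration, not from the original notes.) A bottom-up Python sketch of the loop above; the prefix-sum array is an added detail so that Sum(k = i to i+s) { pk } becomes an O(1) lookup, consistent with the O(n^3) bound:

        def optimal_bst(p):
            """p[1..n] = frequencies (p[0] unused, keeping indices 1-based as in the notes)."""
            n = len(p) - 1
            # prefix[j] = p[1] + ... + p[j], so Sum(k = i to j) { pk } = prefix[j] - prefix[i - 1]
            prefix = [0.0] * (n + 1)
            for k in range(1, n + 1):
                prefix[k] = prefix[k - 1] + p[k]

            # Extra row/column so that entries whose 1st index exceeds the 2nd index read as 0.
            A = [[0.0] * (n + 2) for _ in range(n + 2)]

            for s in range(0, n):                  # s = j - i, smallest subproblems first
                for i in range(1, n - s + 1):      # j = i + s stays within 1..n
                    j = i + s
                    total = prefix[j] - prefix[i - 1]
                    A[i][j] = min(total + A[i][r - 1] + A[r + 1][j]
                                  for r in range(i, j + 1))
            return A[1][n]

        print(optimal_bst([0.0, 0.3, 0.3, 0.4]))   # 1.7, matching the memoized version above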



 

 

 