Matrix-Add-Recursive(A, B, C, n)
 1  if n == 1
 2      // Base case.
 3      c11 = c11 + a11 + b11
 4      return
 5  // Divide.
 6  partition A, B, and C into n/2 x n/2 submatrices
        A11, A12, A21, A22; B11, B12, B21, B22;
        and C11, C12, C21, C22; respectively
 7  // Conquer.
 8  Matrix-Add-Recursive(A11, B11, C11, n/2)
 9  Matrix-Add-Recursive(A12, B12, C12, n/2)
10  Matrix-Add-Recursive(A21, B21, C21, n/2)
11  Matrix-Add-Recursive(A22, B22, C22, n/2)
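For concreteness, here is a runnable Python sketch of the procedure (the function name and the explicit index offsets, which stand in for index calculations on submatrices, are my own):

```python
def matrix_add_recursive(A, B, C, n, ai=0, aj=0, bi=0, bj=0, ci=0, cj=0):
    """Add the n x n blocks of A and B (at the given offsets) into C.

    Mirrors Matrix-Add-Recursive: C = C + A + B, with n a power of 2.
    Partitioning is done via index offsets, so no submatrix is copied.
    """
    if n == 1:
        # Base case.
        C[ci][cj] = C[ci][cj] + A[ai][aj] + B[bi][bj]
        return
    h = n // 2
    # Divide and conquer: recurse into the four quadrants.
    for di, dj in ((0, 0), (0, h), (h, 0), (h, h)):
        matrix_add_recursive(A, B, C, h,
                             ai + di, aj + dj,
                             bi + di, bj + dj,
                             ci + di, cj + dj)
```

For example, with A = [[1, 2], [3, 4]], B = [[5, 6], [7, 8]], and C initially all zeros, the call matrix_add_recursive(A, B, C, 2) leaves C = [[6, 8], [10, 12]].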
The recurrence is T(n) = 4T(n/2) + Θ(1), which resolves to T(n) = Θ(n^2) (case 1 of the master theorem).
If we copy submatrices instead of performing index calculations, then the recurrence changes into T(n) = 4T(n/2) + Θ(n^2), which resolves to T(n) = Θ(n^2 lg n) (case 2 of the master theorem).
Exercise 4.2-1
Available in the latest revision of the IM.
Exercise 4.2-2
Available in the latest revision of the IM.
Exercise 4.2-3
Available in the latest revision of the IM.
Exercise 4.2-4
Available in the latest revision of the IM.
Exercise 4.2-5
Available in the latest revision of the IM.
Here is another solution using the same total number of operations as the one depicted in the IM.
A trivial function f(n) = 0 does satisfy the regularity condition without being Ω(n^{log_b a + ε}) for any ε > 0. Therefore, we assume that f(n) is positive for all sufficiently large n, which is anyhow the case in practice. Since the main recurrence is algorithmic, we can also assume that f(n) is defined for n ≥ n0, as stated in the book.
Performing j = ⌊log_b(n/n0)⌋ iterations of the inequality af(n/b) ≤ cf(n) yields a^j f(n/b^j) ≤ c^j f(n). Notice that n/b^j ≥ n0 and n/b^{j+1} < n0, so n/b^j ∈ [n0, bn0). We have
f(n) ≥ (a/c)^{⌊log_b(n/n0)⌋} · f(n/b^{⌊log_b(n/n0)⌋}).
Let μ = inf_{[n0, bn0)} f, which can be regarded as a positive constant, because it is independent of n. As our driving function is bounded below and is defined on its domain, such an infimum exists inside a half-closed interval belonging to the domain. We have two cases depending on the ratio a/c, which comprises the base of the exponential in the formula above.
If a/c ≤ 1, then (a/c)^{⌊x⌋} ≥ (a/c)^x because ⌊x⌋ ≤ x, so

f(n) ≥ (a/c)^{⌊log_b(n/n0)⌋} · f(n/b^{⌊log_b(n/n0)⌋})
     ≥ (a/c)^{log_b(n/n0)} · μ
     = ((n/n0)^{log_b a} / (n/n0)^{log_b c}) · μ        (by equations (3.18), (3.20), and (3.21))
     = n^{log_b a − log_b c} · n0^{log_b c − log_b a} · μ
     = d · n^{log_b a − log_b c}                        (d = n0^{log_b c − log_b a} · μ > 0)
If a/c > 1, then (a/c)^{⌊x⌋} ≥ (a/c)^{x−1} because ⌊x⌋ > x − 1, so

f(n) ≥ (a/c)^{⌊log_b(n/n0)⌋} · f(n/b^{⌊log_b(n/n0)⌋})
     ≥ (a/c)^{log_b(n/n0) − 1} · μ
     = (c · (n/n0)^{log_b a} / (a · (n/n0)^{log_b c})) · μ   (by equations (3.18), (3.20), and (3.21))
     = n^{log_b a − log_b c} · n0^{log_b c − log_b a} · (c/a) · μ
     = d · n^{log_b a − log_b c}                             (d = n0^{log_b c − log_b a} · (c/a) · μ > 0)
Since b > 1 and c < 1, we have log_b c < 0, so let ε = −log_b c > 0. We get f(n) ≥ d · n^{log_b a + ε} = Ω(n^{log_b a + ε}).
Exercise 4.6-3
In principle, the statement could be incorporated into Lemma 4.3 as an extension of case 2. Of course, we must tolerate some lack of rigor, since f(n) is not defined for n = 1, whereas Lemma 4.3 assumes it is. Since f(n) = Θ(n^{log_b a}/lg n), we have f(n/b^j) = Θ((n/b^j)^{log_b a}/lg(n/b^j)). Notice that n/b^j = 1 when j = log_b n (possible when log_b n is an integer), so we will sum only up to ⌊log_b n⌋ − 1 to avoid a potential division-by-zero error. The rest of the steps are rather mechanical.
g(n) = Σ_{j=0}^{⌊log_b n⌋−1} a^j · Θ((n/b^j)^{log_b a} / lg(n/b^j))
     = Θ(Σ_{j=0}^{⌊log_b n⌋−1} a^j (n/b^j)^{log_b a} / lg(n/b^j))          (by Exercise A.1-1)
     = Θ(n^{log_b a} Σ_{j=0}^{⌊log_b n⌋−1} a^j / (b^{j log_b a} lg(n/b^j)))
     = Θ(n^{log_b a} Σ_{j=0}^{⌊log_b n⌋−1} 1 / lg(n/b^j))                  (by equation (3.21))
     = Θ(n^{log_b a} Σ_{j=0}^{⌊log_b n⌋−1} log_b 2 / log_b(n/b^j))         (by equation (3.19))
     = Θ(n^{log_b a} · log_b 2 · Σ_{j=0}^{⌊log_b n⌋−1} 1 / (log_b n − j))  (by equations (3.18) and (3.20))
     = Θ(n^{log_b a} Σ_{j=0}^{⌊log_b n⌋−1} 1 / (log_b n − j))              (log_b 2 > 0 is a constant)
The summation within the Θ-notation can be bounded from above as follows:
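One way to complete the upper bound (a sketch; the index substitution i = ⌊log_b n⌋ − j is mine) is via the harmonic series:

```latex
\sum_{j=0}^{\lfloor \log_b n \rfloor - 1} \frac{1}{\log_b n - j}
\le \sum_{j=0}^{\lfloor \log_b n \rfloor - 1} \frac{1}{\lfloor \log_b n \rfloor - j}
= \sum_{i=1}^{\lfloor \log_b n \rfloor} \frac{1}{i}
= O(\lg \lg n) .
```

Bounding each term from below by 1/(⌊log_b n⌋ + 1 − j) gives a matching Ω(lg lg n), so the summation is Θ(lg lg n), g(n) = Θ(n^{log_b a} lg lg n), and hence T(n) = Θ(n^{log_b a} lg lg n).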
We first specify the initial conditions for T′(n) as T′(n)=cT(n) for 0<n<n0, where n0 is the corresponding threshold value. This is possible, since the implicit initial conditions of T(n) are given. We now prove by induction on n that we maintain the equality T′(n)=cT(n) for all n>0.
For 0 < n < n0 (base case), the equality holds by design. For the inductive step, assume that T′(m) = cT(m) for all 0 < m < n, where n ≥ n0. This covers all arguments n/b_i in the recursive calls, as b_i > 1 implies 0 < n/b_i < n.
We use strong induction, because the Akra-Bazzi recurrence involves recursive calls on arguments n/b_i that eventually fall into the valid base range.
T′(n) = cf(n) + Σ_{i=1}^{k} a_i T′(n/b_i)
      = cf(n) + Σ_{i=1}^{k} a_i · cT(n/b_i)      (by the inductive hypothesis)
      = c(f(n) + Σ_{i=1}^{k} a_i T(n/b_i))
      = cT(n)
We have shown that by choosing initial conditions such that T′(n)=cT(n) for 0<n<n0, the recurrence implies T′(n)=cT(n) for n≥n0. Therefore, T′(n)=cT(n) for all n>0.
If T(n) has Θ(g(n)) as a solution, then the solution of T′(n) is c⋅Θ(g(n))=Θ(g(n)). Thus, scaling the driving function by a constant factor does not alter the asymptotic growth of the solution to the Akra-Bazzi recurrence.
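As a quick numeric sanity check of this argument, consider a hypothetical instance T(n) = 2T(⌊n/2⌋) + n with T(n) = 1 for 0 < n < 2, scaled by c = 3 (the names T, T_scaled, and the chosen constants are mine):

```python
from functools import lru_cache

C = 3  # the constant scaling factor c

@lru_cache(maxsize=None)
def T(n):
    # Sample recurrence: T(n) = 2 T(n // 2) + n, with T(n) = 1 for n < 2.
    if n < 2:
        return 1
    return 2 * T(n // 2) + n

@lru_cache(maxsize=None)
def T_scaled(n):
    # Same recurrence with both the driving function and the
    # initial conditions scaled by C.
    if n < 2:
        return C * 1
    return 2 * T_scaled(n // 2) + C * n

# By the induction above, T_scaled(n) == C * T(n) for every n > 0.
```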
This proves that f(n) = n^2 satisfies the polynomial-growth condition.
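The calculation for f(n) = n^2 can be sketched as follows (my reconstruction, assuming the book's definition with threshold n̂ and 1 ≤ ψ ≤ ϕ). For any ϕ ≥ 1, choose d = max(ϕ^2, 2) > 1; then for all n ≥ 1,

```latex
\frac{f(n)}{d} \le \frac{n^2}{\phi^2} \le \psi^2 n^2 = f(\psi n) \le \phi^2 n^2 \le d\,f(n) .
```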
Select any ϕ ≥ 2, so that ψ = 2 lies in [1, ϕ], and we have
f(ψn) = 2^{ψn} = 2^{2n} = 2^n · 2^n ≰ d · 2^n    (for any constant d)
This proves that f(n)=2n does not satisfy the polynomial-growth condition.
Exercise 4.7-3
In this exercise, let n0 = n̂ > 0 to reconcile an inconsistency, at the time of writing this text, between the problem statement and the definition of the polynomial-growth condition from the book.
A trivial nonnegative function f(n)=0 for all n≥n0 satisfies the polynomial-growth condition, but it cannot represent the driving function in any realistic scenario. So, assume that f(n) denotes a proper cost function in equation (4.22).
Suppose, for the sake of contradiction, that there exists some n≥n0 where f(n)=0. Then, for any ψ∈[1,ϕ] the polynomial-growth inequality becomes 0≤f(ψn)≤0, which implies f(ψn)=0. Since ϕ can be chosen arbitrarily large, this means that f(n) must be identically zero on [n0,∞). But this contradicts our initial assumption that f(n) is nontrivial above some threshold. Therefore, f(n) must be positive for all sufficiently large n.
To solve the math subproblems, you can use a mathematical tool such as WolframAlpha. Each subproblem includes an executable command for WolframAlpha along with its response, which is substituted into equation (4.23).
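If WolframAlpha is unavailable, the watershed constant p can also be approximated numerically: the left-hand side of Σ_{i=1}^{k} a_i/b_i^p = 1 is strictly decreasing in p when every b_i > 1, so bisection finds the unique root. Below is a small Python sketch (the helper name and the bracketing interval [−10, 10] are my own):

```python
def akra_bazzi_p(terms, lo=-10.0, hi=10.0, iters=100):
    """Solve sum(a / b**p for (a, b) in terms) = 1 for p by bisection.

    terms: list of (a_i, b_i) pairs with a_i > 0 and b_i > 1, so the
    left-hand side is strictly decreasing in p and the root is unique
    (assuming it lies inside [lo, hi]).
    """
    g = lambda p: sum(a / b ** p for a, b in terms) - 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if g(mid) > 0:   # sum still above 1, so the root lies to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

For example, akra_bazzi_p([(1, 2), (1, 3), (1, 6)]) recovers the exact root p = 1 of part (a), and akra_bazzi_p([(3, 3), (8, 4)]) gives p ≈ 1.86 for part (b).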
a.
solve 1/2^p + 1/3^p + 1/6^p = 1 for real p
We get that p=1. We can now calculate the required definite integral.
Integrate[x Log[2,x]/x^2,{x,1,n}]
The result is ln^2 n / ln 4 = lg^2 n / (2 lg e), hence T(n) = Θ(n lg^2 n).
b.
solve 3/3^p + 8/4^p = 1 for real p
We get that p ≈ 1.86 < 2. Notice that the integral becomes ∫_1^n x^{1−p}/lg x dx, which does not converge due to division by zero (lg x vanishes at the lower limit). Instead, we will find its lower and upper bounds and use them to estimate the bounds on T(n).
For the lower bound, observe that lg n ≥ lg x on the whole interval of integration. We get a simple integral (1/lg n) ∫_1^n x^{1−p} dx = (n^{2−p} − 1) / ((2 − p) lg n), thus T(n) = Ω(n^2/lg n).
For the upper bound, we can use the substitution method, guessing that it is asymptotically the same as the lower bound. So, the inductive hypothesis is that T(n) ≤ cn^2/lg n for some constants c > 0 and n0 > 1, with n ≥ 4n0, so that it is valid for recursive cases that eventually land in the base range.
c.
Evidently p = 0, since a_1 + a_2 = 1. From part (a) we also know the integral. Therefore, T(n) = Θ(lg^2 n).
d.
We can immediately see that p = −1. The integral is ln n, which can be read out from a table of integrals. Thus, T(n) = Θ(lg n / n).
e.
solve 3/3^p + 3/(3/2)^p = 1 for real p
We get that p = 3. The antiderivative −1/x can be read out from a table of integrals, so the definite integral is 1 − 1/n = Θ(1). Thus, T(n) = Θ(n^3).
Exercise 4.7-6
The master method is a special case of the Akra-Bazzi framework, where k = 1, a_1 = a, and b_1 = b. Therefore, a/b^p = 1 ⟹ p = log_b a. Also, let n0 ≥ 1 be such that the driving function is defined and nonnegative for all n ≥ n0.
Notice that the integral I0 = ∫_1^{n0} f(x)/x^{p+1} dx is some nonnegative value, due to f(n) being nonnegative. We use this fact in all subproblems.
Case 1
Suppose f(n) = O(n^{p−ε}) for some constant ε > 0, so 0 ≤ f(n) ≤ cn^{p−ε}.
T_{a1}(N, n) = T_{a1}(N, n/2) + Θ(1) resolves to T_{a1}(N, n) = Θ(1) · Θ(lg n) = Θ(lg n) (see Exercise 4.5-3).
T_{a2}(N, n) = T_{a2}(N, n/2) + Θ(N) resolves to T_{a2}(N, n) = Θ(N) · Θ(lg n) = Θ(N lg n).
T_{a3}(N, n) = T_{a3}(N, n/2) + Θ(n) resolves to T_{a3}(N, n) = Θ(n) according to case 3 of the master theorem. Observe that the regularity condition trivially holds, since f(n) = c(n + o(n)) for some constant c > 0.
b.
T_{b1}(N, n) = 2T_{b1}(N, n/2) + Θ(n) resolves to T_{b1}(N, n) = Θ(n lg n).
T_{b2}(N, n) = 2T_{b2}(N, n/2) + Θ(n + N) resolves to T_{b2}(N, n) = Θ(n(lg n + N)). These two recurrences could be simplified by recalling that N ≥ n. We have to be careful with the extra Θ(N) term, since it is a cost that occurs at each recursive call. With a branching factor of 2 at each level, the bottom recursion level alone performs array copying 2^{lg n} = n times. This is a fundamentally different situation than with the binary search from the previous subproblem.
T_{b3}(N, n) = 2T_{b3}(N, n/2) + Θ(n) resolves to T_{b3}(N, n) = Θ(n lg n).
c.
T_{c1}(N, n) = 8T_{c1}(N, n/2) + Θ(1) resolves to T_{c1}(N, n) = Θ(n^3).
T_{c2}(N, n) = 8T_{c2}(N, n/2) + Θ(N^2) resolves to T_{c2}(N, n) = Θ(n^3 N^2) (see the comment in part (b) for the second case).
T_{c3}(N, n) = 8T_{c3}(N, n/2) + Θ(n^2) resolves to T_{c3}(N, n) = Θ(n^3) (see Exercise 4.1-3).
Problem 4-3
Available in the latest revision of the IM.
Problem 4-4
Available in the latest revision of the IM.
The solutions in the latest IM are based on the previous edition of the book. Despite being correct, they could be derived more efficiently using the newly added Sections 4.6 and 4.7.
Parts (b) and (e) belong to the extended case 2 of the master theorem (see Exercise 4.6-3).
Part (d) is covered by the Akra-Bazzi method. Based on Exercise 4.6-1, we can ignore the scaling factor of 1/2 on the driving function and take f(n) = n. It trivially satisfies the polynomial-growth condition. Furthermore, the perturbation term h_i(n) = −2 surely satisfies |h_i(n)| = 2 = O(n/lg^{1+ε} n) for some constant ε > 0, so it can also be ignored. In this problem p = 1 and the definite integral becomes trivial.
Part (f) is also easily solved using the Akra-Bazzi method, with 0 < p < 1 (actually p ≈ 0.88) and the basic form of the definite integral.
Part (j) can be easily solved with a change of variables. Let S(n) = T(n)/n, which gives S(n) = S(√n) + 1, and let m = lg n. In terms of m, we have R(m) = R(m/2) + 1, which resolves to R(m) = Θ(lg m). Thus, S(n) = R(m) = Θ(lg lg n) and finally T(n) = nS(n) = Θ(n lg lg n).
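The change of variables can be sanity-checked numerically. The following Python sketch (assuming a recursion that replaces n with √n at each step, per the substitution S(n) = T(n)/n) counts the recursion depth of S(n) = S(√n) + 1:

```python
def sqrt_depth(n):
    # Number of times n can be replaced by sqrt(n) before dropping
    # below 2, i.e., the recursion depth of S(n) = S(sqrt(n)) + 1,
    # which the change of variables predicts to be Theta(lg lg n).
    count = 0
    while n >= 2:
        n = n ** 0.5
        count += 1
    return count
```

For n = 2^{2^k} the depth is k + 1, growing by one each time lg lg n does.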
Problem 4-5
Available in the latest revision of the IM.
Problem 4-6
Available in the latest revision of the IM.
If the number of bad chips is unknown, the conclusion in part (a) still holds. While having at least n/2 bad chips is sufficient, it is not necessary. If bad chips are conspiring by always lying, we could only form two separate groups, without being able to distinguish between them.
Problem 4-7
a.
The "only if" direction is trivial. Just substitute k=i+1 and l=j+1 into the main definition.
You can find the proof of the "if" part here (lookup Lemma 1.1).
b.
We should ensure that A[1,3]∈[24,29].
c.
Suppose for the sake of contradiction that there exists an index 1≤i<m such that f(i)>f(i+1). We have A[i,f(i+1)]>A[i,f(i)] and A[i+1,f(i)]≥A[i+1,f(i+1)] implying A[i,f(i+1)]+A[i+1,f(i)]>A[i,f(i)]+A[i+1,f(i+1)]. But this contradicts the definition of an array being Monge for coordinates j=f(i+1), k=i+1, and l=f(i).
d.
To calculate f(i) for odd i, we use the values of f in the neighboring rows. By part (c) we know that f(i−1) ≤ f(i) ≤ f(i+1), so the index of a minimum element must lie inside the interval [f(i−1), f(i+1)]. The size of this interval is f(i+1) − f(i−1) + 1. The total number of comparisons is (assume f(0) = 1 and f(m+1) = n)
Σ_{1≤i≤m, i odd} (f(i+1) − f(i−1) + 1) = O(m) + (f(m) − f(0))    (telescoping sum)
                                       = O(m) + O(n)
                                       = O(m + n)
e.
The algorithmic recurrence is T(m,n)=T(⌊m/2⌋,n)+O(m+n) for m>1 and n>1.
Let c > 0 and n0 > 1 represent the constants hidden by the O-notation. Assume n ≥ n0. We use the substitution method and induction on m to prove that T(m, n) ≤ c′(m + n lg m) for some c′ > 0.
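The divide-and-conquer algorithm from parts (d) and (e) can be sketched in Python as follows (0-based indices; the function name is mine, and ties are broken toward the leftmost column, as the definition of f(i) requires):

```python
def leftmost_mins(A):
    """Return f, where f[i] is the column of the leftmost minimum in row i.

    A is an m x n Monge array given as a list of lists.  Performs
    O(m + n lg m) comparisons, matching the recurrence from part (e).
    """
    m, n = len(A), len(A[0])
    if m == 1:
        # min over range() returns the first (leftmost) minimizing index.
        return [min(range(n), key=A[0].__getitem__)]
    # Recurse on the even rows (1-based), i.e., odd 0-based indices.
    even = leftmost_mins(A[1::2])
    f = [0] * m
    for i in range(1, m, 2):
        f[i] = even[i // 2]
    # Part (d): for each remaining row, search only [f(i-1), f(i+1)].
    for i in range(0, m, 2):
        lo = f[i - 1] if i > 0 else 0
        hi = f[i + 1] if i + 1 < m else n - 1
        f[i] = min(range(lo, hi + 1), key=A[i].__getitem__)
    return f
```

For instance, A[i][j] = (i − j)^2 is Monge, and leftmost_mins returns [0, 1, 2, 3, 4] on its 5 x 5 instance, as each row's minimum sits on the diagonal.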