# Chapter 4

## Exercise 4.1-1

**Available in the latest revision of the IM.**

## Exercise 4.1-2

**Available in the latest revision of the IM.**

## Exercise 4.1-3

**Available in the latest revision of the IM.**

## Exercise 4.1-4

Assume that $$n$$ is an exact power of 2. The pseudocode calculates $$C=C+A+B$$ by recursively finding the following sums:

$$
\begin{align\*}
C\_{11} &= C\_{11}+A\_{11}+B\_{11} \\
C\_{12} &= C\_{12}+A\_{12}+B\_{12}\\
C\_{21} &= C\_{21}+A\_{21}+B\_{21}\\
C\_{22} &= C\_{22}+A\_{22}+B\_{22}
\end{align\*}
$$

```
Matrix-Add-Recursive(A, B, C, n)
1   if n == 1
2       // Base case.
3       c11 = c11 + a11 + b11
4       return
5   // Divide.
6   partition A, B, and C into n/2 x n/2 submatrices
        A11, A12, A21, A22; B11, B12, B21, B22;
        and C11, C12, C21, C22, respectively
7   // Conquer.
8   Matrix-Add-Recursive(A11, B11, C11, n/2)
9   Matrix-Add-Recursive(A12, B12, C12, n/2)
10  Matrix-Add-Recursive(A21, B21, C21, n/2)
11  Matrix-Add-Recursive(A22, B22, C22, n/2)
```

The recurrence is $$T(n)=4T(n/2)+\Theta(1)$$ and resolves to $$T(n)=\Theta(n^2)$$ (case 1 of the master theorem).

If we copy submatrices instead of performing index calculations, then the recurrence becomes $$T(n)=4T(n/2)+\Theta(n^2)$$, which resolves to $$T(n)=\Theta(n^2\lg n)$$ (case 2 of the master theorem).
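
The pseudocode above can be sketched in Python; a minimal version, assuming $$n$$ is an exact power of 2, that performs index calculations via submatrix offsets instead of copying:

```python
def matrix_add_recursive(A, B, C, n, ai=0, aj=0, bi=0, bj=0, ci=0, cj=0):
    """Set C = C + A + B on n x n submatrices given by (row, column) offsets."""
    if n == 1:
        # Base case: a single element.
        C[ci][cj] += A[ai][aj] + B[bi][bj]
        return
    h = n // 2
    # Conquer: recurse on the four triples of quadrants (no copying involved).
    for di, dj in ((0, 0), (0, h), (h, 0), (h, h)):
        matrix_add_recursive(A, B, C, h, ai + di, aj + dj,
                             bi + di, bj + dj, ci + di, cj + dj)

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[0, 0], [0, 0]]
matrix_add_recursive(A, B, C, 2)   # C becomes [[6, 8], [10, 12]]
```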

## Exercise 4.2-1

**Available in the latest revision of the IM.**

## Exercise 4.2-2

**Available in the latest revision of the IM.**

## Exercise 4.2-3

**Available in the latest revision of the IM.**

## Exercise 4.2-4

**Available in the latest revision of the IM.**

## Exercise 4.2-5

**Available in the latest revision of the IM.**

Here is another solution using the same total number of operations as the one depicted in the IM.

$$
\begin{align\*}
p\_1&=a(c+d)\space\space\space(=ac+ad) \\
p\_2&=b(c−d)\space\space\space(=bc−bd) \\
p\_3&=d(a+b)\space\space\space(=ad+bd) \\
\bold{re}&=p\_1−p\_3\space\space\space(=ac+ad−ad−bd=ac−bd) \\
\bold{im}&=p\_2+p\_3\space\space\space(=bc−bd+ad+bd=ad+bc)
\end{align\*}
$$
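
The scheme translates directly into code; a small sketch:

```python
def complex_mul(a, b, c, d):
    """Multiply (a + bi)(c + di) using three real multiplications."""
    p1 = a * (c + d)   # ac + ad
    p2 = b * (c - d)   # bc - bd
    p3 = d * (a + b)   # ad + bd
    return p1 - p3, p2 + p3   # (re, im) = (ac - bd, ad + bc)

# (3 + 2i)(1 + 4i) = -5 + 14i
assert complex_mul(3, 2, 1, 4) == (-5, 14)
```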

## Exercise 4.2-6

**Available in the latest revision of the IM.**

## Exercise 4.3-1

**Available in the latest revision of the IM.**

## Exercise 4.3-2

**Available in the latest revision of the IM.**

## Exercise 4.3-3

**Available in the latest revision of the IM.**

## Exercise 4.4-1

**Available in the latest revision of the IM.**

## Exercise 4.4-2

**Available in the latest revision of the IM.**

## Exercise 4.4-3

**Available in the latest revision of the IM.**

## Exercise 4.4-4

**Available in the latest revision of the IM.**

## Exercise 4.5-1

**Available in the latest revision of the IM.**

## Exercise 4.5-2

**Available in the latest revision of the IM.**

## Exercise 4.5-3

**Available in the latest revision of the IM.**

## Exercise 4.5-4

**Available in the latest revision of the IM.**

## Exercise 4.5-5

**Available in the latest revision of the IM.**

## Exercise 4.6-1

$$
\begin{align\*}
\sum\_{j=0}^{\lfloor\log\_bn\rfloor}(\log\_bn-j)^k &\ge \sum\_{j=0}^{\lfloor\log\_bn\rfloor}(\lfloor\log\_bn\rfloor-j)^k \\
&= \sum\_{j=0}^{\lfloor\log\_bn\rfloor}j^k && \text{(reindexing)} \\
&= \Omega(\lfloor\log\_bn\rfloor^{k+1}) && \text{(by Exercise A.1-5)} \\
&= \Omega(\log\_b^{k+1}n) && \text{(by Exercise 3.3-3)}
\end{align\*}
$$

## Exercise 4.6-2

A trivial function $$f(n)=0$$ does satisfy the regularity condition without being in $$\Omega(n^{\log\_ba+\epsilon})$$ for any $$\epsilon > 0$$. Therefore, we assume that $$f(n)$$ is positive for all sufficiently large $$n$$, which is the case in practice anyway. Since the main recurrence is algorithmic, we can also assume that $$f(n)$$ is defined for $$n \ge n\_0$$, as stated in the book.

Performing $$j=\lfloor\log\_b(n/n\_0)\rfloor$$ iterations of the inequality $$af(n/b) \le cf(n)$$ yields $$a^jf(n/b^j) \le c^jf(n)$$. Notice that $$n/b^j\ge n\_0$$ and $$n/b^{j+1} < n\_0$$, so $$n/b^j \in \[n\_0,bn\_0)$$. We have

$$
f(n) \ge (a/c)^{\lfloor\log\_b(n/n\_0)\rfloor}f\bigl(n/b^{\lfloor\log\_b(n/n\_0)\rfloor}\bigr)
$$

Let $$\mu=\underset{\[n\_0,bn\_0)}\inf f$$, which can be regarded as a positive constant, because it is independent of $$n$$. Since the driving function is bounded below and defined on its domain, this infimum exists over a half-open interval contained in the domain. We have two cases depending on the ratio $$a/c$$, which forms the base of the exponential in the formula above.

{% tabs %}
{% tab title="(a/c) < 1" %}
$$
\begin{align\*}
f(n) &\ge (a/c)^{\lfloor\log\_b(n/n\_0)\rfloor}f\bigl(n/b^{\lfloor\log\_b(n/n\_0)\rfloor}\bigr) \\
&\ge (a/c)^{\log\_b(n/n\_0)}\mu \\
&= \frac{(n/n\_0)^{\log\_ba}}{(n/n\_0)^{\log\_bc}}\cdot\mu && \text{(by equations (3.18), (3.20), and (3.21))} \\\[1mm]
&= n^{\log\_ba-\log\_bc}\cdot n\_0^{\log\_bc-\log\_ba}\cdot\mu \\
&= d \cdot n^{\log\_ba-\log\_bc} && \text{(} d=n\_0^{\log\_bc-\log\_ba}\cdot\mu>0  \text{)}
\end{align\*}
$$
{% endtab %}

{% tab title="(a/c) ≥ 1" %}
$$
\begin{align\*}
f(n) &\ge (a/c)^{\lfloor\log\_b(n/n\_0)\rfloor}f\bigl(n/b^{\lfloor\log\_b(n/n\_0)\rfloor}\bigr) \\
&\ge (a/c)^{\log\_b(n/n\_0)-1}\mu \\
&= \frac{c(n/n\_0)^{\log\_ba}}{a(n/n\_0)^{\log\_bc}}\cdot\mu && \text{(by equations (3.18), (3.20), and (3.21))} \\\[1mm]
&= n^{\log\_ba-\log\_bc}\cdot n\_0^{\log\_bc-\log\_ba}\cdot(c/a)\cdot\mu \\
&= d \cdot n^{\log\_ba-\log\_bc} && \text{(} d=n\_0^{\log\_bc-\log\_ba}\cdot(c/a)\cdot\mu>0\text{)}
\end{align\*}
$$
{% endtab %}
{% endtabs %}

$$\log\_bc<0$$, so let $$\epsilon=-\log\_bc>0$$. We get $$f(n) \ge d \cdot n^{\log\_ba+\epsilon}=\Omega(n^{\log\_ba+\epsilon})$$.

## Exercise 4.6-3

In principle, the statement can be included in Lemma 4.3 as an extension of case 2. Of course, we must tolerate some lack of rigor, as $$f(n)$$ is not defined for $$n=1$$, whereas Lemma 4.3 assumes that it is. Since $$f(n)=\Theta(n^{\log\_ba}/\lg n)$$ we have that $$f(n/b^j)=\Theta((n/b^j)^{\log\_ba}/\lg(n/b^j))$$. Notice that $$n/b^j=1$$ for $$j=\log\_bn$$ when $$\log\_bn$$ is an integer, so we sum only up to $$\lfloor\log\_bn\rfloor-1$$ to avoid a potential *division by zero*. The rest of the steps are rather mechanical.

$$
\begin{align\*}
g(n) &= \sum\_{j=0}^{\lfloor\log\_bn\rfloor-1}a^j\Theta\left(\left(\frac{n}{b^j}\right)^{\log\_ba}\!\!\Bigm/\lg\left(\frac{n}{b^j}\right)\right) \\
&= \Theta\left(\sum\_{j=0}^{\lfloor\log\_bn\rfloor-1}a^j\left(\frac{n}{b^j}\right)^{\log\_ba}\!\!\Bigm/\lg\left(\frac{n}{b^j}\right)\right) && \text{(by Exercise A.1-1)} \\
&= \Theta\left(n^{\log\_ba}\sum\_{j=0}^{\lfloor\log\_bn\rfloor-1}\frac{a^j}{b^{j\log\_ba}\lg(n/b^j)}\right) \\
&= \Theta\left(n^{\log\_ba}\sum\_{j=0}^{\lfloor\log\_bn\rfloor-1}\frac{1}{\lg(n/b^j)}\right) && \text{(by equation (3.21))} \\
&= \Theta\left(n^{\log\_ba}\sum\_{j=0}^{\lfloor\log\_bn\rfloor-1}\frac{\log\_b2}{\log\_b(n/b^j)}\right) && \text{(by equation (3.19))} \\
&= \Theta\left(n^{\log\_ba}\log\_b2\sum\_{j=0}^{\lfloor\log\_bn\rfloor-1}\frac{1}{\log\_bn-j}\right) && \text{(by equations (3.18) and (3.20))} \\
&= \Theta\left(n^{\log\_ba}\sum\_{j=0}^{\lfloor\log\_bn\rfloor-1}\frac{1}{\log\_bn-j}\right) && \text{($\log\_b2>0$ is a constant)}
\end{align\*}
$$

The summation within the $$\Theta$$-notation can be bounded from above as follows:

$$
\begin{align\*}
\sum\_{j=0}^{\lfloor\log\_bn\rfloor-1}\frac{1}{\log\_bn-j} &\le \sum\_{j=0}^{\lfloor\log\_bn\rfloor-1}\frac{1}{\lfloor\log\_bn\rfloor-j} \\
&= \sum\_{j=1}^{\lfloor\log\_bn\rfloor}\frac{1}{j} && \text{(reindexing)} \\
&= \ln(\lfloor\log\_bn\rfloor)+O(1) && \text{(by equation (A.9))} \\
&= O(\lg\lg n)
\end{align\*}
$$

The summation within the $$\Theta$$-notation can be bounded from below as follows:

$$
\begin{align\*}
\sum\_{j=0}^{\lfloor\log\_bn\rfloor-1}\frac{1}{\log\_bn-j} &\ge \sum\_{j=0}^{\lfloor\log\_bn\rfloor-1}\frac{1}{\lfloor\log\_bn\rfloor+1-j} \\
&= \sum\_{j=2}^{\lfloor\log\_bn\rfloor+1}\frac{1}{j} && \text{(reindexing)} \\
&> \sum\_{j=1}^{\lfloor\log\_bn\rfloor}\frac{1}{j} -1 \\
&= \ln(\lfloor\log\_bn\rfloor)+O(1)-1 && \text{(by equation (A.9))} \\
&= \Omega(\lg\lg n)
\end{align\*}
$$

Theorem 3.1 implies that $$g(n)=\Theta(n^{\log\_ba}\lg\lg n)$$. Following the outline of the proof of Theorem 4.4 for case 2 from the book, we have

$$
\begin{align\*}
f'(n) &= f(n\_0n) \\
&= \Theta((n\_0n)^{\log\_ba}/\lg(n\_0n)) \\
&= \Theta(n^{\log\_ba}/\lg n)
\end{align\*}
$$

$$
\begin{align\*}
T(n) &= T'(n/n\_0) \\
&= \Theta((n/n\_0)^{\log\_ba})+\Theta((n/n\_0)^{\log\_ba}\lg\lg(n/n\_0)) \\
&= \Theta(n^{\log\_ba})+\Theta(n^{\log\_ba}\lg\lg n) \\
&= \Theta(n^{\log\_ba}\lg\lg n) && \text{(by Problem 3-5(c))}
\end{align\*}
$$

## Exercise 4.7-1

We first specify the initial conditions for $$T'(n)$$ as $$T'(n)=cT(n)$$ for $$0\<n\<n\_0$$, where $$n\_0$$ is the corresponding threshold value. This is possible, since the implicit initial conditions of $$T(n)$$ are given. We now prove by induction on $$n$$ that we maintain the equality $$T'(n)=cT(n)$$ for all $$n>0$$.

For $$0\<n\<n\_0$$ (base case) the equality holds by design. For the inductive step, assume that for all $$n\_0 \le m\<n$$, we have $$T'(m)=cT(m)$$. This holds for all arguments $$n/b\_i$$ in recursive calls, as $$b\_i>1$$ implies that $$0\<n/b\_i\<n$$.

{% hint style="info" %}
We use strong induction, because the Akra-Bazzi recurrence involves recursive calls on arguments $$n/b\_i$$ that eventually fall into the valid base range.
{% endhint %}

$$
\begin{align\*}
T'(n) &= cf(n)+\sum\_{i=1}^k\ a\_iT'(n/b\_i)\\
&= cf(n)+\sum\_{i=1}^k a\_icT(n/b\_i) && \text{(by the inductive hypothesis)} \\
&= c\bigg(f(n)+\sum\_{i=1}^k a\_iT(n/b\_i)\bigg) \\
&= cT(n)
\end{align\*}
$$

We have shown that by choosing initial conditions such that $$T'(n)=cT(n)$$ for $$0\<n\<n\_0$$, the recurrence implies $$T'(n)=cT(n)$$ for $$n \ge n\_0$$. Therefore, $$T'(n)=cT(n)$$ for all $$n>0$$.

If $$T(n)$$ has $$\Theta(g(n))$$ as a solution, then the solution of $$T'(n)$$ is $$c \cdot \Theta(g(n))=\Theta(g(n))$$. Thus, scaling the driving function by a constant factor does not alter the asymptotic growth of the solution to the Akra-Bazzi recurrence.
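
The induction can also be replayed numerically on a discrete analogue of the recurrence (floors in place of exact divisions). The parameters below, $$a\_i=(1,2)$$, $$b\_i=(2,3)$$, $$f(n)=n$$, $$n\_0=4$$, and $$c=5$$, are ours, chosen purely for illustration:

```python
from functools import lru_cache

N0, C = 4, 5  # threshold and scaling constant (chosen for illustration)

@lru_cache(maxsize=None)
def T(n):
    # T(n) = f(n) + T(n/2) + 2 T(n/3) with f(n) = n.
    if n < N0:
        return 1                      # implicit initial conditions
    return n + T(n // 2) + 2 * T(n // 3)

@lru_cache(maxsize=None)
def T_scaled(n):
    # Same recurrence with driving function c * f(n) and base cases c * T(n).
    if n < N0:
        return C * 1
    return C * n + T_scaled(n // 2) + 2 * T_scaled(n // 3)

# The induction predicts T'(n) = c T(n) exactly, for every n > 0.
assert all(T_scaled(n) == C * T(n) for n in range(1, 2000))
```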

## Exercise 4.7-2

Choose $$d=\phi^2+1>1$$, so that we have

$$
\begin{align\*}
\bold{f(n)/d} &= n^2/(\phi^2+1) \\
&\le n^2 \\
&\le \bold{\psi^2n^2} && \text{($\psi \ge 1$)} \\
&\le (\phi^2+1)n^2 && \text{($\psi \le \phi$)} \\
&= \bold{df(n)}
\end{align\*}
$$

This proves that $$f(n)=n^2$$ satisfies the polynomial-growth condition.

Select any $$\phi>1$$, so that for $$\psi=2$$ we have

$$
\begin{align\*}
f(\psi n) &= 2^{\psi n} \\
&= 2^{2n} \\
&= 2^n\cdot2^n \\
&\nleq d2^n && \text{(for any constant $d$)}
\end{align\*}
$$

This proves that $$f(n)=2^n$$ does not satisfy the polynomial-growth condition.
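
Both claims can be illustrated numerically: for $$f(n)=n^2$$ the ratio $$f(\psi n)/f(n)=\psi^2$$ is bounded by a constant independent of $$n$$, while for $$f(n)=2^n$$ even the single ratio $$f(2n)/f(n)=2^n$$ outgrows every constant $$d$$:

```python
phi = 3.0  # an arbitrary choice of phi > 1

# f(n) = n^2: f(psi * n) / f(n) = psi^2 <= phi^2 for every psi in [1, phi].
ratios_sq = [(psi * n) ** 2 / n ** 2
             for n in range(1, 200)
             for psi in (1.0, 1.5, 2.0, phi)]
assert max(ratios_sq) <= phi ** 2 + 1e-9

# f(n) = 2^n with psi = 2: f(2n) / f(n) = 2^n grows without bound.
ratio_exp = lambda n: 2.0 ** (2 * n) / 2.0 ** n
assert ratio_exp(10) < ratio_exp(20) < ratio_exp(60)
assert ratio_exp(60) > 1e18  # eventually exceeds any fixed constant d
```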

## Exercise 4.7-3

In this exercise, let $$n\_0=\hat n>0$$ to reconcile an inconsistency (at the time of writing) between the problem statement and the definition of the polynomial-growth condition in the book.

A trivial nonnegative function $$f(n)=0$$ for all $$n \ge n\_0$$ satisfies the polynomial-growth condition, but it cannot represent the driving function in any realistic scenario. So, assume that $$f(n)$$ denotes a proper cost function in equation (4.22).

Suppose, for the sake of contradiction, that there exists some $$n \ge n\_0$$ where $$f(n)=0$$. Then, for any $$\psi \in\[1,\phi]$$ the polynomial-growth inequality becomes $$0 \le f(\psi n) \le 0$$, which implies $$f(\psi n)=0$$. Since $$\phi$$ can be chosen arbitrarily large, this means that $$f(n)$$ must be identically zero on $$\[n\_0, \infin)$$. But this contradicts our initial assumption that $$f(n)$$ is nontrivial above some threshold. Therefore, $$f(n)$$ must be positive for all sufficiently large $$n$$.

## Exercise 4.7-4

I provided the answer [here](https://math.stackexchange.com/a/5031384/753413).

## Exercise 4.7-5

To solve math problems, you can use a mathematical tool such as [WolframAlpha](https://www.wolframalpha.com/). Each subproblem includes executable commands for WolframAlpha along with their corresponding responses, which are substituted into equation (4.23).

### a.

```wolfram
solve 1/2^p + 1/3^p + 1/6^p = 1 for real p
```

We get that $$p=1$$. We can now calculate the required definite integral.

```wolfram
Integrate[x Log[2,x]/x^2,{x,1,n}]
```

The result is $$\cfrac{\ln^2 n}{\ln 4}=\cfrac{\lg^2 n}{2\lg e}$$, hence $$T(n)=\Theta(n\lg^2 n)$$.
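
Both answers can be corroborated numerically. The integrand simplifies to $$\lg x/x$$, and a crude trapezoidal rule (the step count is arbitrary) reproduces $$\ln^2 n/\ln 4$$:

```python
import math

# p = 1 is exact: 1/2 + 1/3 + 1/6 = 1.
assert abs(1 / 2 + 1 / 3 + 1 / 6 - 1) < 1e-12

def integral(n, steps=200000):
    """Trapezoidal approximation of the integral of log2(x)/x over [1, n]."""
    h = (n - 1) / steps
    ys = [math.log2(1 + i * h) / (1 + i * h) for i in range(steps + 1)]
    return h * (sum(ys) - (ys[0] + ys[-1]) / 2)

assert abs(integral(100.0) - math.log(100.0) ** 2 / math.log(4)) < 1e-4
```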

### b.

```wolfram
solve 3/3^p + 8/4^p = 1 for real p
```

We get that $$p \approx 1.86<2$$. Notice that the integral becomes $$\int\_1^n \frac{x^{1-p}}{\lg x}dx$$, which does not converge because of the division by zero at the lower limit. Instead, we find its lower and upper bounds and use them to estimate bounds on $$T(n)$$.
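
As an alternative to WolframAlpha, the exponent can be found with a few lines of bisection, since the left-hand side of $$\sum\_{i=1}^k a\_i/b\_i^p=1$$ is strictly decreasing in $$p$$. A minimal sketch (the helper name is ours):

```python
def akra_bazzi_p(terms, lo=-10.0, hi=10.0, tol=1e-12):
    """Bisection for the unique p with sum(a / b**p for a, b in terms) == 1.

    Each b > 1, so the left-hand side strictly decreases in p and the
    root is unique.
    """
    g = lambda p: sum(a / b ** p for a, b in terms) - 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # g(mid) > 0 means the root lies to the right of mid.
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

p = akra_bazzi_p([(3, 3), (8, 4)])
assert abs(p - 1.86) < 0.01 and p < 2
```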

For the lower bound, observe that $$\lg n \ge \lg x$$ on the whole interval of integration. We get a simple integral $$\frac{1}{\lg n} \int\_1^n x^{1-p}dx=\frac{n^{2-p}-1}{\lg n (2-p)}$$, thus $$T(n)=\Omega(n^2/\lg n)$$.

For the upper bound, we use the substitution method, guessing that it matches the lower bound asymptotically. The inductive hypothesis is that $$T(n) \le cn^2/\lg n$$ for some constants $$c>0$$ and $$n\_0>1$$, valid for all $$n \ge 4n\_0$$, so that it covers the recursive cases that eventually land in the base range.

$$
\begin{align\*}
T(n) &= 3T(n/3)+8T(n/4)+\frac{n^2}{\lg n} \\
&\le \frac{3c(n/3)^2}{\lg(n/3)}+\frac{8c(n/4)^2}{\lg(n/4)}+\frac{n^2}{\lg n} \\
&= \frac{cn^2}{3(\lg n-\lg3)}+\frac{cn^2}{2(\lg n-2)}+\frac{n^2}{\lg n}
\end{align\*}
$$

Notice that $$3(\lg n-\lg3) \ge (8/3)\lg n$$ whenever $$n\ge2^{9\lg3}=3^9=19{,}683$$. Furthermore, $$2(\lg n-2) \ge (5/3)\lg n$$ whenever $$n\ge2^{12}=4{,}096$$. If we let $$n\_0\ge\lceil3^9/4\rceil$$ we have

$$
\begin{align\*}
T(n) &\le \frac{cn^2}{(8/3)\lg n}+\frac{cn^2}{(5/3)\lg n}+\frac{n^2}{\lg n} \\
&= ((3/8)c+(3/5)c+1)\cdot\frac{n^2}{\lg n} \\
&= ((39/40)c+1)\cdot\frac{n^2}{\lg n} \\
&\le \frac{cn^2}{\lg n}
\end{align\*}
$$

as long as $$c\ge40$$. Consequently, $$T(n)=O(n^2/\lg n)$$.

Theorem 3.1 implies that $$T(n)=\Theta(n^2/\lg n)$$.
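
The two thresholds used above can be sanity-checked numerically; each inequality first holds exactly at the stated power:

```python
import math

lg = math.log2

# 3(lg n - lg 3) >= (8/3) lg n  iff  (1/3) lg n >= 3 lg 3  iff  n >= 3^9.
assert 3 * (lg(3 ** 9) - lg(3)) >= (8 / 3) * lg(3 ** 9) - 1e-9
assert 3 * (lg(3 ** 9 - 1) - lg(3)) < (8 / 3) * lg(3 ** 9 - 1)

# 2(lg n - 2) >= (5/3) lg n  iff  (1/3) lg n >= 4  iff  n >= 2^12.
assert 2 * (lg(2 ** 12) - 2) >= (5 / 3) * lg(2 ** 12) - 1e-9
assert 2 * (lg(2 ** 12 - 1) - 2) < (5 / 3) * lg(2 ** 12 - 1)
```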

### c.

Evidently $$p=0$$, since $$a\_1+a\_2=1$$. From part (a) we also know the integral. Therefore, $$T(n)=\Theta(\lg^2 n)$$.

### d.

We can immediately see that $$p=-1$$. The integral is $$\ln n$$, which can be read from a table of integrals. Thus, $$T(n)=\Theta(\lg n/n)$$.

### e.

```wolfram
solve 3/3^p + 3/(3/2)^p = 1 for real p
```

We get that $$p=3$$. The antiderivative $$-1/x$$ can be read from a table of integrals, so the definite integral is $$1-1/n=\Theta(1)$$. Thus, $$T(n)=\Theta(n^3)$$.
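
A one-line numeric check that $$p=3$$ is exact:

```python
# 3/3^3 + 3/(3/2)^3 = 1/9 + 8/9 = 1
assert abs(3 / 3 ** 3 + 3 / (3 / 2) ** 3 - 1) < 1e-12
```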

## Exercise 4.7-6

The master method is a special case of the Akra-Bazzi framework, where $$k=1$$, $$a\_1=a$$, and $$b\_1=b$$. Therefore, $$a/b^p=1 \implies p=\log\_b a$$. Also, let $$n\_0\ge1$$ be such that for all $$n \ge n\_0$$ the driving function is defined and nonnegative.

Notice that the integral $$I\_0 = \int\_1^{n\_0}\frac{f(x)}{x^{p+1}}\,dx$$ is some nonnegative value, due to $$f(n)$$ being nonnegative. We use this fact in all subproblems.

### Case 1

Suppose $$f(n)=O(n^{p-\epsilon})$$ for some constant $$\epsilon>0$$, so $$0 \le f(n)\le cn^{p-\epsilon}$$.

The upper bound is

$$
\begin{align\*}
I &= I\_0+\int\_{n\_0}^n\frac{f(x)}{x^{p+1}}\,dx \\\[1mm]
&\le I\_0+\int\_{n\_0}^n\frac{cx^{p-\epsilon}}{x^{p+1}}\,dx \\\[1mm]
&= I\_0+c\int\_{n\_0}^nx^{-\epsilon-1}\,dx \\\[1mm]
&= I\_0+c\left\[\frac{x^{-\epsilon}}{-\epsilon}\right]\_{n\_0}^n \\\[1mm]
&= I\_0+(c/\epsilon)(n\_0^{-\epsilon}-n^{-\epsilon}) \\
&< I\_0+(c/\epsilon)n\_0^{-\epsilon} \\
&= O(1)
\end{align\*}
$$

Excluding an identically zero driving function from the picture, the integral $$I$$ is always positive, so we get that $$I=\Theta(1)$$.

By equation (4.23), we get that $$T(n)=\Theta(n^p(1+I))=\Theta(n^p)=\Theta(n^{\log\_ba})$$.

### Case 2

Suppose $$f(n)=\Theta(n^p\lg^kn)$$ for some constant $$k\ge 0$$, so $$0 \le c\_1n^p\lg^kn\le f(n)\le c\_2n^p\lg^kn$$.

We first need to solve the indefinite integral $$\int\frac{\lg^kx}{x}\,dx$$, for example, by using WolframAlpha.

```wolfram
Integrate[Log[2,x]^k/x,x]
```

We get that it equals $$\cfrac{ \ln^{k+1} x}{(\ln 2)^k (k+1)}+C=\cfrac{(\ln2)\lg^{k+1}x}{k+1}+C$$.
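
A quick numeric check of this antiderivative (trapezoidal rule; the interval and step count are arbitrary):

```python
import math

def closed(x, k):
    # Antiderivative of lg^k x / x, up to the constant of integration.
    return math.log(2) * math.log2(x) ** (k + 1) / (k + 1)

def numeric(a, b, k, steps=200000):
    """Trapezoidal approximation of the integral of log2(x)^k / x over [a, b]."""
    h = (b - a) / steps
    ys = [math.log2(a + i * h) ** k / (a + i * h) for i in range(steps + 1)]
    return h * (sum(ys) - (ys[0] + ys[-1]) / 2)

for k in (0, 1, 2, 3):
    assert abs(numeric(2, 64, k) - (closed(64, k) - closed(2, k))) < 1e-4
```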

The lower bound is

$$
\begin{align\*}
I &= I\_0+\int\_{n\_0}^n\frac{f(x)}{x^{p+1}}\,dx \\\[1mm]
&\ge \int\_{n\_0}^n\frac{c\_1x^p\lg^kx}{x^{p+1}}\,dx \\\[1mm]
&= c\_1\int\_{n\_0}^n\frac{\lg^kx}{x}\,dx \\\[1mm]
&= c\_1\left\[\frac{(\ln2)\lg^{k+1}x}{k+1}\right]\_{n\_0}^n \\\[1mm]
&= \frac{c\_1(\ln2)}{k+1}\cdot(\lg^{k+1}n-\lg^{k+1}n\_0) \\\[1mm]
&= \Omega(\lg^{k+1}n)
\end{align\*}
$$

The upper bound is

$$
\begin{align\*}
I &= I\_0+\int\_{n\_0}^n\frac{f(x)}{x^{p+1}}\,dx \\\[1mm]
&\le I\_0+\int\_{n\_0}^n\frac{c\_2x^p\lg^kx}{x^{p+1}}\,dx \\\[1mm]
&= I\_0+c\_2\int\_{n\_0}^n\frac{\lg^kx}{x}\,dx \\\[1mm]
&= I\_0+c\_2\left\[\frac{(\ln2)\lg^{k+1}x}{k+1}\right]\_{n\_0}^n \\\[1mm]
&= I\_0+\frac{c\_2(\ln2)}{k+1}\cdot(\lg^{k+1}n-\lg^{k+1}n\_0) \\\[1mm]
&= O(\lg^{k+1}n)
\end{align\*}
$$

Theorem 3.1 implies that $$I=\Theta(\lg^{k+1}n)$$. Finally, we have

$$
\begin{align\*}
T(n) &= \Theta(n^p(1+I)) \\
&= \Theta(n^p(1+\Theta(\lg^{k+1}n))) \\
&= \Theta(n^p\lg^{k+1}n) \\
&= \Theta(n^{\log\_ba}\lg^{k+1}n)
\end{align\*}
$$

### Case 3

I provided the answer [here](https://math.stackexchange.com/a/5066143/753413).

## Problem 4-1

**Available in the latest revision of the IM.**

## Problem 4-2

### a.

* $$T\_{a1}(N,n) = T\_{a1}(N,n/2)+\Theta(1)$$ resolves to $$T\_{a1}(N,n)=\Theta(1)\cdot\Theta(\lg n)=\Theta(\lg n)$$ (see [Exercise 4.5-3](#exercise-4.5-3)).
* $$T\_{a2}(N,n) = T\_{a2}(N,n/2)+\Theta(N)$$ resolves to $$T\_{a2}(N,n)=\Theta(N)\cdot\Theta(\lg n)=\Theta(N\lg n)$$.
* $$T\_{a3}(N,n) = T\_{a3}(N,n/2)+\Theta(n)$$ resolves to $$T\_{a3}(N,n)=\Theta(n)$$ according to case 3 of the master theorem. Observe that the regularity condition trivially holds, since $$f(n)=c(n+o(n))$$ for some constant $$c>0$$.

### b.

* $$T\_{b1}(N,n) = 2T\_{b1}(N,n/2)+\Theta(n)$$ resolves to $$T\_{b1}(N,n)=\Theta(n\lg n)$$.
* $$T\_{b2}(N,n) = 2T\_{b2}(N,n/2)+\Theta(n+N)$$ resolves to $$T\_{b2}(N,n)=\Theta(n(\lg n+N))$$. Both results could be simplified by recalling that $$N \ge n$$. We have to be careful with the extra $$\Theta(N)$$ term, since this cost is incurred at every recursive call. With a branching factor of 2, there are $$\Theta(2^{\lg n})=\Theta(n)$$ calls in total, so the array is copied $$\Theta(n)$$ times. This is a fundamentally different situation than with the binary search from the previous part.
* $$T\_{b3}(N,n) = 2T\_{b3}(N,n/2)+\Theta(n)$$ resolves to $$T\_{b3}(N,n)=\Theta(n\lg n)$$.
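
For the second recurrence, a discrete check (assuming $$n$$ is an exact power of 2 and a base cost of $$T(1)=1$$) confirms that the copying cost accumulates to $$\Theta(nN)$$: unrolling gives $$T(n)=n+n\lg n+N(n-1)$$, because the $$\Theta(N)$$ term is paid once at each of the $$n-1$$ internal calls.

```python
def T_b2(n, N):
    """T(n) = 2 T(n/2) + (n + N) with T(1) = 1; n is an exact power of 2."""
    if n == 1:
        return 1
    return 2 * T_b2(n // 2, N) + n + N

# Unrolling predicts T(n) = n + n lg n + N(n - 1) exactly.
for k in range(12):
    n = 2 ** k
    for N in (n, 4 * n):          # respect the constraint N >= n
        assert T_b2(n, N) == n + n * k + N * (n - 1)
```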

### c.

* $$T\_{c1}(N,n) = 8T\_{c1}(N,n/2)+\Theta(1)$$ resolves to $$T\_{c1}(N,n)=\Theta(n^3)$$.
* $$T\_{c2}(N,n) = 8T\_{c2}(N,n/2)+\Theta(N^2)$$ resolves to $$T\_{c2}(N,n) = \Theta(n^3N^2)$$ (see the comment in part (b) on the second recurrence).
* $$T\_{c3}(N,n) = 8T\_{c3}(N,n/2)+\Theta(n^2)$$ resolves to $$T\_{c3}(N,n)=\Theta(n^3)$$ (see [Exercise 4.1-3](#exercise-4.1-3)).

## Problem 4-3

**Available in the latest revision of the IM.**

## Problem 4-4

**Available in the latest revision of the IM.**

The solutions in the latest IM are based on the previous edition of the book. Despite being correct, they could be derived more efficiently using the newly added Sections 4.6 and 4.7.

Parts (b) and (e) belong to the extended case 2 of the master theorem (see [Exercise 4.6-3](#exercise-4.6-3)).

Part (d) is covered by the Akra-Bazzi method. Based on [Exercise 4.7-1](#exercise-4.7-1), we can ignore the scaling factor of ½ on the driving function and take $$f(n)=n$$. It trivially satisfies the polynomial-growth condition. Furthermore, the perturbation function $$h\_i(n)=-2$$ surely satisfies $$|h\_i(n)|=2=O(n/\lg^{1+\epsilon}n)$$ for some constant $$\epsilon>0$$, so it can also be ignored. In this problem $$p=1$$ and the definite integral becomes trivial.

Part (f) is also easily solved using the Akra-Bazzi method, with $$0\<p<1$$ (actually $$p \approx 0.88$$) and a basic form of the definite integral.

Part (j) can be easily solved with a change of variables. Let $$S(n)=T(n)/n=S(\sqrt n)+1$$ and $$m=\lg n$$. In terms of $$m$$, we have $$R(m)=R(m/2)+1$$ that resolves to $$R(m)=\Theta(\lg m)$$. Thus, $$S(n)=R(m)=\Theta(\lg {\lg n})$$ and finally $$T(n)=nS(n)=\Theta(n \lg \lg n)$$.
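
The change of variables can be checked exactly on the family $$n=2^{2^k}$$. A sketch, assuming the recurrence $$T(n)=\sqrt n\,T(\sqrt n)+n$$ with base case $$T(2)=2$$ (so that $$S(2)=1$$):

```python
def T(n):
    """T(n) = sqrt(n) * T(sqrt(n)) + n, with assumed base case T(2) = 2."""
    if n == 2:
        return 2
    r = round(n ** 0.5)   # exact for n = 2^(2^k)
    return r * T(r) + n

# With S(2) = 1, S(2^(2^k)) = k + 1, hence T(n) = n (lg lg n + 1) exactly.
for k in range(1, 6):
    n = 2 ** (2 ** k)
    assert T(n) == n * (k + 1)
```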

## Problem 4-5

**Available in the latest revision of the IM.**

## Problem 4-6

**Available in the latest revision of the IM.**

If the number of bad chips is unknown, the conclusion in part (a) still holds. While having at least $$n/2$$ bad chips is sufficient, it is not necessary. If the bad chips conspire by always lying, we can only form two separate groups, without being able to distinguish between them.

## Problem 4-7

### a.

The "only if" direction is trivial. Just substitute $$k=i+1$$ and $$l=j+1$$ into the main definition.

You can find the proof of the "if" part [here](https://dspace.mit.edu/handle/1721.1/27984) (look up Lemma 1.1).

### b.

We should ensure that $$A\[1,3] \in \[24,29]$$.

### c.

Suppose for the sake of contradiction that there exists an index $$1\le i\<m$$ such that $$f(i)>f(i+1)$$. We have $$A\[i,f(i+1)] > A\[i,f(i)]$$ and $$A\[i+1,f(i)] \ge A\[i+1,f(i+1)]$$ implying $$A\[i,f(i+1)]+A\[i+1,f(i)] > A\[i,f(i)]+A\[i+1,f(i+1)]$$.  But this contradicts the definition of an array being Monge for coordinates $$j=f(i+1)$$, $$k=i+1$$, and $$l=f(i)$$.

### d.

To calculate $$f(i)$$ for odd $$i$$ we use values of $$f$$ in neighboring rows. By part (c) we know that $$f(i-1)\le f(i) \le f(i+1)$$, so the index of a minimum element must be inside an interval $$\[f(i-1),f(i+1)]$$. The size of this interval is $$f(i+1)-f(i-1)+1$$. The total number of comparisons is (assume $$f(0)=1$$ and $$f(m+1)=n$$)

$$
\begin{align\*}
\sum\_{\substack{1\le i\le m\\\text{$i$ is odd}}}(f(i+1)-f(i-1)+1) &= O(m)+(f(m)-f(0)) && \text{(telescoping sum)} \\\[2mm]
&= O(m)+O(n) \\
&= O(m+n)
\end{align\*}
$$

### e.

The algorithmic recurrence is $$T(m,n)=T(\lfloor m/2\rfloor,n)+O(m+n)$$ for $$m >1$$ and $$n >1$$.

Let $$c>0$$ and $$n\_0>1$$ represent the constants hidden by the O-notation. Assume $$n \ge n\_0$$. We use the substitution method and induction on $$m$$ to prove that $$T(m,n) \le c'(m+n \lg m)$$ for some $$c' > 0$$.

We have

$$
\begin{align\*}
T(m,n) &\le c'(\lfloor m/2\rfloor+n\lg(\lfloor m/2\rfloor))+c(m+n) \\
&\le c'(m/2+n\lg (m/2))+c(m+n) \\
&= c'(m+n\lg m)-c'(m/2+n)+c(m+n) \\
&\le c'(m+n\lg m)
\end{align\*}
$$

when $$-c'(m/2+n)+c(m+n) \le 0$$, which is true if $$c' \ge 2c$$.
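
Parts (c), (d), and (e) combine into a short algorithm. A minimal Python sketch (the function name is ours), recursing on the even-indexed rows and filling each odd row by scanning only $$\[f(i-1),f(i+1)]$$:

```python
def monge_row_minima(A):
    """Leftmost-minimum column index f(i) for every row of a Monge array."""
    m, n = len(A), len(A[0])
    if m == 1:
        return [min(range(n), key=lambda j: A[0][j])]
    # Recurse on the even-indexed rows (any subset of rows is again Monge).
    f = [0] * m
    for k, j in enumerate(monge_row_minima(A[0::2])):
        f[2 * k] = j
    # Fill each odd row, scanning only between its neighbors' minima (part (d)).
    for i in range(1, m, 2):
        lo = f[i - 1]
        hi = f[i + 1] if i + 1 < m else n - 1
        f[i] = min(range(lo, hi + 1), key=lambda j: A[i][j])
    return f

# A convex function of i - j (plus any function of j alone) yields a Monge
# array; the result must match a brute-force row scan.
A = [[(i - j) ** 2 + j for j in range(9)] for i in range(7)]
assert monge_row_minima(A) == [min(range(9), key=lambda j: A[i][j]) for i in range(7)]
```

`min` over a `range` scans columns left to right and keeps the first minimum, which is exactly the leftmost-minimum convention the problem requires.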
