# Chapter 4

## Exercise 1

$$
\frac{N}{N+1}<1 \text{ for all }N\ge 0 \implies \frac{N}{N+1}=O(1).
$$

Expanding $$2^N$$ and $$N!$$ gives

$$
\lim\_{N \to \infty} \frac{2^N}{N!}=\lim\_{N \to \infty} \frac{2\times2\times\cdots\times2}{1\times2\times\cdots \times N}=0,
$$

thus $$2^N=o(N!)$$.

Clearly, $$\lim\_{N \to \infty} e^{1/N}=e^0=1$$, so $$\sqrt\[N]{e} \sim 1$$.
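A quick numeric sanity check of these limits (illustrative only, not part of the proof):

```python
import math

# 2^N / N! should collapse toward 0, and e^(1/N) toward 1, as N grows.
ratios = {n: 2.0**n / math.factorial(n) for n in (10, 20, 50)}
roots = {n: math.exp(1.0 / n) for n in (10, 20, 50)}
print(ratios)
print(roots)
```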

## Exercise 2 🌟

{% hint style="success" %}
This exercise demonstrates that $$g(N)=O(f(N))$$ also encompasses asymptotically negative functions.  Certain references, like CLRS's *Introduction to Algorithms*, restrict $$g(N)$$ to nonnegative functions and employ a specific notation O' to remove this limitation within O-notation.
{% endhint %}

We start by rewriting $$N/(N+1)=1-1/(N+1)$$. Let’s show that the second summand is $$O(1/N)$$

$$
\left|\frac{-1/(N+1)}{1/N}\right|=\frac{N}{N+1}<1 \text{ for all }N\ge 0,
$$

therefore, $$N/(N+1)=1+O(1/N)$$. We also have

$$
\lim\_{N \to \infty} \frac{N/(N+1)}{1-1/N}=1,
$$

hence, $$N/(N+1) \sim1-1/N$$.

## Exercise 3

$$
\lim\_{N \to \infty} \frac{N^\alpha}{N^\beta}=\lim\_{N \to \infty} \frac{1}{N^{\beta-\alpha}}=0, \qquad \text{when }\alpha<\beta.
$$

Therefore, $$N^\alpha=o(N^\beta)$$ for $$\alpha<\beta$$.

## Exercise 4

The O- and o-notations provide ways to express upper bounds (with o being the stronger assertion). Consequently, $$g(N)=o(f(N)) \implies g(N)=O(f(N))$$. By combining the results from the previous two exercises, for $$r\ge0$$ fixed, we can conclude that (see also section 4.3 of the book)

$$
1\pm k/N =1+O(1/N) \text{ for } 0 \le k \le r, \qquad (1+O(1/N))^r=1+O(1/N).
$$

This gives

$$
\begin{align\*}
\binom{N}{r} &= \frac{N(N-1)\cdots(N-r+1)}{r!} \\
&= \frac{N^r}{r!}\prod\_{k=0}^{r-1}\Big(1-\frac{k}{N}\Big) \\
&= \frac{N^r}{r!}\Big(1+O\Big(\frac{1}{N}\Big)\Big)^r \\
&= \frac{N^r}{r!}\Big(1+O\Big(\frac{1}{N}\Big)\Big) \\
&\therefore \boxed{\binom{N}{r} = \frac{N^r}{r!}+O(N^{r-1})}.
\end{align\*}
$$

Similarly, we have

$$
\begin{align\*}
\binom{N+r}{r} &= \frac{(N+r)(N+r-1)\cdots(N+1)}{r!} \\
&= \frac{N^r}{r!}\prod\_{k=1}^{r}\Big(1+\frac{k}{N}\Big) \\
&= \frac{N^r}{r!}\Big(1+O\Big(\frac{1}{N}\Big)\Big)^r \\
&= \frac{N^r}{r!}\Big(1+O\Big(\frac{1}{N}\Big)\Big) \\
&\therefore \boxed{\binom{N+r}{r} = \frac{N^r}{r!}+O(N^{r-1})}.
\end{align\*}
$$

### Alternative Proof

Instead of relying on basic algebraic manipulations involving the O-notation, we can represent the numerator of $$\binom{N}{r}$$ as a polynomial $$P(N)=N(N-1)\cdots(N-r+1)$$. Since there are $$r$$ terms in this product, multiplying the $$N$$ from each term gives us a leading factor of $$N^r$$. To find the next term (the coefficient of $$N^{r-1}$$), we sum the constant terms from each factor

$$
\text{Sum of constants} = -(0 + 1 + 2 + \dots + (r-1)) = -\frac{(r-1)r}{2}.
$$

Thus, the numerator expands to

$$
P(N) = N^r - \frac{r(r-1)}{2}N^{r-1} + \dots (\text{lower order terms}).
$$

Dividing by $$r!$$ gives

$$
\binom{N}{r} = \frac{N^r}{r!} - \frac{r(r-1)}{2 \cdot r!}N^{r-1} + \dots
$$

The first term is our leading term. All subsequent terms form a polynomial of degree $$r-1$$. Therefore, their sum is $$O(N^{r-1})$$ (see also the previous exercise). An analogous reasoning applies for the case $$\binom{N+r}{r}$$.
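The boxed estimate is easy to probe numerically. The sketch below (with $$r=4$$ chosen arbitrarily) checks that the error divided by $$N^{r-1}$$ stays bounded; by the alternative proof it should hover near $$r(r-1)/(2\cdot r!)=1/4$$:

```python
import math

# binom(N, r) = N^r/r! + O(N^{r-1}): the scaled error below stays bounded
# and hovers near r(r-1)/(2*r!) = 1/4 for r = 4.
r = 4
scaled_errors = {
    N: abs(math.comb(N, r) - N**r / math.factorial(r)) / N**(r - 1)
    for N in (10**2, 10**3, 10**4)
}
print(scaled_errors)
```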

## Exercise 5

$$
\begin{align\*}
\lim\_{N \to \infty} \frac{\log N}{N^\epsilon} &\overset{\text{L'Hôpital}}{=} \lim\_{N \to \infty} \frac{1/N}{\epsilon N^{\epsilon - 1}} \\
&= \lim\_{N \to \infty} \frac{1}{\epsilon N^\epsilon} = 0 \\
&\therefore \boxed{\log N = o(N^\epsilon)}.
\end{align\*}
$$

## Exercise 6

$$
\begin{align\*}
&\lim\_{N \to \infty} \frac{1}{2+\ln N} = 0 \\
&\therefore \boxed{\frac{1}{2+\ln N} = o(1)}.
\end{align\*}
$$

The cosine function oscillates between -1 and 1, thus as $$N \to \infty$$ we have $$1/(2+\cos N) \in\[1/3,1]$$. Clearly, it’s bounded from above, but doesn’t tend to 0. Therefore, it is $$O(1)$$ but not $$o(1)$$.

## Exercise 7

{% hint style="warning" %}
The definition from the book *"...we often refer informally to a quantity as being exponentially small if it is smaller than any negative power of N—that is,* $$O(1/N^M)$$ *for any positive M."* is imprecise. Namely, the word "smaller" pertains to an asymptotic property of a decay, so instead of O- it should use o-notation.
{% endhint %}

$$
\begin{align\*}
\lim\_{N \to \infty} \frac{e^{-N^\epsilon}}{N^{-M}} &= \lim\_{N \to \infty} \frac{N^M}{e^{N^\epsilon}} \\
&= \lim\_{N \to \infty} \frac{e^{\ln N^M}}{e^{N^\epsilon}} \\
&= \lim\_{N \to \infty} \frac{e^{M\ln N}}{e^{N^\epsilon}} = 0.
\end{align\*}
$$

The last line follows from [Exercise 5](#exercise-5), thus $$e^{-N^\epsilon}$$ is exponentially small.

## Exercise 8

$$
\begin{align\*}
\lim\_{N \to \infty} \frac{e^{-\log^2 N}}{N^{-M}} &= \lim\_{N \to \infty} \frac{N^M}{e^{\log^2 N}} \\
&= \lim\_{N \to \infty} \frac{e^{\ln N^M}}{e^{\log^2 N}} \\
&= \lim\_{N \to \infty} \frac{e^{M\ln N}}{e^{\log^2 N}} = 0.
\end{align\*}
$$

In the last line $$M\ln N = o(\log^2 N)$$, since after cancelling one logarithm on both sides, we are left with a constant term "against" a logarithm that grows to infinity. Observe that we don’t care about the base of a logarithm on the RHS, as it’s just a constant multiple of a natural logarithm. Therefore, $$e^{-\log^2 N}$$ is exponentially small.

$$
\begin{align\*}
\lim\_{N \to \infty} \frac{(\log N)^{-\log N}}{N^{-M}} &= \lim\_{N \to \infty} \frac{N^M}{(\log N)^{\log N}} \\
&= \lim\_{N \to \infty} \frac{e^{\ln N^M}}{e^{\ln {(\log N)^{\log N}}}} \\
&= \lim\_{N \to \infty} \frac{e^{M\ln N}}{e^{\log N (\ln {\log N})}} = 0.
\end{align\*}
$$

By employing the same reasoning as previously, we see that now $$\log \log N$$ grows faster than any constant. Therefore, $$(\log N)^{-\log N}$$ is exponentially small.

## Exercise 9 🌟

{% hint style="success" %}
This exercise shows why focusing on an asymptotically most significant term is beneficial. As the calculation shows, it’s a reliable estimator of the performance for sufficiently large problem sizes.
{% endhint %}

$$
\begin{align\*}
\lim\_{N \to \infty} \frac{(\alpha/\beta)^N}{N^{-M}} &= \lim\_{N \to \infty} \frac{N^M}{(\beta/\alpha)^N} \\
&= \lim\_{N \to \infty} \frac{e^{\ln N^M}}{e^{\ln {(\beta/\alpha)^N}}} \\
&= \lim\_{N \to \infty} \frac{e^{M\ln N}}{e^{N\ln {(\beta/\alpha)}}} = 0,
\end{align\*}
$$

so $$\alpha^N$$ is exponentially small relative to $$\beta^N$$ (see also [Exercise 5](#exercise-5)).

The absolute error is $$\alpha^N$$, whilst the relative error is $$\alpha^N/(\alpha^N+\beta^N)$$. We have

* For $$N=10$$ the absolute error is ≈2.6 and the relative error is ≈29.53%.
* For $$N=100$$ the absolute error is ≈13,781 and the relative error is ≈0.02%.

## Exercise 10

{% hint style="warning" %}
See the remark for [Exercise 7](#exercise-7).
{% endhint %}

Let $$f(N)$$ be exponentially small, meaning for every $$M > 0$$, $$f(N) = o(N^{-M})$$. For any polynomial $$g(N)=O(N^d)$$, where $$d \ge 0$$ is the degree of the polynomial, we can choose $$M'=M+d$$, so that $$f(N)g(N)=o(N^{-M'})O(N^d)=o(N^{-M})$$. This concludes the proof.

## Exercise 11

Chapter 2 of the book explains the Master Theorem with a note that floors/ceilings can be ignored without breaking the asymptotic result. The term $$a\_{\lfloor n/2 \rfloor}+O(1)$$ implies that every time we recurse, we introduce a small discrepancy $$O(1)$$ due to potential rounding errors. This gives

$$
\begin{align\*}
a\_n &= 2a\_{n/2} + f(n) \\
&= 2(a\_{\lfloor n/2 \rfloor} + O(1)) + f(n) \\
&= 2a\_{\lfloor n/2 \rfloor} + \underbrace{O(1)}\_{\text{error}} + f(n).
\end{align\*}
$$

In a binary recursion tree, where every internal node has 2 children, the number of nodes doubles at every level. The sum of this error across the whole tree is proportional to the number of nodes in the tree

$$
\text{Total Error} = \sum\_{k=0}^{\lg n} 2^k \cdot O(1) = O(n).
$$

Consequently, $$a\_n=(\text{Ideal Solution}) + O(n)$$, where *ideal solution* denotes the case when $$n$$ is a power of two. We’ll derive these ideal solutions and see that the extra $$O(n)$$ error has no significance on them.

1. &#x20;$$a\_n = 2a\_{n/2} + O(n)$$ essentially falls under case 2 of the Master Theorem. We have $$\lg n$$ levels in the recursion tree taking $$O(n)$$ work per level. Therefore, $$\boxed{a\_n=O(n\log n)}$$. We cannot use $$\Theta$$-notation because the term $$O(n)$$ allows for the driving function to be 0 or even negative (see the next exercise).
2. $$a\_n = 2a\_{n/2} + o(n)$$ can also be handled using the summation approach. We have $$\lg n$$ levels in the recursion tree taking $$o(n)$$ work per level. Therefore, $$\boxed{a\_n=o(n\log n)}$$.
3. $$a\_n \sim 2a\_{n/2} + n$$ means that $$a\_n = 2a\_{n/2} + n + o(n)$$. Dividing by $$n$$ and letting $$b\_n = a\_n/n$$ yields $$b\_n = b\_{n/2} + 1 + o(1)$$. Iterating this recurrence gives $$b\_n = b\_0 + \lg n + o(\lg n)$$, so $$a\_n = n \lg n + o(n \lg n)$$. Therefore, $$\boxed{a\_n \sim n \lg n}$$.
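The third case can be simulated directly. The sketch below (with a hypothetical base case $$a\_1=0$$ and driving function exactly $$n$$) shows the ratio $$a\_n/(n\lg n)$$ settling near 1:

```python
import math
from functools import lru_cache

# Empirical check of case 3: with a hypothetical base case a(1) = 0, the
# solution of a_n = 2*a_{floor(n/2)} + n approaches n*lg(n).
@lru_cache(maxsize=None)
def a(n):
    return 0 if n <= 1 else 2 * a(n // 2) + n

ratios = {n: a(n) / (n * math.log2(n)) for n in (2**10, 2**15, 10**6)}
print(ratios)  # for powers of two the ratio is exactly 1
```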

## Exercise 12

This is a continuation of the previous exercise, where the first case is already covered:

* $$a\_n = 2a\_{n/2} + \Theta(n)$$ entails that $$\boxed{a\_n=\Theta(n\log n)}$$.
* $$a\_n = 2a\_{n/2} + \Omega(n)$$ entails that $$\boxed{a\_n=\Omega(n\log n)}$$.

## Exercise 13

Both $$a(x)$$ and $$b(x)$$ grow like $$x^\alpha$$, and the additive perturbation $$c$$ in the argument of $$b(x)$$ becomes asymptotically negligible. To see this, we just need to iterate the recurrences until hitting the base cases.

For $$a(x)$$ the recursion unfolds as

$$
a(x) = f(x) + f(x/\beta) + f(x/\beta^2) + \cdots + f(x/\beta^k),
$$

where $$x/\beta^k < 1$$ eventually, so the sum terminates. Since $$f(x) = x^\alpha$$, each term is $$(x/\beta^i)^\alpha$$, and the sum behaves like a convergent infinite geometric series as $$x \to \infty$$

$$
a(x) = x^\alpha \sum\_{i=0}^{\log\_\beta x} \beta^{-\alpha i} \sim \frac{\beta^\alpha}{\beta^\alpha-1}x^\alpha.
$$

For $$b(x)$$ the recurrence is slightly perturbed

$$
b(x) = f(x) + f(x/\beta + c) + f(x/\beta^2 + c(1 + 1/\beta)) + \cdots
$$

But asymptotically $$x/\beta^i + O(1) \sim x/\beta^i$$, so each term in the sum is still $$\sim (x/\beta^i)^\alpha$$, and the same geometric series argument applies. Because $$\alpha > 0$$, the geometric series is "head-dominated" (the first few terms contribute the most to the sum). The errors introduced by the perturbation $$c$$ accumulate at the tail end of the series where the terms are small. See the next section for why/how $$f(x)=x^\alpha$$ belongs to the class of functions that permits this reasoning.

### Extending the Class of Functions

In order to apply the above logic to a broader class of functions $$f(x)$$, we would need the following to hold:

* The term $$f(x/\beta^i)$$ behaves like $$\beta^{-\alpha i}f(x)$$, so we can factor $$f(x)$$ out of the sum, just like we did with $$x^\alpha$$.
* The perturbation $$c$$ disappears for large $$x$$, since we expect $$\frac{f(x+c)}{f(x)} \to 1$$ for this class of functions.

If $$f(x)$$ is a positive non-decreasing [regularly varying function](https://www.math.u-szeged.hu/~kevei/tanitas/1819regvar/RegVar_notes.pdf) with index $$\rho \in \R$$ (see Definition 4 in the cited paper), meaning

$$
\lim\_{x \to \infty} \frac{f(\gamma x)}{f(x)} = \gamma^\rho \text{ for any } \gamma > 0,
$$

then we may assume the previously listed properties. This covers most functions polynomial in $$x$$ and $$\log x$$. For example, $$f(x)=x^2\log x$$ satisfies this, since

$$
f(\gamma x) = (\gamma x)^2 \log(\gamma x) = \gamma^2 x^2 (\log \gamma + \log x) \sim \gamma^2 x^2 \log x.
$$

Let’s employ the [squeeze theorem](https://mathworld.wolfram.com/SqueezeTheorem.html) to prove that we can perturb the arguments without impacting the asymptotic behavior. For any $$\epsilon > 0$$, we can choose $$x$$ large enough, such that the perturbed argument $$x+c$$ is eventually "sandwiched" between two constant scalings of $$x$$. The rest follows from $$f$$ being a positive non-decreasing regularly varying function.

$$
x(1 - \epsilon) < x + c < x(1 + \epsilon) \\\[0.2cm]
f(x(1 - \epsilon)) \leq f(x + c) \leq f(x(1 + \epsilon)) \\\[0.2cm]
\frac{f(x(1 - \epsilon))}{f(x)} \leq \frac{f(x + c)}{f(x)} \leq \frac{f(x(1 + \epsilon))}{f(x)} \\\[0.2cm]
\lim\_{x \to \infty} \frac{f(x(1 - \epsilon))}{f(x)} \leq \liminf\_{x \to \infty} \frac{f(x+c)}{f(x)} \leq \limsup\_{x \to \infty} \frac{f(x+c)}{f(x)} \leq \lim\_{x \to \infty} \frac{f(x(1 + \epsilon))}{f(x)} \\\[0.2cm]
(1 - \epsilon)^\rho \leq \liminf\_{x \to \infty} \frac{f(x+c)}{f(x)} \leq \limsup\_{x \to \infty} \frac{f(x+c)}{f(x)} \leq (1 + \epsilon)^\rho \\\[0.2cm]
\therefore \lim\_{x \to \infty} \frac{f(x+c)}{f(x)} = 1 \implies f(x+c) \sim f(x).
$$

The conclusion follows by letting $$\epsilon \to 0$$.

{% hint style="info" %}
A full generalization of this exercise is part of the [Akra-Bazzi method](https://en.wikipedia.org/wiki/Akra%E2%80%93Bazzi_method), which should be included in the toolbox of techniques for finding asymptotic approximations of recurrences.
{% endhint %}

## Exercise 14

According to [Exercise 3.17](https://evarga.gitbook.io/sh-intro-to-analysis-of-algs/chapter-3#exercise-3.17)

$$
\frac{f(z)}{g(z)} = \frac{1}{1 - 2z}.
$$

Thus, $$\beta=2$$, $$\nu=1$$, $$g'(1/2)=-2$$ and $$f(1/2)=1$$, so the approximate solution is $$a\_n \sim 2^n$$ as before.

{% hint style="warning" %}
The text of the exercise repeats the initial conditions, so we assume that it refers to the example given in section 3.3 of the book with $$a\_0=0$$ and $$a\_1=1$$.
{% endhint %}

$$
\frac{f(z)}{g(z)}=\frac{z}{(1-2z)^2}.
$$

Thus, $$\beta=2$$, $$\nu=2$$, $$g''(1/2)=8$$ and $$f(1/2)=1/2$$, so the approximate solution is $$a\_n \sim n2^{n-1}$$ as before.

## Exercise 15

According to [Exercise 3.18](https://evarga.gitbook.io/sh-intro-to-analysis-of-algs/chapter-3#exercise-3.18)

$$
\frac{f(z)}{g(z)}=\frac{z^2}{(1-z)^2(1+z)}.
$$

We have two poles of the same modulus, so we must take the one of highest multiplicity. Thus, $$\beta=1$$, $$\nu=2$$, $$g''(1)=4$$ and $$f(1)=1$$, so the approximate solution is $$a\_n \sim n/2$$ as expected.

## Exercise 16

According to [Exercise 3.20](https://evarga.gitbook.io/sh-intro-to-analysis-of-algs/chapter-3#exercise-3.20)

$$
\frac{f(z)}{g(z)}= \frac{z^2}{(1-z)^3}.
$$

Thus, $$\beta=1$$, $$\nu=3$$, $$g^{(3)}(1)=-6$$ and $$f(1)=1$$, so the approximate solution is $$a\_n \sim n^2/2$$ as expected.

## Exercise 17 🌟

{% hint style="success" %}
This exercise introduces several important theorems from real analysis.
{% endhint %}

According to the [fundamental theorem of algebra](https://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra) a degree $$t$$ polynomial has exactly $$t$$ roots, counted with multiplicity.

### Showing that the Roots are Distinct

We can rewrite $$P(z)=z^t - z^{t-1} - \dots - z - 1$$ as

$$
P(z) = z^t - \sum\_{k=0}^{t-1} z^k = z^t - \frac{z^t - 1}{z - 1} = \frac{z^{t+1} - 2z^t + 1}{z - 1}=\frac{Q(z)}{z-1}.
$$

Observe that $$P(1) =1-t\neq 0$$ for $$t>1$$, thus $$z=1$$ isn’t a root. Consequently, the roots of $$P(z)$$ are exactly the roots of $$Q(z)$$, excluding $$z=1$$. A polynomial $$Q(z)$$ has multiple roots only if $$Q(z)$$ and its derivative $$Q'(z)$$ share a root.

$$
Q'(z) = (t+1)z^t - 2tz^{t-1} = z^{t-1} \[ (t+1)z - 2t ].
$$

The only roots of the derivative are $$z=0$$ and $$z = \frac{2t}{t+1}$$. Since $$Q(0)=1$$, the root $$z=0$$ isn’t shared. For the other candidate, we have

$$
\begin{align\*}
Q\left(\frac{2t}{t+1}\right) &= \left(\frac{2t}{t+1}\right)^{t+1} - 2\left(\frac{2t}{t+1}\right)^t + 1 \\
&= \left(\frac{2t}{t+1}\right)^t \left\[ \frac{2t}{t+1} - 2 \right] + 1 \\
&= 1 - \left(\frac{2t}{t+1}\right)^t \frac{2}{t+1}.
\end{align\*}
$$

A shared root would require $$\left(\frac{2t}{t+1}\right)^t \frac{2}{t+1} = 1$$. But for all $$t>1$$ this product exceeds 1 (it equals $$32/27$$ at $$t=2$$ and keeps growing, since $$\frac{2t}{t+1} > 1$$ makes the power term grow exponentially). Therefore, all roots of $$P(z)$$ are distinct.

### Showing that Exactly One Root has Modulus > 1

Because the coefficients of $$P(z)$$ are real, the singleton dominant root must be a real number. We can verify that $$P(1) = 1-t < 0$$ and $$P(2) = 1 > 0$$, hence by the [intermediate value theorem](https://en.wikipedia.org/wiki/Intermediate_value_theorem), there is a real root in $$(1,2)$$.

By using [Rouché's theorem](https://en.wikipedia.org/wiki/Geometrical_properties_of_polynomial_roots#Discs_containing_some_roots) on the unit disk, we show that the remaining $$t-1$$ roots of $$P(z)$$ lie in the closed unit disk $$|z| \le 1$$; this theorem relates the number of zeros of two functions, $$f(z)$$ and $$f(z)+g(z)$$, inside a region. Let’s rearrange the terms of $$Q(z)$$

$$
Q(z) = \underbrace{-2z^t}\_{f(z)} + \underbrace{z^{t+1}+ 1}\_{g(z)}.
$$

We focus our attention on a circle of radius $$|z| = 1 + \epsilon$$ (where $$\epsilon$$ is a very small positive number). The magnitudes of our functions on the contour line are:

* $$|f(z)| = |-2z^t| = 2(1+\epsilon)^t \approx 2 + 2t\epsilon$$
* $$|g(z)| \le |z|^{t+1} + 1 = (1+\epsilon)^{t+1} + 1 \approx 1 + (1 + (t+1)\epsilon) = 2 + (t+1)\epsilon$$

Since $$t > 1$$, we have $$2t > t+1$$, thus $$|f(z)| > |g(z)|$$. By Rouché's theorem, the polynomial $$Q(z) = f(z) + g(z)$$ has the same number of roots inside the circle $$|z| = 1+\epsilon$$ as $$f(z)$$.

$$f(z)$$ has a root of multiplicity $$t$$ at the origin ($$z=0$$). Therefore, $$Q(z)$$ has exactly $$t$$ roots strictly inside the circle $$|z| < 1+\epsilon$$. Consequently, $$P(z)$$ has exactly $$t-1$$ roots inside the same disk; recall that we must exclude $$z=1$$, which was counted for $$Q(z)$$.
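Both claims are easy to corroborate numerically for small $$t$$ (an illustrative check, not a replacement for the proof):

```python
import numpy as np

# For a few small t, verify numerically that the roots of
# P(z) = z^t - z^{t-1} - ... - z - 1 are pairwise distinct and that
# exactly one of them has modulus greater than 1.
dominant = {}
for t in (2, 3, 5, 8):
    roots = np.roots([1] + [-1] * t)   # coefficients of P(z)
    mods = np.abs(roots)
    assert sum(m > 1 for m in mods) == 1
    assert min(abs(r - s) for i, r in enumerate(roots)
               for s in roots[i + 1:]) > 1e-6
    dominant[t] = max(mods)
print(dominant)  # the dominant roots creep toward 2
```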

## Exercise 18

We need to apply the result from the previous exercise. Since the recurrence formula reflects $$P(z)=z^t - z^{t-1} - \dots - z - 1$$, we know there is a single root $$\beta$$ (approaching 2 as $$t \to \infty$$) of highest modulus. According to Theorem 4.1, $$F\_N^{\[t]} \sim C\beta^N$$. Wikipedia describes this variant as the [$$n$$-acci sequence](https://en.wikipedia.org/wiki/Generalizations_of_Fibonacci_numbers#Higher_orders) and provides a precise expression for the leading term.
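We can also watch the ratio of consecutive terms converge to the dominant root $$\beta$$ (a sketch with hypothetical initial conditions $$0,\dots,0,1$$):

```python
import numpy as np

# The ratio of consecutive t-acci numbers settles at the dominant root
# beta of z^t - z^{t-1} - ... - 1, which tends to 2 as t grows.
def tacci_ratio(t, steps=60):
    seq = [0] * (t - 1) + [1]          # hypothetical initial conditions
    for _ in range(steps):
        seq.append(sum(seq[-t:]))
    return seq[-1] / seq[-2]

for t in (2, 3, 4, 10):
    beta = max(np.abs(np.roots([1] + [-1] * t)))
    print(t, tacci_ratio(t), beta)     # the two columns agree
```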

## Exercise 19

According to [Exercise 3.55](https://evarga.gitbook.io/sh-intro-to-analysis-of-algs/chapter-3#exercise-3.55)

$$
\frac{f(z)}{g(z)}= \frac{1}{(1-z^{d\_1})(1-z^{d\_2})\times \dots \times (1-z^{d\_t})}.
$$

Thus, $$\beta=1$$, $$\nu=t$$ and $$f(1)=1$$. We need to compute $$g^{(t)}(1)$$.

Each factor can be written as

$$
1 - z^{d\_i} = (1 - z)h\_i(z), \qquad h\_i(z)=z^{d\_i-1}+z^{d\_i-2}+\dots+1.
$$

Evidently, $$h\_i(1)=d\_i$$ and $$g(z)=(1-z)^th(z)$$, where $$h(z)=\prod\_{i=1}^th\_i(z)$$. Consequently, $$h(1)=\prod\_{i=1}^td\_i$$. By the [general Leibniz rule](https://en.wikipedia.org/wiki/General_Leibniz_rule) we have

$$
g^{(t)}(z) = \sum\_{k=0}^{t} \binom{t}{k} \frac{d^k}{dz^k}(1-z)^t \cdot h^{(t-k)}(z).
$$

At $$z = 1$$ all terms with $$k\<t$$ vanish because $$\frac{d^k}{dz^k}(1-z)^t$$ still contains a factor of $$1-z$$. For $$k=t$$ we have $$\frac{d^t}{dz^t}(1-z)^t = (-1)^tt!$$. Thus,

$$
g^{(t)}(1) = \binom{t}{t}(-1)^t\,t!\,h(1) = (-1)^t\,t! \prod\_{i=1}^{t} d\_i.
$$

By inserting all components into the formula from Theorem 4.1, we get the required identity of this exercise.

## Exercise 20

The Python script below generates the extended table. It uses [pandas](https://pandas.pydata.org/) to produce formatted output.

{% code expandable="true" %}

```python
import numpy as np
import pandas as pd
from scipy.special import digamma

# Euler-Mascheroni constant
gamma = 0.5772156649015328606

def harmonic(n):
    # H_n = psi(n + 1) + gamma, where psi is the digamma function
    return digamma(n + 1) + gamma

def estimate_1(n):
    # reference value 2(N+1)(H_{N+1} - 1)
    return 2 * (n + 1) * (harmonic(n + 1) - 1)

def estimate_2(n):
    # leading term 2N ln N
    return 2 * n * np.log(n)

def estimate_3(n):
    # second-order correction (2*gamma - 2)N
    return (2 * gamma - 2) * n

def estimate_4(n):
    # third-order correction 2(ln N + gamma) + 1
    return 2 * (np.log(n) + gamma) + 1

Ns = [10, 100, 1_000, 10_000, 100_000, 1_000_000]

rows = []
for n in Ns:
    e1 = estimate_1(n)
    e2 = estimate_2(n)
    e3 = e2 + estimate_3(n)   # cumulative: leading term plus corrections
    e4 = e3 + estimate_4(n)
    rows.append([n, e1, e2, e3, e4])

df = pd.DataFrame(rows, columns=[
    "N",
    "2(N+1)(H_{N+1} - 1)",
    "2N ln N",
    "+(2γ - 2)N",
    "+2(ln N + γ) + 1"
])
pd.set_option("display.float_format", "{:,.2f}".format)
print(df)
```

{% endcode %}

The code displays the following table:

```
         N  2(N+1)(H_{N+1} - 1)       2N ln N    +(2γ - 2)N  +2(ln N + γ) + 1
0       10                44.44         46.05         37.60             44.36
1      100               847.85        921.03        836.48            847.84
2     1000            12,985.91     13,815.51     12,969.94         12,985.91
3    10000           175,771.70    184,206.81    175,751.12        175,771.70
4   100000         2,218,053.41  2,302,585.09  2,218,028.23      2,218,053.41
5  1000000        26,785,482.23 27,631,021.12 26,785,452.45     26,785,482.23
```

## Exercise 21

Once the underlying mechanisms are properly understood, tool support is useful for saving time and reducing the chance of error. For example, we can type the following command into WolframAlpha

```wolfram
series ln(1-x+x^2) to order 3
```

It produces the requested expansion

$$
-x + \dfrac{1}{2}x^{2} + \dfrac{2}{3}x^{3} + O(x^{4}).
$$

## Exercise 22

$$
\begin{align\*}
\ln(N^\alpha+N^\beta) &= \ln N^\alpha + \ln \left(1+\underbrace{\frac{1}{N^{\alpha-\beta}}}\_{x \to 0}\right) \qquad (\alpha > \beta) \\
&= \alpha\ln N + \frac{1}{N^{\alpha-\beta}}-\frac{1}{2N^{2(\alpha-\beta)}}+\frac{1}{3N^{3(\alpha-\beta)}}+O\left(\frac{1}{N^{4(\alpha-\beta)}}\right).
\end{align\*}
$$

## Exercise 23 🌟

{% hint style="success" %}
This exercise demonstrates how to convert an asymptotic expansion to another scale.
{% endhint %}

Let’s start with a Taylor expansion of the expression in terms of $$x=1/(N-1)$$

$$
\begin{align\*}
\frac{N}{N-1}\ln \frac{N}{N-1} &= \left(1+\underbrace{\frac{1}{N-1}}\_{x}\right)\ln \left(1+\frac{1}{N-1}\right) \\
&= \frac{1}{N-1}+\frac{1}{2(N-1)^2}-\frac{1}{6(N-1)^3}+O\left(\frac{1}{(N-1)^4}\right) \qquad (1+x)(x-x^2/2+x^3/3+O(x^4))=x+x^2/2-x^3/6+O(x^4).
\end{align\*}
$$

A more streamlined version would be stated in terms of $$1/N$$, since $$N-1 \sim N$$. By taking $$x=1/N$$ in the geometric series, we get $$1/(N-1)=1/N+1/N^2+1/N^3+O(N^{-4})$$. We need to replace the factors involving $$1/(N-1)$$ in the above series with the given expansion of $$1/(N-1)$$. This gives

$$
\boxed{\frac{N}{N-1}\ln \frac{N}{N-1} = \frac{1}{N}+\frac{3}{2N^2}+\frac{11}{6N^3}+O\left(\frac{1}{N^4}\right)}.
$$

## Exercise 24

Let $$x=0.1$$, so $$\ln 0.9=\ln (1-0.1)$$. By using Table 4.2 and expanding each function to within $$O(x^5)$$, we get

$$
2+2x+x^2/2+x^3/2+x^4/3+O(x^5).
$$

We should be careful when summing the terms of the logarithm, since $$\ln(1-x)$$ contributes negative terms. The computed value is 2.20553, which is accurate to within $$10^{-4}$$.

## Exercise 25

Observe that $$9801=99^2=(100-1)^2$$, so

$$
\frac{1}{9801}=\frac{1}{(100-1)^2}=\frac{1}{10^4(1-0.01)^2}.
$$

Table 4.2 has an expansion of the geometric series. By differentiating both sides of that identity, we get

$$
\frac{1}{(1-x)^2} = \sum\_{k=1}^{\infty} k x^{k-1}.
$$

Now, we substitute $$x$$ into our derived series

$$
\begin{align\*}
\frac{1}{9801} &= \frac{1}{10^4} \times \left\[ 1 + 2(0.01) + 3(0.01)^2 + 4(0.01)^3 + \dots \right] \\
&= 0.00|01|02|03|04|\dots|47|48|49|50|\dots
\end{align\*}
$$

Vertical bars are used to delineate slots populated with successive natural numbers. Each slot accommodates two digits, enabling this sequence to progress up to the number 97. In other words, we can predict the digits in this manner to within $$10^{-196}$$. The pattern breaks at the slot with 98, as illustrated below:

<pre><code>Slot:     [97] [98] [99] [00] [01]
k = 97:   97
k = 98:        98
k = 99:             99
k = 100:            1    00
k = 101:                 1    01
----------------------------------
Result:   97   99   00   01   02 ...
</code></pre>

The carry from the 100th term impacts the slots containing 98 and 99.

We can generalize this to generate sequences of integers padded to $$n$$ digits

$$
\frac{1}{(10^n - 1)^2}=\frac{1}{10^{2n}(1 - 10^{-n})^2}.
$$

For example, $$1/81=0.012345679\dots$$ (8 is missing due to the carry-over from 9).
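The pattern (and the skipped 98) can be verified directly with Python's arbitrary-precision `decimal` module:

```python
from decimal import Decimal, getcontext

# Inspect the decimal expansion of 1/9801: the two-digit slots count
# upward, and 98 is skipped because of the carry.
getcontext().prec = 220
digits = str(Decimal(1) / Decimal(9801))[2:]          # drop the "0."
pairs = [digits[i:i + 2] for i in range(0, 200, 2)]
print(pairs[:6], "...", pairs[95:100])
```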

## Exercise 26

If we apply the nonasymptotic version of Stirling’s formula to describe $$\binom{2N}{N}$$, we get

$$
\binom{2N}{N}
\=\frac{(2N)!}{(N!)^2}
\=\frac{4^N}{\sqrt{\pi N}}\;\frac{1+\dfrac{\theta\_{2N}}{24N}}{\bigg(1+\dfrac{\theta\_N}{12N}\bigg)^2}.
$$

Our equation becomes

$$
\underbrace{\frac{N4^{N-1}}{\binom{2N}{N}}}\_{R\_N}
\=\underbrace{\frac{N\sqrt{\pi N}}{4}}\_{A\_N}\;\underbrace{\frac{\bigg(1+\dfrac{\theta\_N}{12N}\bigg)^2}{1+\dfrac{\theta\_{2N}}{24N}}}\_{E\_N},
$$

where $$R\_N,A\_N,E\_N$$ represent the reference, approximate and error terms, respectively.

Since $$0<\theta\_N<1$$ and $$0<\theta\_{2N}<1$$, we have the following bounds

$$
1 < \big(1+\tfrac{\theta\_N}{12N}\big)^2 < \Big(1+\tfrac{1}{12N}\Big)^2,
\qquad
1 < 1+\tfrac{\theta\_{2N}}{24N} < 1+\tfrac{1}{24N}.
$$

This gives

$$
A\_N\cdot\frac{1}{1+\dfrac{1}{24N}}
\<R\_N < A\_N\cdot\Bigg(1+\dfrac{1}{12N}\Bigg)^2.
$$

Let the relative error $$\delta\_N$$ be defined such that $$R\_N = A\_N(1 + \delta\_N)$$. This gives

$$
\begin{align\*}
\frac{1}{1 + \frac{1}{24N}} &< 1 + \delta\_N < \left(1 + \frac{1}{12N}\right)^2 \\
\frac{1}{1 + \frac{1}{24N}} - 1 &< \delta\_N < \left(1 + \frac{1}{12N}\right)^2 - 1 \\
-\frac{1}{24N+1} &< \delta\_N < \frac{1}{6N} + \frac{1}{144N^2}.
\end{align\*}
$$

We can assert that $$\delta\_N=O(1/N)$$.
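The derived bounds are easy to spot-check numerically (`math.comb` keeps the central binomial coefficient exact until the final division):

```python
import math

# Check the derived bounds on the relative error delta_N = R_N/A_N - 1;
# delta_N indeed behaves like O(1/N) (it is roughly 1/(8N)).
deltas = {}
for N in (5, 10, 50, 100):
    R = N * 4**(N - 1) / math.comb(2 * N, N)
    A = N * math.sqrt(math.pi * N) / 4
    delta = R / A - 1
    assert -1 / (24 * N + 1) < delta < 1 / (6 * N) + 1 / (144 * N**2)
    deltas[N] = delta
print(deltas)
```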

## Exercise 27

According to Table 4.4, $$H\_{1000}=7.4854709$$.

The specific bounds implied by the absolute formula are

$$
7.4749709 \le \hat H\_{1000} \le 7.4949709.
$$

The specific bounds implied by the relative formula are

$$
0 \le \hat H\_{1000} \le 16.9077553.
$$

The LHS is truncated to zero as a negative lower bound is meaningless. The RHS is huge, since the assumed constant is too large. In a relative formula, we expect $$h(N)=o(1)$$ to decay rapidly even for small problem sizes.

{% hint style="info" %}
Recall that $$O(h(N))$$ with a constant $$|C|<10$$ covers the range $$\pm Ch(N)$$.
{% endhint %}

## Exercise 28

The exact value is $$T\_{10}=16,796$$.

The specific bounds implied by the relative formula are

$$
0 \le \hat T\_{10} \le 37,416.
$$

## Exercise 29

Let’s split the original recurrence at the threshold value $$M>0$$

$$
f(N)=\sum\_{k\ge 0} a\_k N^{-k}
\=\sum\_{0\le k< M} a\_k N^{-k}+\underbrace{\sum\_{k\ge M} a\_k N^{-k}}\_{R\_M(N)}.
$$

Since $$f(N)$$ converges, $$R\_M(N)$$ converges too. We can rearrange the remainder term as

$$
R\_M(N) = \sum\_{j\ge 0} a\_{M+j}N^{-(M+j)} = N^{-M}\underbrace{\sum\_{j\ge 0} a\_{M+j}N^{-j}}\_{C}= C\cdot N^{-M}.
$$

Hence, $$R\_M(N)=O(N^{-M})$$, which concludes the proof.

## Exercise 30

It’s not hard to see from Table 3.3 that the terms of our sum can be represented with an EGF

$$
A(z)=\frac{1}{1-\cfrac{z}{N}}=\frac{N}{N-z}=\sum\_{k\ge0} \frac{k!}{N^k} \frac{z^k}{k!} = \sum\_{k\ge0}\left(\frac{z}{N}\right)^k.
$$

This gives

$$
\begin{align\*}
\sum\_{k\ge0} \frac{k!}{N^k}  &= \sum\_{k\ge0} \frac{1}{N^k} \underbrace{\int\_0^\infty e^{-z} z^k \, dz}\_{k!=\Gamma(k+1)} \\
&= \int\_0^\infty e^{-z} \sum\_{k\ge0} \left(\frac{z}{N}\right)^k \, dz \\
&= \int\_0^\infty e^{-z}\frac{N}{N - z} \, dz \\
&= N e^{-N} \operatorname{Ei}(N) = f(N).
\end{align\*}
$$

The derivation uses the [gamma function](https://en.wikipedia.org/wiki/Gamma_function) and [exponential integral](https://en.wikipedia.org/wiki/Exponential_integral) to craft the "closed-form" solution. The last line follows after employing the substitution $$u=z-N$$.
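Although the series diverges, truncating it near its smallest term (around $$k=N$$) reproduces $$Ne^{-N}\operatorname{Ei}(N)$$ to roughly the size of the first omitted term. The sketch below computes $$\operatorname{Ei}$$ from its convergent series rather than relying on a library:

```python
import math

EULER_GAMMA = 0.5772156649015329

def Ei(x, terms=150):
    # convergent series: Ei(x) = gamma + ln(x) + sum_{k>=1} x^k/(k*k!)
    return EULER_GAMMA + math.log(x) + sum(
        x**k / (k * math.factorial(k)) for k in range(1, terms))

# Truncate the divergent series sum k!/N^k just before its smallest term.
N = 20
partial = sum(math.factorial(k) / N**k for k in range(N))
closed = N * math.exp(-N) * Ei(N)
print(partial, closed, abs(partial - closed))
```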

## Exercise 31

There is a complementary $$\omega$$-notation for the opposite direction with respect to the o-notation. It denotes a strict lower bound (grows strictly faster than). We use it in our list below:

* $$e^N=O(N^2)$$ is false, since $$\lim\_{N \to \infty}\frac{e^N}{N^2}=\infty$$. We can say that $$e^N=\omega(N^2)$$.
* $$e^N=O(2^N)$$ is false (see [Exercise 9](#exercise-9)). We can say that $$e^N=\omega(2^N)$$.
* $$2^{-N}$$ is exponentially small, so $$2^{-N}=O(1/N^{10})$$ is true (see [Exercise 7](#exercise-7)).
* $$N^{\ln N}=O(e^{\ln^2 N})$$ is true, since $$N^{\ln N}=e^{\ln {N^{\ln N}}}=e^{\ln^2 N}$$.

## Exercise 32

The Taylor expansion of $$e^x$$ in terms of $$x=1/(N+1)$$ is

$$
e^{\frac{1}{N+1}}=1+\frac{1}{N+1}+\frac{1}{2(N+1)^2}+O\bigg(\frac{1}{(N+1)^3}\bigg).
$$

We can provide a more streamlined version in terms of $$1/N$$ (see also [Exercise 23](#exercise-23)). The book shows the expansion of $$1/(N+1)=1/N-1/N^2+O(N^{-3})$$. We need to replace the factors involving $$1/(N+1)$$ in the expansion of $$e^{1/(N+1)}$$ with the given expansion of $$1/(N+1)$$. We get

$$
\boxed{e^{\frac{1}{N+1}} = 1 + \frac{1}{N} - \frac{1}{2N^2} + O\left(N^{-3}\right)}.
$$

## Exercise 33

We can expand the number of terms for $$H\_N$$ from Table 4.6. The first approximation to within $$O(1/N)$$ is

$$
\begin{align\*}
(H\_N)^2 &= \bigg(\ln N+\gamma+\frac{1}{2N}+O(N^{-2}) \bigg) \bigg(\ln N+\gamma+\frac{1}{2N}+O(N^{-2}) \bigg) \\
&= \ln^2 N+2\gamma\ln N+\gamma^2+\frac{\ln N}{N}+O(1/N).
\end{align\*}
$$

The second approximation to within $$o(1/N)$$ simply includes $$\gamma/N$$ that was previously absorbed into $$O(1/N)$$. Thus,

$$
(H\_N)^2 =  \ln^2 N+2\gamma\ln N+\gamma^2+\frac{\ln N}{N}+\frac{\gamma}{N}+o(1/N).
$$

## Exercise 34

$$
\begin{align\*}
\cot x &= \frac{\cos x}{\sin x} = \frac{1-x^2/2+x^4/24+O(x^6)}{x-x^3/6+x^5/120+O(x^7)} \\
&= (1-x^2/2+x^4/24+O(x^6))\,\frac{1}{x-x^3/6+x^5/120+O(x^7)} \\
&= \frac{1-x^2/2+x^4/24+O(x^6)}{x}\,\frac{1}{1-x^2/6+x^4/120+O(x^6)} \\
&= \frac{1-x^2/2+x^4/24+O(x^6)}{x}\,(1+x^2/6-x^4/120+x^4/36+O(x^6)) \\
&= \frac{1-x^2/3-x^4/45+O(x^6)}{x} \\
&= 1/x-x/3-x^3/45+O(x^5).
\end{align\*}
$$

## Exercise 35

$$
\begin{align\*}
\frac{x}{e^x-1} &= -\frac{x}{1-e^x} \\
&= -\frac{x}{1-(1+x+x^2/2+x^3/6+x^4/24+x^5/120+O(x^6))} \\
&= \frac{1}{1+x/2+x^2/6+x^3/24+x^4/120+O(x^5)} \\
&= 1-x/2-x^2/6-x^3/24-x^4/120+(-x/2-x^2/6-x^3/24)^2+(-x/2-x^2/6)^3+(-x/2)^4+O(x^5) \\
&= 1-x/2+x^2/12-x^4/720+O(x^5).
\end{align\*}
$$
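A quick numerical confirmation of the expansion: the residual after the terms up to $$x^4$$ should sit far below the last kept term $$x^4/720$$ for small $$x$$:

```python
import math

# Residual of 1 - x/2 + x^2/12 - x^4/720 against x/(e^x - 1); it is of
# order x^6, so it shrinks much faster than the last kept term.
def lhs(x):
    return x / math.expm1(x)   # expm1 avoids cancellation for small x

def series(x):
    return 1 - x / 2 + x**2 / 12 - x**4 / 720

residuals = {x: abs(lhs(x) - series(x)) for x in (0.5, 0.1)}
print(residuals)
```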

## Exercise 36

A full treatment of this topic is available in the paper [Asymptotic Expansions of Central Binomial\
Coefficients and Catalan Numbers](https://cs.uwaterloo.ca/journals/JIS/VOL17/Elezovic/elezovic5.pdf).

## Exercise 37

$$
\begin{align\*}
\frac{1}{N+1}\binom{3N}{N} &= \exp\bigg\{\ln ((3N)!)-\ln (N!)-\ln ((2N)!)-\ln(N+1)\bigg\}  \\
&= \exp\bigg\{\left(3N+\frac{1}{2}\right)\ln (3N)-3N+\ln \sqrt{2\pi}+O(1/N) \\
&\qquad -\left(N+\frac{1}{2}\right)\ln N+N-\ln \sqrt{2\pi}+O(1/N) \\
&\qquad -\left(2N+\frac{1}{2}\right)\ln (2N)+2N-\ln \sqrt{2\pi}+O(1/N) \\
&\qquad -\ln N+O(1/N)\bigg\} \\
&= \exp\bigg\{\left(3N+\frac{1}{2}\right)\ln 3-\left(2N+\frac{1}{2}\right)\ln 2-\frac{3}{2}\ln N-\ln \sqrt{2\pi}+O(1/N)\bigg\} \\
&= \frac{27^N}{4^N}\cdot\frac{\sqrt 3}{2\sqrt{\pi} N^{3/2}}\left(1+O(1/N)\right).
\end{align\*}
$$
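The derived asymptotic can be sanity-checked numerically; the ratio of the exact value to the leading term should drift toward 1, with a gap of order $$1/N$$:

```python
import math

# Ratio of (1/(N+1))*binom(3N, N) to its leading-order estimate; the gap
# closes like 1/N as N grows.
def exact(N):
    return math.comb(3 * N, N) / (N + 1)

def leading(N):
    return (27 / 4)**N * math.sqrt(3) / (2 * math.sqrt(math.pi) * N**1.5)

ratios = {N: exact(N) / leading(N) for N in (10, 50, 100)}
print(ratios)
```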

## Exercise 38

$$
\begin{align\*}
\frac{(3N)!}{(N!)^3} &= \exp\bigg\{\ln ((3N)!)-3\ln (N!)\bigg\}  \\
&= \exp\bigg\{\left(3N+\frac{1}{2}\right)\ln (3N)-3N+\ln \sqrt{2\pi}+O(1/N) \\
&\qquad -3\left(N+\frac{1}{2}\right)\ln N+3N-3\ln \sqrt{2\pi}+O(1/N)\bigg\} \\
&= \exp\bigg\{\left(3N+\frac{1}{2}\right)\ln 3-\ln N-2\ln \sqrt{2\pi}+O(1/N)\bigg\} \\
&= 27^N\frac{\sqrt 3}{2\pi N}\left(1+O(1/N)\right).
\end{align\*}
$$

## Exercise 39

$$
\begin{align\*}
\left(1-\frac{\lambda}{N}\right)^N &= \exp\bigg\{N\ln \left(1-\frac{\lambda}{N}\right)\bigg\}  \\
&= \exp\bigg\{N\left(-\frac{\lambda}{N}+O(1/N^2)\right)\bigg\} \\
&= \exp\bigg\{-\lambda+O(1/N)\bigg\} \\
&= e^{-\lambda}+O(1/N).
\end{align\*}
$$

Based on the asymptotic expansion, the approximate value of the expression is $$e^{-\lambda}$$ for large $$N$$.

## Exercise 40

$$
\begin{align\*}
\left(1-\frac{\ln N}{N}\right)^N &= \exp\bigg\{N\ln \left(1-\frac{\ln N}{N}\right)\bigg\}  \\
&= \exp\bigg\{N\left(-\frac{\ln N}{N}-\frac{\ln^2 N}{2N^2}-\frac{\ln^3 N}{3N^3}+O(\ln^4 N/N^4)\right)\bigg\} \\
&= \exp\bigg\{-\ln N-\frac{\ln^2 N}{2N}-\frac{\ln^3 N}{3N^2}+O(\ln^4 N/N^3)\bigg\} \\
&= \frac{1}{N}\exp\bigg\{-\frac{\ln^2 N}{2N}-\frac{\ln^3 N}{3N^2}\bigg\}(1+O(\ln^4 N/N^3)) \\
&= \frac{1}{N}-\frac{\ln^2 N}{2N^2}+\frac{\ln^4 N}{8N^3}+o(\ln^4 N/N^3).
\end{align\*}
$$

The middle term in the penultimate line must be expanded as $$1+x+x^2/2$$, where $$x$$ is the argument of the exponential function; we then retain the three asymptotically largest terms.

## Exercise 41

When interest is compounded daily at a 10% annual rate on a $10,000 principal, after 365 days the amount is $$\$10{,}000(1+0.1/365)^{365}$$. Instead of computing this value directly, let’s apply the result from [Exercise 39](#exercise-39). Here, we have $$\lambda=-0.1$$, so the total amount increases to $$\approx \$10{,}000 \times e^{0.1}=\$11{,}051.7$$. This is about $51.7 more than what would be paid if interest were compounded once a year.

{% hint style="info" %}
The exact formula gives ≈$11,051.56, about $0.15 less than our quick calculation. This showcases the advantages of using asymptotic approximations.
{% endhint %}
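A minimal check of the two figures (amounts in dollars):

```python
from math import exp

principal, rate, days = 10_000, 0.10, 365
exact = principal * (1 + rate / days) ** days   # daily compounding
approx = principal * exp(rate)                  # asymptotic shortcut
print(round(exact, 2), round(approx, 2))
```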

## Exercise 42

$$
\begin{align\*}
\exp\bigg\{1+\frac{1}{N}+O(1/N^2)\bigg\} &= e \cdot \exp\bigg\{\frac{1}{N}+O(1/N^2)\bigg\} \\
&= e\left(1+\frac{1}{N}+O(1/N^2)\right) \\
&= e+\frac{e}{N}+O(1/N^2).
\end{align\*}
$$

## Exercise 43

$$
\begin{align\*}
\ln\left(\sin\left(\frac{1}{N!}\right)\right) &= \ln\left(\frac{1}{N!} \left(1+O\left(\frac{1}{(N!)^2}\right)\right)\right) \\
&= \ln\left(\frac{1}{N!}\right) + \ln\left(1+O\left(\frac{1}{(N!)^2}\right)\right) \\
&= -\ln (N!) + O\left(\frac{1}{(N!)^2}\right) \\
&= -N \ln N + N - \ln \sqrt N - \ln \sqrt{2\pi} - \frac{1}{12N} + O\left(\frac{1}{N^2}\right).
\end{align\*}
$$

## Exercise 44

$$
\begin{align\*}
\sin(\tan(1/N)) &= \sin\left(\frac{1}{N}+\frac{1}{3N^3}+O\left(\frac{1}{N^5}\right)\right) \\
&= \frac{1}{N}+\frac{1}{3N^3}-\frac{1}{6N^3}+O\left(\frac{1}{N^5}\right) \\
&= \frac{1}{N}+\frac{1}{6N^3}+O\left(\frac{1}{N^5}\right) \sim \frac{1}{N}.
\end{align\*}
$$

$$
\begin{align\*}
\tan(\sin(1/N)) &= \tan\left(\frac{1}{N}-\frac{1}{6N^3}+O\left(\frac{1}{N^5}\right)\right) \\
&= \frac{1}{N}-\frac{1}{6N^3}+\frac{1}{3N^3}+O\left(\frac{1}{N^5}\right) \\
&= \frac{1}{N}+\frac{1}{6N^3}+O\left(\frac{1}{N^5}\right) \sim \frac{1}{N}.
\end{align\*}
$$

At this point, we could say that the order of growth of $$\sin(\tan(1/N))-\tan(\sin(1/N))$$ is $$O(N^{-5})$$, but this would be a loose bound. Since the first two terms match, we must expand until we find a difference. Let $$x=1/N$$ and leverage WolframAlpha to perform the hard work.

$$
\begin{align\*}
\text{series sin(tan(x)) to order 7} &= x + \frac{1}{6}x^3 - \frac{1}{40}x^5 - \frac{55}{1008}x^7 + O(x^9) \\
\text{series tan(sin(x)) to order 7} &= x + \frac{1}{6}x^3 - \frac{1}{40}x^5 - \frac{107}{5040}x^7 + O(x^9).
\end{align\*}
$$

The expansions first differ at the fourth term; the difference is $$\left(-\frac{55}{1008}+\frac{107}{5040}\right)x^7=-\frac{x^7}{30}$$, so the order of growth of $$\sin(\tan(1/N))-\tan(\sin(1/N))$$ is $$O(N^{-7})$$.
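From the two series above, the difference is $$-x^7/30+O(x^9)$$, which we can confirm numerically (the evaluation point $$x=0.02$$ is an arbitrary small value standing in for $$1/N$$):

```python
from math import sin, tan

x = 0.02
diff = sin(tan(x)) - tan(sin(x))
print(diff / x ** 7)  # approaches -1/30 as x -> 0
```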

## Exercise 45

$$
\begin{align\*}
T\_N &= \frac{4^N}{\sqrt {\pi N^3}}(1+O(1/N)) \\
H\_N &= \ln N+\gamma+O(1/N).
\end{align\*}
$$

By substituting $$N=T\_N$$ into the expansion of $$H\_N$$, we get

$$
H\_{T\_N} = 2N\ln 2-\ln \sqrt {\pi N^3}+\gamma+O(1/N).
$$

## Exercise 46

This exercise is about the [asymptotic expansion of the Lambert W function](https://en.wikipedia.org/wiki/Lambert_W_function#Asymptotic_expansions). The cited article contains the full expansion in terms of the substitutions $$L\_1$$ and $$L\_2$$. We sketch here the steps leading to that expression using those symbols.

We derive the basic equation marked with an asterisk

$$
\begin{align\*}
n &= a\_n e^{a\_n} \\
\ln n &= \ln a\_n + a\_n \\
a\_n &= L\_1 - \ln a\_n \qquad \text{(*)}.
\end{align\*}
$$

The first iteration of bootstrapping gives $$a\_n \sim L\_1$$. The second iteration gives $$a\_n \sim L\_1 - L\_2$$. The third iteration gives

$$
\begin{align\*}
a\_n &\sim L\_1 - \ln(L\_1 - L\_2) \\
&= L\_1 - \ln \left( L\_1 \left( 1 - \frac{L\_2}{L\_1} \right) \right) \\
&= L\_1 - L\_2 - \ln \left( 1 - \frac{L\_2}{L\_1} \right) \\
&\sim L\_1 - L\_2 + \frac{L\_2}{L\_1}.
\end{align\*}
$$

By carrying out one more iteration, we get the equation from the cited article to within the requested accuracy. For the sake of completeness, here is the identity without using symbols $$L\_1$$ and $$L\_2$$

$$
\boxed{a\_n = \ln n - \ln \ln n + \frac{\ln \ln n}{\ln n} + \frac{(\ln \ln n)^2 - 2\ln \ln n}{2(\ln n)^2} + O\left( \frac{1}{(\log n)^3} \right)}.
$$
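The boxed expansion can be checked against a numerically computed root of $$a e^a = n$$ (bisection; the value $$n=10^6$$ is an arbitrary choice):

```python
from math import exp, log

n = 1e6
L1, L2 = log(n), log(log(n))
approx = L1 - L2 + L2 / L1 + (L2 * L2 - 2 * L2) / (2 * L1 * L1)

# Locate the true root of a*e^a = n by bisection (the function is increasing).
lo, hi = 1.0, 20.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mid * exp(mid) < n:
        lo = mid
    else:
        hi = mid

print(abs(lo - approx))  # well within the O(1/(log n)^3) error bound
```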

## Exercise 47 🌟

{% hint style="success" %}
This exercise generalizes the reversion method presented in the book.
{% endhint %}

We need to provide the reversion of the power series *y* = *c*<sub>0</sub> + *c*<sub>1</sub>*x* + *c*<sub>2</sub>*x*<sup>2</sup> + *c*<sub>3</sub>*x*<sup>3</sup> + *O*(*x*<sup>4</sup>). By applying the hint from the book, this equation transforms into $$z = x + c'\_2x^2 + c'\_3x^3 + O(x^4)$$. We already know the reversion of this form from the book:

$$
x=z-c'\_2z^2+(2c'^2\_2-c'\_3)z^3+O(z^4).
$$

Now, we just need to express it again in terms of $$y$$ and the original coefficients

$$
\boxed{x=\frac{y-c\_0}{c\_1} - \frac{c\_2}{c\_1^3}(y-c\_0)^2 + \frac{2c\_2^2 - c\_1 c\_3}{c\_1^5}(y-c\_0)^3 + O((y-c\_0)^4)}.
$$

{% hint style="info" %}
The expansion of $$x$$ is in powers of the small quantity $$y - c\_0$$, not $$y$$ itself (unless $$c\_0 = 0$$). Using purely $$y$$ is incorrect when $$c\_0 \neq 0$$ because $$y$$ doesn’t tend to zero.
{% endhint %}
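The boxed reversion formula can be checked numerically on hypothetical coefficients (the values below are arbitrary; any $$c\_1 \neq 0$$ works):

```python
# Hypothetical coefficients of y = c0 + c1*x + c2*x^2 + c3*x^3.
c0, c1, c2, c3 = 1.0, 2.0, 3.0, 4.0
x = 0.01
y = c0 + c1 * x + c2 * x ** 2 + c3 * x ** 3

d = y - c0  # the small quantity the reversion is expanded in
x_rev = d / c1 - c2 / c1 ** 3 * d ** 2 + (2 * c2 ** 2 - c1 * c3) / c1 ** 5 * d ** 3
print(abs(x_rev - x))  # O(x^4) residual
```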

## Exercise 48

$$\sum\_{k\ge 1} 1/k^2=\pi^2/6$$ is known as the [Basel problem](https://en.wikipedia.org/wiki/Basel_problem). According to the [direct comparison test](https://en.wikipedia.org/wiki/Direct_comparison_test), the sum $$\sum\_{k\ge 1} 1/(k^2H\_k)$$ also converges to some constant $$C$$. The tail $$R\_N$$ can be approximated using the asymptotic expansion of $$H\_N\sim\ln N$$ (see Table 4.6). This gives

$$
R\_N=\sum\_{k>N} \frac{1}{k^2H\_k}<\sum\_{k>N} \frac{1}{k^2\ln k} \approx \int\_N^\infty \frac{1}{x^2 \ln x} \, dx.
$$

We can estimate $$R\_N$$ by using integration by parts to find the leading asymptotic term. Let $$u = (\ln x)^{-1}$$ and $$dv = x^{-2} dx$$. Then $$du = -x^{-1}(\ln x)^{-2} dx$$ and $$v = -x^{-1}$$.

$$
\begin{align\*}
\int\_N^\infty \frac{1}{x^2 \ln x} dx &= -\frac{1}{x \ln x} \bigg|\_N^\infty - \int\_N^\infty \frac{1}{x^2 (\ln x)^2} dx \\
&\approx \frac{1}{N \ln N} - \frac{1}{\ln N}\int\_N^\infty \frac{1}{x^2 \ln x} dx \\
&\therefore \boxed{R\_N=\frac{1}{N \ln N} +O\left(\frac{1}{N \log^2 N}\right)}.
\end{align\*}
$$

This gives

$$
\sum\_{k=1}^{N} \frac{1}{k^{2} H\_{k}} =C -\frac{1}{N \ln N} + O\left(\frac{1}{N \log^2 N}\right),
$$

so that $$1+1/6+1/16.5+1/(3\ln 3) \approx 1.53$$ gives a first rough estimate of the constant $$C$$ from just the first 3 terms plus the tail correction. The correction overshoots at such a small $$N$$, since $$H\_k=\ln k+\gamma+o(1)$$ rather than $$\ln k$$; larger cut-offs give the value of this constant to any reasonable desired accuracy.
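The constant $$C$$ can be pinned down numerically by adding the tail estimate back to a partial sum; a minimal sketch (the cut-offs $$N=1000$$ and $$N=2000$$ are arbitrary):

```python
from math import log

def estimate_C(N):
    h, s = 0.0, 0.0
    for k in range(1, N + 1):
        h += 1.0 / k            # harmonic number H_k
        s += 1.0 / (k * k * h)  # partial sum of 1/(k^2 H_k)
    return s + 1.0 / (N * log(N))  # add the tail estimate back

print(estimate_C(1000), estimate_C(2000))  # both stabilize near C
```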

## Exercise 49

$$
\sum\_{k=0}^{N} \frac{1}{F\_k} = \psi - R\_N.
$$

The infinite sum converges to the [reciprocal Fibonacci constant](https://en.wikipedia.org/wiki/Reciprocal_Fibonacci_constant) $$\psi$$. Fibonacci numbers grow exponentially, $$F\_k \sim\phi^k / \sqrt{5}$$ where $$\phi$$ is the golden ratio. Consequently, the terms in the tail decay exponentially and the tail sum can be approximated by a geometric series. Thus, $$R\_N=O(\phi^{-N})$$.

## Exercise 50

$$
\begin{align\*}
\sum\_{k=0}^{N} \frac{2^k}{2^k+1} &= \sum\_{k=0}^{N} \left(1 - \frac{1}{2^k+1}\right) \\
&= N+1 - \sum\_{k=0}^{N} \frac{1}{2^k+1} \\
&= N+1 - \left(\underbrace{\sum\_{k\ge0} \frac{1}{2^k+1}}\_{C}-\underbrace{\sum\_{k>N} \frac{1}{2^k+1}}\_{R\_N}\right).
\end{align\*}
$$

The tail can be bounded analogously to the example given in the book pertaining to tries. Therefore, $$R\_N=O(2^{-N})$$.

## Exercise 51

Here, the last term dominates, since the terms in the sum are rapidly increasing.

$$
\sum\_{k=0}^{N} 2^{k^2}=2^{N^2} \left(1 + O\left(2^{-2N}\right)\right).
$$

## Exercise 52

$$
\begin{align\*}
\sum\_{k=1}^N k^t &= \sum\_{k=0}^N k^t \\
&= \frac{N^{t+1}}{t+1} + \frac{N^t}{2} + \sum\_{i=1}^{\lfloor t/2 \rfloor} \frac{B\_{2i}}{(2i)!} t^{\underline{2i-1}} N^{t-2i+1}.
\end{align\*}
$$

Notice the usage of the [falling factorial](https://en.wikipedia.org/wiki/Falling_and_rising_factorials) (see also Chapter 3 of the book).

## Exercise 53

Let $$h(x)=\frac{1}{1+x}$$, so $$h^{(n)}(x)=(-1)^n\frac{n!}{(1+x)^{n+1}}$$. This gives

$$
\begin{align\*}
\sum\_{0\le k\le N} h(k/N) &\sim N\int\_0^1 h(x)\,dx+\frac{h(0)+h(1)}{2}+\sum\_{i\ge1}\frac{B\_{2i}}{(2i)!}\frac{1}{N^{2i-1}}\;h^{(2i-1)}(x)\Big|\_0^1 \\
&= N\ln 2+\frac{3}{4}+\sum\_{i\ge1} \frac{B\_{2i}}{2i}\,\frac{1-2^{-2i}}{N^{2i-1}} \\
&= N\ln 2+\frac34+\frac{1}{16N}-\frac{1}{128N^{3}}+\cdots \\
&= N\ln 2+\frac34+O(N^{-1}).
\end{align\*}
$$

## Exercise 54

Let $$h(x)=\frac{1}{1+x^2}$$, so $$h'(x)=-\frac{2x}{(1+x^{2})^{2}}$$. We don’t need higher derivatives, since only the first Bernoulli correction is required. This gives

$$
\begin{align\*}
\sum\_{0\le k\le N} h(k/N) &\sim N\int\_0^1 h(x)\,dx+\frac{h(0)+h(1)}{2}+\sum\_{i\ge1}\frac{B\_{2i}}{(2i)!}\frac{1}{N^{2i-1}}\;h^{(2i-1)}(x)\Big|\_0^1 \\
&= \frac{\pi N}{4}+\frac{3}{4}-\frac{1}{24N}+O(N^{-3}).
\end{align\*}
$$

The definite integral can be computed using WolframAlpha by issuing `Integrate[1/(1+x^2),{x,0,1}]`.
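The expansion is easy to verify numerically (the cut-off $$N=1000$$ is arbitrary):

```python
from math import pi

N = 1000
s = sum(1.0 / (1.0 + (k / N) ** 2) for k in range(N + 1))
approx = pi * N / 4 + 3 / 4 - 1 / (24 * N)
print(abs(s - approx))  # O(N^{-3})
```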

## Exercise 55

Let’s use the formula from the book

$$
\gamma = H\_n - \ln n - \frac{1}{2n} + \frac{1}{12n^2} - \frac{1}{120n^4}.
$$

The following Python script prints the value of $$\gamma$$ accurate to 10 decimal places. By uncommenting the additional correction terms, we can attain the same accuracy in a third of the time, since we then only need seven summands from the harmonic sum.

{% code expandable="true" %}

```python
from decimal import Decimal, getcontext
getcontext().prec = 20

def calculate_gamma(n):
    h_n = sum(Decimal(1) / Decimal(i) for i in range(1, n + 1))
    ln_n = Decimal(n).ln()
    # Euler-Maclaurin correction terms from the book.
    term1 = Decimal(1) / (Decimal(2) * n)
    term2 = Decimal(1) / (Decimal(12) * n ** 2)
    term3 = Decimal(1) / (Decimal(120) * n ** 4)
    # These are higher-order terms that can be included for speeding things up.
    term4 = Decimal(1) / (Decimal(252) * n ** 6)
    term5 = Decimal(1) / (Decimal(240) * n ** 8)

    # Uncomment the last two terms for faster convergence.
    return h_n - ln_n - term1 + term2 - term3  #+ term4 - term5

# Reduce this to 7 if you have uncommented the last two terms for faster convergence.
n_val = 21
result = calculate_gamma(n_val)
print(f"Euler's Constant (γ) ≈ {result:.10f}")
```

{% endcode %}

The code prints

```
Euler's Constant (γ) ≈ 0.5772156649
```

{% hint style="info" %}
Wikipedia has a full description of the [Euler’s constant](https://en.wikipedia.org/wiki/Euler's_constant).
{% endhint %}

## Exercise 56

We’ve already mentioned in [Exercise 48](#exercise-48) that $$\sum\_{k=1}^\infty \frac{1}{k^2}=\zeta(2)=\pi^2/6$$. Let’s take $$f(x)=1/x^2$$ and compute the Euler-Maclaurin constant

$$
C\_f=\lim\_{N \to \infty} \left(\sum\_{1 \le k \le N} \frac{1}{k^2} -\int\_1^N\frac{dx}{x^2}-\frac{f(N)}{2}\right)=\frac{\pi^2}{6}-1.
$$

Employing Theorem 4.3 gives

$$
\begin{align\*}
H\_N^{(2)} &= \sum\_{1 \le k \le N}\frac{1}{k^2} \\
&= \int\_1^N\frac{dx}{x^2}+\frac{1}{2N^2}+C\_f + \sum\_{i>0} \frac{B\_{2i}}{(2i)!} f^{(2i-1)}(N)  \\
&= -\frac{1}{N} +1+ \frac{1}{2N^2} +\frac{\pi^2}{6}-1+ \sum\_{i>0} \frac{B\_{2i}}{(2i)!} \left( -\frac{(2i)!}{N^{2i+1}} \right)  \\
&= \frac{\pi^2}{6}-\frac{1}{N} + \frac{1}{2N^2} - \sum\_{i>0} \frac{B\_{2i}}{N^{2i+1}}  \\
&\sim \frac{\pi^2}{6}-\frac{1}{N} + \frac{1}{2N^2} - \frac{1}{6N^3} + \dots
\end{align\*}
$$

## Exercise 57

This is a variation of the previous exercise. The infinite sum converges to [Apéry's constant](https://en.wikipedia.org/wiki/Ap%C3%A9ry's_constant). Without repeating the technical details, we get

$$
H\_N^{(3)} = \zeta(3) - \frac{1}{2N^2} +O(N^{-3}).
$$

{% hint style="info" %}
All cases are perfectly covered in the article about [asymptotic expansions of the generalized harmonic numbers](https://math.stackexchange.com/a/1583465/753413) on StackExchange for Mathematics.
{% endhint %}

## Exercise 58

Using Theorem 4.3, as done in [Exercise 56](#exercise-56), we obtain the following asymptotic estimates

$$
\begin{align\*}
\sum\_{k=1}^{N} \sqrt{k} &= \frac{2}{3} N^{3/2} + \frac{1}{2} N^{1/2} + \zeta(-1/2) + \frac{1}{24} N^{-1/2} + O(N^{-2}) \\\[4pt]
\sum\_{k=1}^{N} \frac{1}{\sqrt{k}} &= 2\sqrt{N} + \frac{1}{2} N^{-1/2} + \zeta(1/2) - \frac{1}{24} N^{-3/2} + O(N^{-2}) \\\[4pt]
\sum\_{k=1}^{N} \frac{1}{\sqrt\[3]{k}} &= \frac{3}{2} N^{2/3} + \frac{1}{2} N^{-1/3} + \zeta(1/3) - \frac{1}{36} N^{-4/3} + O(N^{-2}).
\end{align\*}
$$

The fractional arguments of the Riemann zeta function do not have known simple closed forms, like in [Exercise 56](#exercise-56); they’re typically evaluated numerically.
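A sketch of such a numerical evaluation for the first expansion, where the leftover after subtracting the variable terms should approach $$\zeta(-1/2) \approx -0.20789$$ (the cut-off $$N=10^4$$ is arbitrary):

```python
from math import fsum, sqrt

N = 10_000
s = fsum(sqrt(k) for k in range(1, N + 1))
# Subtract everything except the constant term of the expansion.
c = s - (2 / 3) * N ** 1.5 - sqrt(N) / 2 - 1 / (24 * sqrt(N))
print(c)  # approaches zeta(-1/2) = -0.20788...
```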

## Exercise 59

The main idea is presented in the article about the [asymptotic expansion of the alternating harmonic numbers](https://en.wikipedia.org/wiki/Harmonic_series_\(mathematics\)#Alternating_harmonic_series) on Wikipedia. In both subtasks, we separate the terms based on the parity of $$N$$.

### Expansion of $$\sum\_{1 \le k \le N} \frac{(-1)^k}{k}$$

Let’s start by assuming $$N=2n$$. This gives

$$
\begin{align\*}
S\_{2n} &= \sum\_{k=1}^{2n} \frac{(-1)^k}{k} \\
&= -\sum\_{k=1}^{2n} \frac{(-1)^{k+1}}{k} \qquad \text{(form used on Wikipedia)} \\
&= \sum\_{k=1}^n \frac{1}{2k} - \sum\_{k=1}^n \frac{1}{2k-1} \\
&= \sum\_{k=1}^n \frac{1}{2k} - \left( H\_{2n} - \sum\_{k=1}^n \frac{1}{2k} \right) \\
&= 2 \cdot \frac{1}{2} H\_n - H\_{2n} \\
&= H\_n - H\_{2n} \\
&\sim -\ln 2 + \frac{1}{4n} - \frac{1}{16n^2} + \dots \\
&=  -\ln 2 + \frac{1}{2N} - \frac{1}{4N^2} + \dots
\end{align\*}
$$

When $$N=2n-1$$, we have

$$
S\_{2n-1} = S\_{2n} - \frac{(-1)^{2n}}{2n} = S\_{2n} - \frac{1}{2n}.
$$

### Expansion of $$\sum\_{1 \le k \le N} \frac{(-1)^k}{\sqrt k}$$

Let’s start by assuming $$N=2n$$. This gives (see also the previous exercise about the expansion of $$H\_n^{(1/2)}$$)

$$
\begin{align\*}
S\_{2n} &= \sum\_{k=1}^{2n} \frac{(-1)^k}{\sqrt{k}} \\
&= \sum\_{k=1}^n \frac{1}{\sqrt{2k}} - \sum\_{k=1}^n \frac{1}{\sqrt{2k-1}} \\
&= \frac{1}{\sqrt{2}}H\_n^{(1/2)} - \left( H\_{2n}^{(1/2)} - \frac{1}{\sqrt{2}}H\_n^{(1/2)} \right) \\
&= \sqrt{2}H\_n^{(1/2)} - H\_{2n}^{(1/2)} \\
&\sim (\sqrt{2}-1)\zeta(1/2) + \frac{1}{2\sqrt{2n}}-\cdots \\
&= (\sqrt{2}-1)\zeta(1/2) + \frac{1}{2\sqrt{N}}-\cdots
\end{align\*}
$$

When $$N=2n-1$$, we have

$$
S\_{2n-1} = S\_{2n} - \frac{1}{\sqrt{2n}}.
$$

{% hint style="info" %}
The alternating zeta function is known as the [Dirichlet eta function](https://en.wikipedia.org/wiki/Dirichlet_eta_function).
{% endhint %}

## Exercise 60

The [solution](https://math.stackexchange.com/a/3781710/753413) is available on StackExchange for Mathematics. In the cited post, we should assume $$z \ge 1$$, based on the identity from the book involving the generalized binomial coefficient.

## Exercise 61

Replace the asymptotic estimate used in the derivation of Theorem 4.5 with the inequality $$-\ln(1+x)\ge-x$$, valid for $$x \ge 0$$. See the proof of the corollary of Theorem 4.5 for context.

## Exercise 62

The proofs of the relative bounds for the Ramanujan Q- and R-distributions are literally the same. Stirling’s formula for $$\ln N!$$ is available in Table 4.6 in the book.

$$
\begin{align\*}
\ln \frac{N!}{(N-k)! N^k} &= \ln N! - \ln(N-k)! - k \ln N \\
&= \left\[ \left(N+\frac{1}{2}\right)\ln N - N+\ln \sqrt{2\pi}+ O(1/N) \right] \\
&\qquad - \left\[ \left(N-k+\frac{1}{2}\right)\ln(N-k) - (N-k) +\ln \sqrt{2\pi}+ O(1/(N-k))\right] \\
&\qquad- k \ln N  \\
&= \left\[ \left(N+\frac{1}{2}\right)\ln N - N \right] - \left\[ \left(N-k+\frac{1}{2}\right)\left(\ln N + \ln\left(1-\frac{k}{N}\right)\right) - N + k \right] - k \ln N  \\
&= -k - \left(N-k+\frac{1}{2}\right)\ln\left(1-\frac{k}{N}\right) \\
&= -k - \left(N-k+\frac{1}{2}\right)\left(-\frac{k}{N} - \frac{k^2}{2N^2}+O\left(\frac{k^3}{N^3}\right)\right)  \\
&= -\frac{k^2}{2N}+O\left(\frac{k}{N}\right)+O\left(\frac{k^3}{N^2}\right) \\
&\therefore \frac{N!}{(N-k)! N^k}=\exp\left\{-\frac{k^2}{2N}+O\left(\frac{k}{N}\right)+O\left(\frac{k^3}{N^2}\right)\right\}=e^{-k^2/(2N)}\left(1+O\left(\frac{k}{N}\right)+O\left(\frac{k^3}{N^2}\right)\right).
\end{align\*}
$$

Notice that the above approximation is valid only for $$k=o(N^{2/3})$$. Under this regime, both Stirling error terms, $$O(1/N)$$ and $$O(1/(N-k))$$, "vanish" for large $$N$$. Furthermore, the $$O(k^4/N^3)$$ term is absorbed into $$O(k^3/N^2)$$.

The R-distribution is handled in an analogous fashion starting with

$$
\ln \frac{N!N^k}{(N+k)!} = \ln N! - \ln(N+k)! + k \ln N .
$$
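The Q-distribution bound lends itself to a quick numerical check; a minimal sketch with the arbitrary values $$N=10^4$$ and $$k=50$$, chosen so that $$k=o(N^{2/3})$$:

```python
from math import exp

N, k = 10_000, 50
q = 1.0
for j in range(k):  # N!/((N-k)! N^k) = product of (1 - j/N)
    q *= 1.0 - j / N
ratio = q / exp(-k * k / (2 * N))
print(ratio)  # 1 + O(k/N) + O(k^3/N^2)
```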

## Exercise 63

The best approach here is to simply reuse a high-quality implementation of the PMF for the binomial distribution. The following Python snippet demonstrates the typical usage scenario.

```python
from scipy.stats import binom

# Create a binomial distribution object to reuse for different values of k.
dist = binom(n=9, p=0.5)

# Get probability for specific value of k.
print(dist.pmf(3))
```
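As a cross-check that doesn’t require SciPy, the same value follows directly from the definition of the binomial PMF with $$p=1/2$$:

```python
from math import comb

# P(X = 3) for X ~ Binomial(n=9, p=1/2) from first principles.
p = comb(9, 3) / 2 ** 9
print(p)  # 0.1640625
```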

## Exercise 64

If $$k=\sqrt N+O(1)$$ then we cannot reach the required accuracy by relying on Theorem 4.6 alone. We must go back to the proof from the book and add more terms to every component in the product. The augmented central binomial coefficient is

$$
\frac{1}{2^{2N}}\binom{2N}{N}= \frac{1}{\sqrt{\pi N}}\left(1 - \frac{1}{8N} + O\left(\frac{1}{N^2}\right)\right).
$$

The extended product of the Ramanujan Q- and R-distributions is

$$
\begin{align\*}
\frac{N!}{(N-k)!}\,\frac{N!}{(N+k)!} &= \exp\left\{\sum\_{1 \le j < k} \ln\left(1 - \frac{j}{N}\right) - \sum\_{1 \le j \le k} \ln\left(1 + \frac{j}{N}\right)\right\} \\
&= \exp\left\{\sum\_{j=1}^k \left\[ \ln\left(1 - \frac{j}{N}\right) - \ln\left(1 + \frac{j}{N}\right) \right] - \ln\left(1 - \frac{k}{N}\right)\right\} \\
&= \exp\left\{\sum\_{j=1}^k \left( -\frac{2j}{N} - \frac{2j^3}{3N^3} + O\left(\frac{j^5}{N^5}\right) \right) - \ln\left(1 - \frac{k}{N}\right)\right\} \\
&= \exp\left\{-\frac{2}{N}\left(\frac{k^2+k}{2}\right) - \frac{2}{3N^3}\left(\frac{k^4+2k^3+k^2}{4}\right) + O\left(\frac{1}{N^2}\right) - \ln\left(1 - \frac{k}{N}\right)\right\} \\
&= \exp\left\{-\frac{2}{N}\left(\frac{k^2+k}{2}\right) - \frac{2}{3N^3}\left(\frac{k^4+2k^3+k^2}{4}\right) + \frac{k}{N} + \frac{k^2}{2N^2} + \frac{k^3}{3N^3} + O\left(\frac{1}{N^2}\right)\right\} \\
&= \exp\left\{-\frac{k^2}{N} - \frac{k^4}{6N^3} + \frac{k^2}{2N^2} - \frac{k^2}{6N^3} + O\left(\frac{1}{N^2}\right)\right\} \\
&= \exp\left\{-\frac{k^2}{N} - \frac{k^4}{6N^3} + \frac{k^2}{2N^2} + O\left(\frac{1}{N^2}\right)\right\} \\
&= \exp\left\{-1 - \frac{2\delta}{\sqrt{N}} + \frac{1/3 - \delta^2}{N} + \frac{\delta}{3N^{3/2}} + O\left(\frac{1}{N^2}\right)\right\} \\
&= e^{-1} \left( 1 - \frac{2\delta}{\sqrt{N}} + \frac{\delta^2 + 1/3}{N} + \frac{2\delta^3 - \delta}{3N^{3/2}} + O\left(\frac{1}{N^2}\right) \right).
\end{align\*}
$$

The last two steps follow from substituting $$k=\sqrt N + \delta$$, where $$\delta=O(1)$$ is the displacement (the hidden constant inside the O-notation), and then expanding $$e^x$$. Multiplying together the two components, we get

$$
\frac{1}{2^{2N}}\binom{2N}{N-k} = \frac{e^{-1}}{\sqrt{\pi N}} \left( 1 - \frac{2\delta}{\sqrt{N}} + \frac{\delta^2 + 5/24}{N} + \frac{8\delta^3 - \delta}{12 N^{3/2}} + O\left(\frac{1}{N^2}\right) \right).
$$

To get an absolute error, we would just distribute the leading term all the way through in the formula above and simplify.

## Exercise 65

<figure><img src="https://2832731860-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FNHadf4C2mc4N7KCVeYg8%2Fuploads%2FAimX66YvaYKJNs3MV98v%2Fexercise_4_65.png?alt=media&#x26;token=f2799fc7-8434-45a9-918e-bfb75dae92a0" alt=""><figcaption></figcaption></figure>

## Exercise 66

This is the case when the binomial distribution can be accurately approximated by a normal distribution centered at the mean $$pN$$. For example, the book already uses such an approximation, $$\sim 1/\sqrt {\pi N/2}$$ with $$p=1/2$$, in Figure 4.3. More generally, the estimated probability is

$$
\sim \frac{1}{\sqrt {2 \pi N pq}}.
$$

## Exercise 67

The paper [The Normal Approximation to the Binomial Distribution](https://scipp-legacy.pbsci.ucsc.edu/~haber/ph116C/NormalApprox.pdf) shows a full derivation by employing the hint from the book (see section 2); the shift factor $$k$$ is denoted as $$\delta$$. At the end, we get a formula similar to the one from Theorem 4.6.

{% hint style="info" %}
More formally, this is known as the [De Moivre–Laplace theorem](https://en.wikipedia.org/wiki/De_Moivre%E2%80%93Laplace_theorem).
{% endhint %}

## Exercise 68 🌟

{% hint style="success" %}
This exercise demonstrates the principles of reuse within mathematics. Rather than constructing analyses from the ground up, it presents a systematic approach to modifying existing work to accommodate varying conditions.
{% endhint %}

Theorem 4.7 isn’t appropriate here because the mean $$\mu=pN=\lambda \sqrt N$$ isn’t bounded as $$N \to \infty$$. The previous exercise handles the case when $$p$$ is fixed. Nonetheless, we can tweak the method from the cited paper to arrive at a different approximation. In other words, the method itself needs to be adapted to a different asymptotic regime.

We list here the recipe for arriving at the final formula:

1. Our starting point with Stirling's formula is precisely equation (1) in the notes.
2. The $$k$$ in this exercise is analogous to $$\delta$$ in the paper. We set $$x = Np + \delta$$. The standard deviation in our new regime is $$\sigma = \sqrt{Npq}=\sqrt{Np(1-p)} \sim \sqrt{\lambda \sqrt{N}} = \sqrt \lambda N^{1/4}$$. So, allowing $$\delta=O(N^{1/4})$$ is exactly the same as allowing $$x$$ to deviate by a few standard deviations.
3. The notes expand $$\ln(Np/x)$$ and $$\ln(Nq/(N-x))$$ in powers of $$\delta/(Np)$$ and $$\delta/(Nq)$$. Since $$\delta/(Np) \gg \delta/(Nq)$$, the higher-order corrections coming from the $$\delta/(Nq)$$ terms are negligible compared to those of the former (only the one contributing to the exponent of $$e$$ matters). Observe that the largest correction, $$O(\delta^3/(Np)^2)$$, cannot hide $$p$$, since $$p$$ is no longer a constant. At any rate, this boils down to $$O(\delta^3/N)$$.
4. The square root pre-factor is\
   $$\sqrt{\frac{n}{2\pi (np+\delta)(nq-\delta)}} = \frac{1}{\sqrt{2\pi npq}} \left(1 + \frac{\delta}{np}\right)^{-1/2} \left(1 - \frac{\delta}{nq}\right)^{-1/2}$$\
   Using the Taylor expansion $$(1 + u)^{-1/2} \sim 1 - \frac{1}{2}u$$ for small $$u$$, we get that the dominant correction term becomes $$O(\delta/(Np))=O(\delta/\sqrt N)$$.

After incorporating all previously mentioned changes, we obtain (using $$k$$ instead of $$\delta$$)

$$
P\{X=\mu+k\}=\frac{1}{\sqrt{2\pi}\,\sigma} \, e^{-\frac{k^2}{2\sigma^2}} \left(1 + O(1/\sqrt N)+ O(k/\sqrt N)+O(k^3/N)\right).
$$

If $$k=O(\sigma)$$ then the relative error is $$O(N^{-1/4})$$. If $$k = O(N^{1/3})$$ then the error jumps to $$O(1)$$, so we must have at least $$k=o(N^{1/3})$$. Finally, the baseline error term $$O(N^{-1/2})$$ covers the edge case $$k=0$$.

## Exercise 69

This is a variation of the previous exercise, so we don’t repeat all the steps again; the methodology is the same. Let $$\mu = Np = \lambda N / \ln N$$ and $$\sigma^2 = Npq = \mu(1-p) = \mu(1 - \lambda/\ln N)\sim \mu$$. The following asymptotic holds:

$$
P\{X=\mu+k\}=\frac{1}{\sqrt{2\pi}\,\sigma} \, e^{-\frac{k^2}{2\sigma^2}} \left(1 + O\left(\frac{\ln N}{N}\right)+ O\left(\frac{k\ln N}{N}\right)+O\left(\frac{k^3\ln^2 N}{N^2}\right)\right).
$$

If $$k=O(\sigma)$$ then the relative error is $$O\left(\sqrt{\frac{\ln N}{N}}\right)$$. Notice, as in the previous exercise, the explicit baseline error term that covers $$k=0$$.

## Exercise 70

Let $$x=k/\sqrt N$$ so that $$dk=\sqrt N dx$$. This gives

$$
\sum\_{1 \le k \le k\_0}e^{-k^2/(2N)}O\left(\frac{k}{N}\right)=\sqrt N \int\_0^\infty e^{-x^2/2}O\left(\frac{x}{\sqrt N}\right)dx+O(1)=O\left(\underbrace{\int\_0^\infty xe^{-x^2/2}dx}\_{=1}\right)+O(1)=O(1).
$$

$$
\sum\_{1 \le k \le k\_0}e^{-k^2/(2N)}O\left(\frac{k^3}{N^2}\right)=\sqrt N \int\_0^\infty e^{-x^2/2}O\left(\frac{x^3}{\sqrt N}\right)dx+O(1)=O\left(\underbrace{\int\_0^\infty x^3e^{-x^2/2}dx}\_{=2}\right)+O(1)=O(1).
$$

The definite integrals can be easily computed in WolframAlpha.
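Both bounds can also be sanity-checked numerically; a sketch with the arbitrary choice $$N=10^4$$, truncating at $$k\_0=5\sqrt N$$ where the Gaussian factor is already negligible:

```python
from math import exp, sqrt

N = 10_000
k0 = int(5 * sqrt(N))  # beyond a few standard deviations the terms vanish
s1 = sum(exp(-k * k / (2 * N)) * (k / N) for k in range(1, k0 + 1))
s2 = sum(exp(-k * k / (2 * N)) * (k ** 3 / N ** 2) for k in range(1, k0 + 1))
print(s1, s2)  # near the integrals 1 and 2, so both sums are O(1)
```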

## Exercise 71

Let’s follow the hint from the book’s [website](https://aofa.cs.princeton.edu/40asymptotic/) for this exercise. This gives

$$
\begin{align\*}
p\_k=\frac{(N-k)^k(N-k)!}{N!} &= \prod\_{0 \le j < k} \frac{N-k}{N-j} \qquad \text{(see Table 4.11)} \\
&= \exp\left\{\sum\_{0 \le j < k} \bigg(\ln (N-k)-\ln (N-j)\bigg)\right\} \\
&= \exp\left\{\sum\_{0 \le j < k} \bigg(\ln N(1-k/N)-\ln N(1-j/N)\bigg)\right\} \\
&= \exp\left\{\sum\_{0 \le j < k} \bigg(\ln N+\ln(1-k/N)-\ln N-\ln (1-j/N)\bigg)\right\} \\
&= \exp\left\{k\ln(1-k/N)-\sum\_{0 \le j < k} \ln (1-j/N)\right\} \\
&= \exp\left\{k\left(-\frac{k}{N}+O\left(\frac{k^2}{N^2}\right)\right)+\sum\_{0 \le j < k} \left(\frac{j}{N}+O\left(\frac{j^2}{N^2}\right)\right)\right\} \\
&= \exp\left\{-\frac{k^2}{N}+O\left(\frac{k^3}{N^2}\right)+ \frac{k(k-1)}{2N}+O\left(\frac{k^3}{N^2}\right)\right\} \\
&= \boxed{e^{-k^2/(2N)}(1+O(k/N)+O(k^3/N^2))}.
\end{align\*}
$$

Notice that $$p\_k$$ is strictly decreasing as $$k$$ grows and has the same asymptotic approximation as the Q-distribution. Hence, by the same arguments as in the proof of Theorem 4.8, together with the solution of the previous exercise, we can confirm the validity of the approximation of $$P(N)$$.

## Exercise 72

The paper [On Ramanujan's Q-function](https://algo.inria.fr/flajolet/Publications/FlGrKiPr95.pdf) by Flajolet et al. elaborates in detail the developments around Q- and R- distributions and provides the proof of this exercise.

## Exercise 73

Assume $$N$$ is even for simplicity (odd $$N$$ gives the same asymptotic behavior). Let $$k = N/2 ± \delta$$ with $$\delta = o(N^{2/3})$$. We know that most of the probability mass will be concentrated around the mean. We can now apply Theorem 4.6 to get

$$
\begin{align\*}
\binom{N}{k} &\sim \frac{2^N}{\sqrt{\frac{\pi N}{2}}} e^{-2\delta^2/N} \\
\binom{N}{k}^3 &\sim \frac{2^{3N}}{(\frac{\pi N}{2})^{3/2}} e^{-6\delta^2/N}.
\end{align\*}
$$

Here, we are working with binomial coefficients rather than a probability distribution, so we don’t divide by the normalization factor. By leveraging the Laplace method, as in Theorem 4.8, we can uniformly absorb the diminishing tails (far away from the mean) into a single formula.

$$
\begin{align\*}
\sum\_{k} \binom{N}{k}^3 &\sim \frac{2^{3N}}{(\frac{\pi N}{2})^{3/2}} \sum\_{\delta} e^{-6\delta^2/N} \\
&\sim \frac{2^{3N}}{(\frac{\pi N}{2})^{3/2}} \int\_{-\infty}^{\infty} e^{-6\delta^2/N} d\delta \\
&= \frac{2^{3N}}{(\frac{\pi N}{2})^{3/2}} \cdot \sqrt{\frac{\pi N}{6}} \\
&= \frac{2^{3N+1}}{\sqrt{3} \pi N}.
\end{align\*}
$$
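A quick numerical check of this asymptotic formula (the value $$N=100$$ is arbitrary; the relative error is $$O(1/N)$$):

```python
from math import comb, sqrt, pi

N = 100
exact = sum(comb(N, k) ** 3 for k in range(N + 1))
approx = 2 ** (3 * N + 1) / (sqrt(3) * pi * N)
print(exact / approx)  # tends to 1 as N grows
```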

{% hint style="info" %}
For the generalized version of this exercise refer to the article [An Asymptotic Formula for Binomial Sums](https://www.sciencedirect.com/science/article/pii/S0022314X96900724) (see the beginning of section 4). A sequence of values of the sum for different $$N$$ is known as the [Franel numbers.](https://mathworld.wolfram.com/FranelNumber.html)
{% endhint %}

## Exercise 74

Theorem 7.4 (Involutions) from the book provides all the details.

{% hint style="info" %}
Wikipedia also offers a summary of the problem, including the required asymptotic estimate, known as [telephone numbers](https://en.wikipedia.org/wiki/Telephone_number_\(mathematics\)#Summation_formula_and_approximation).
{% endhint %}

## Exercise 75 🌟

{% hint style="success" %}
This exercise demonstrates in detail the Laplace method for sums. There is a related concept called the [saddle-point method](https://www.physics.ucla.edu/~kevinchen/physics/saddle.pdf), that shares the same underlying principle of focusing on areas that asymptotically contribute the most.
{% endhint %}

To derive the asymptotic estimate for our sum, we use Stirling's approximation (see Table 4.6) to transform the discrete sum into a continuous integral dominated by a single "peak". This is the main idea behind the Laplace method. We start by dissecting the summand

$$
\binom{N-k}{k} = \frac{(N-k)!}{k! (N-2k)!}.
$$

Observe that it’s nonzero for $$0 \le k \le N/2$$, so let $$k=\alpha N$$ with $$0 \le \alpha \le 1/2$$. Substituting the approximation for each factorial and simplifying, we get

$$
\begin{align\*}
\binom{N-\alpha N}{\alpha N} \sim \frac{1}{\sqrt{2\pi N}} \sqrt{\frac{1-\alpha}{\alpha(1-2\alpha)}} \cdot \left\[ \frac{(1-\alpha)^{1-\alpha}}{\alpha^\alpha (1-2\alpha)^{1-2\alpha}} \right]^N \\
\binom{N-\alpha N}{\alpha N}^2 \sim \frac{1}{2\pi N} \left( \frac{1-\alpha}{\alpha(1-2\alpha)} \right) \cdot \exp\left( 2N \cdot f(\alpha) \right),
\end{align\*}
$$

where

$$
f(\alpha) = (1-\alpha)\ln(1-\alpha) - \alpha\ln\alpha - (1-2\alpha)\ln(1-2\alpha).
$$

We can find the maximum $$\alpha\_0$$ by typing in WolframAlpha

```wolfram
maximize (1-a) ln(1-a) - a ln(a) - (1-2a) ln(1-2a) for 0 < a < 1/2
```

It outputs $$\alpha\_0=1/2-1/(2\sqrt 5)$$. At this specific point, $$e^{f(\alpha\_0)} = \phi$$ (golden ratio). To see this, we should just raise $$e$$ to the maximum value reported by WolframAlpha. The square of this value is $$\phi^2$$.

Since the function $$\binom{N-k}{k}^2$$ drops rapidly away from $$\alpha\_0$$, we can approximate the sum using the Euler-Maclaurin formula. We expand $$2N f(\alpha)$$ as a Taylor series around $$\alpha\_0$$

$$
2N f(\alpha) \approx 2N f(\alpha\_0) + N f''(\alpha\_0)(\alpha - \alpha\_0)^2.
$$

With the help of WolframAlpha, we can compute symbolically the second derivative and evaluate it at $$\alpha\_0$$

```
D[(1-a) Log[1-a] - a Log[a] - (1-2a) Log[1-2a], {a, 2}]
```

This gives

$$
f''(\alpha\_0) = -\frac{1}{\alpha\_0(1-\alpha\_0)(1-2\alpha\_0)} = -5\sqrt{5}.
$$

The sum is then approximated by the integral

$$
\sum\_k \binom{N-k}{k}^2 \sim \frac{1}{2\pi N} \frac{1-\alpha\_0}{\alpha\_0(1-2\alpha\_0)} \phi^{2N} \int\_{-\infty}^{\infty} e^{-5\sqrt{5}N (\alpha-\alpha\_0)^2} N d\alpha= \frac{\phi^{2N+2}}{2 \sqrt{\pi N \sqrt{5}}}.
$$

In essence, the Laplace method relies on the fact that as $$N \to \infty$$, the function becomes so incredibly "sharp" at the peak $$\alpha\_0$$ that the tails of the distribution become negligible. Therefore, the error introduced by extending the integration limits to $$\pm \infty$$ is exponentially small. For example, just at some $$\epsilon>0$$ distance from $$\alpha\_0$$ the contribution decreases to only $$\phi^{2N} \cdot e^{-N(5\sqrt{5})\epsilon^2}$$ (see also [Exercise 10](#exercise-10)).
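As a sanity check of the constant in front of $$\phi^{2N}$$, the sum can be evaluated exactly for a moderate $$N$$ (the value $$N=300$$ is arbitrary):

```python
from math import comb, sqrt, pi

N = 300
phi = (1 + sqrt(5)) / 2
exact = sum(comb(N - k, k) ** 2 for k in range(N // 2 + 1))
approx = phi ** (2 * N + 2) / (2 * sqrt(pi * N * sqrt(5)))
print(exact / approx)  # tends to 1 as N grows
```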

## Exercise 76

We have a flipped version of Figure 4.7, where the first $$\lg N$$ terms are negligible, whilst the rest contribute around 1. Using the same bounds as in Theorem 4.10, we get

$$
\sum\_{0 \le j \le N} \left(1-\frac{1}{2^j}\right)^N = N - \lg N +O(\log \log N).
$$

As noted in Theorem 4.10, the error bound can be tightened all the way down to $$O(1)$$.
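A minimal numerical sketch (the value $$N=1024$$ is an arbitrary power of two):

```python
from math import log2

N = 1024
s = sum((1 - 2.0 ** (-j)) ** N for j in range(N + 1))
print(s - (N - log2(N)))  # stays bounded, consistent with the O(1) refinement
```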

## Exercise 77

The first $$\sqrt\[t]{N}$$ terms are significant, whilst the rest drop very quickly to zero. For large $$N$$, we can approximate this sum with an integral over the whole range of $$j$$. This gives

$$
\begin{align\*}
\sum\_{j \ge 0} \left(1-e^{-N/j^t}\right) &\sim \int\_{0}^{\infty} \left(1 - e^{-N/x^t}\right) dx \\
&= \int\_{\infty}^{0} (1 - e^{-u}) \left( -\frac{1}{t} N^{1/t} u^{-1 - 1/t} \right) du \\
&= \frac{N^{1/t}}{t} \int\_{0}^{\infty} (1 - e^{-u}) u^{-1 - 1/t} du.
\end{align\*}
$$

The second line follows after the substitution $$u = N/x^t$$, so that

$$
dx = -\frac{1}{t} N^{1/t} u^{-1 - 1/t} du.
$$

The last integral can be evaluated using integration by parts. Let $$\alpha = 1/t$$. Notice that $$t>1$$ implies $$0<\alpha<1$$. Using $$v = (1 - e^{-u})$$ and $$dw = u^{-1-\alpha}du$$ gives

$$
\int\_{0}^{\infty} u^{-1-\alpha} (1 - e^{-u}) du =  \frac{u^{-\alpha}}{-\alpha} (1 - e^{-u}) \bigg|\_{0}^{\infty} - \int\_{0}^{\infty} \frac{u^{-\alpha}}{-\alpha} e^{-u} du= \frac{1}{\alpha}\Gamma(1 - \alpha).
$$

When $$u \to 0$$ then $$u^{-\alpha}(1 - e^{-u}) \sim u^{-\alpha} \cdot u = u^{1-\alpha} \to 0$$. The second integral is the [gamma function](https://en.wikipedia.org/wiki/Gamma_function). This gives

$$
\sum\_{j \ge 0} \left(1-e^{-N/j^t}\right) \sim \sqrt\[t]{N} \, \Gamma\left(1 - \frac{1}{t}\right).
$$
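A sketch for the arbitrary choice $$t=2$$ and $$N=10^4$$, where $$\Gamma(1/2)=\sqrt\pi$$; the sum is truncated once the terms become negligible:

```python
from math import exp, gamma

N, t = 10_000, 2
s = 1.0  # the j = 0 term contributes 1, since e^{-N/0^t} -> 0
for j in range(1, 10 ** 6):
    s += 1.0 - exp(-N / j ** t)
approx = N ** (1 / t) * gamma(1 - 1 / t)
print(s / approx)  # tends to 1 as N grows
```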
