Chapter 4

Exercise 1

$$
\frac{N}{N+1}<1 \text{ for all }N\ge 0 \implies \frac{N}{N+1}=O(1).
$$

Expanding $2^N$ and $N!$ gives

$$
\lim_{N \to \infty} \frac{2^N}{N!}=\lim_{N \to \infty} \frac{2\times2\times\cdots\times2}{1\times2\times\cdots \times N}=0,
$$

thus $2^N=o(N!)$.

Clearly, $\lim_{N \to \infty} e^{1/N}=e^0=1$, so $\sqrt[N]{e} \sim 1$.
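These limits are easy to sanity-check numerically; a minimal sketch using only the standard library:

```python
import math

# 2^N / N! should tend to 0, consistent with 2^N = o(N!).
ratios = [2**N / math.factorial(N) for N in (5, 10, 20)]
print(ratios)

# e^(1/N) should tend to 1, consistent with the Nth root of e ~ 1.
print(math.exp(1 / 1000))
```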

Exercise 2 🌟

We start by rewriting $N/(N+1)=1-1/(N+1)$. Let's show that the second summand is $O(1/N)$:

$$
\left|\frac{-1/(N+1)}{1/N}\right|=\frac{N}{N+1}<1 \text{ for all }N\ge 0,
$$

therefore, $N/(N+1)=1+O(1/N)$. We also have

$$
\lim_{N \to \infty} \frac{N/(N+1)}{1-1/N}=1,
$$

hence, $N/(N+1) \sim 1-1/N$.
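A quick numeric check that the two quantities are asymptotically equivalent:

```python
# N/(N+1) and 1 - 1/N agree ever more closely: their ratio tends to 1 from above.
pairs = {N: (N / (N + 1)) / (1 - 1 / N) for N in (10, 100, 1000)}
print(pairs)
```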

Exercise 3

$$
\lim_{N \to \infty} \frac{N^\alpha}{N^\beta}=\lim_{N \to \infty} \frac{1}{N^{\beta-\alpha}}=0, \qquad \text{when }\alpha<\beta.
$$

Therefore, $N^\alpha=o(N^\beta)$ for $\alpha<\beta$.

Exercise 4

The O- and o-notations provide ways to express upper bounds (with o being the stronger assertion). Consequently, $g(N)=o(f(N)) \implies g(N)=O(f(N))$. By combining the results from the previous two exercises, for fixed $r\ge0$, we can conclude that (see also Section 4.3 of the book)

$$
1 \pm k/N =1+O(1/N) \text{ for } 0 \le k \le r, \qquad (1+O(1/N))^r=1+O(1/N).
$$

This gives

$$
\begin{align*} \binom{N}{r} &= \frac{N(N-1)\cdots(N-r+1)}{r!} \\ &= \frac{N^r}{r!}\prod_{k=0}^{r-1}\Big(1-\frac{k}{N}\Big) \\ &= \frac{N^r}{r!}\Big(1+O\!\Big(\frac{1}{N}\Big)\Big)^r \\ &= \frac{N^r}{r!}\Big(1+O\!\Big(\frac{1}{N}\Big)\Big) \\ &\therefore \boxed{\binom{N}{r} = \frac{N^r}{r!}+O(N^{r-1})}. \end{align*}
$$

Similarly, we have

$$
\begin{align*} \binom{N+r}{r} &= \frac{(N+r)(N+r-1)\cdots(N+1)}{r!} \\ &= \frac{N^r}{r!}\prod_{k=1}^{r}\Big(1+\frac{k}{N}\Big) \\ &= \frac{N^r}{r!}\Big(1+O\!\Big(\frac{1}{N}\Big)\Big)^r \\ &= \frac{N^r}{r!}\Big(1+O\!\Big(\frac{1}{N}\Big)\Big) \\ &\therefore \boxed{\binom{N+r}{r} = \frac{N^r}{r!}+O(N^{r-1})}. \end{align*}
$$

Alternative Proof

Instead of relying on basic algebraic manipulations involving the O-notation, we can represent the numerator of $\binom{N}{r}$ as a polynomial $P(N)=N(N-1)\cdots(N-r+1)$. Since there are $r$ factors in this product, multiplying the $N$ from each factor gives us a leading term of $N^r$. To find the next coefficient (of $N^{r-1}$), we sum the constant terms from each factor:

$$
\text{Sum of constants} = -(0 + 1 + 2 + \dots + (r-1)) = -\frac{(r-1)r}{2}.
$$

Thus, the numerator expands to

$$
P(N) = N^r - \frac{r(r-1)}{2}N^{r-1} + \dots \text{(lower-order terms)}.
$$

Dividing by $r!$ gives

$$
\binom{N}{r} = \frac{N^r}{r!} - \frac{r(r-1)}{2 \cdot r!}N^{r-1} + \dots
$$

The first term is our leading term. All subsequent terms form a polynomial of degree $r-1$, so their sum is $O(N^{r-1})$ (see also the previous exercise). Analogous reasoning applies to $\binom{N+r}{r}$.
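The $O(N^{r-1})$ error bound is easy to check numerically; a small sketch with the arbitrary choice $r=3$:

```python
import math

# The error binom(N, r) - N^r/r! is O(N^(r-1)): scaled by N^(r-1) it stays bounded
# (for r = 3 it approaches -1/2, the coefficient derived above divided by r!... i.e. -r(r-1)/(2*r!) * r! adjusted; here simply -1/2).
r = 3
scaled = {N: (math.comb(N, r) - N**r / math.factorial(r)) / N**(r - 1)
          for N in (10, 100, 1000)}
print(scaled)
```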

Exercise 5

$$
\begin{align*} \lim_{N \to \infty} \frac{\log N}{N^\epsilon} &\overset{\text{L'Hôpital}}{=} \lim_{N \to \infty} \frac{1/N}{\epsilon N^{\epsilon - 1}} \\ &= \lim_{N \to \infty} \frac{1}{\epsilon N^\epsilon} = 0 \\ &\therefore \boxed{\log N = o(N^\epsilon)}. \end{align*}
$$

Exercise 6

$$
\begin{align*} &\lim_{N \to \infty} \frac{1}{2+\ln N} = 0 \\ &\therefore \boxed{\frac{1}{2+\ln N} = o(1)}. \end{align*}
$$

The cosine function oscillates between $-1$ and $1$, thus as $N \to \infty$ we have $1/(2+\cos N) \in [1/3,1]$. Clearly, it's bounded from above, but doesn't tend to 0. Therefore, it is $O(1)$ but not $o(1)$.

Exercise 7

$$
\begin{align*} \lim_{N \to \infty} \frac{e^{-N^\epsilon}}{N^{-M}} &= \lim_{N \to \infty} \frac{N^M}{e^{N^\epsilon}} \\ &= \lim_{N \to \infty} \frac{e^{\ln N^M}}{e^{N^\epsilon}} \\ &= \lim_{N \to \infty} \frac{e^{M\ln N}}{e^{N^\epsilon}} = 0. \end{align*}
$$

The last line follows from Exercise 5, thus $e^{-N^\epsilon}$ is exponentially small.

Exercise 8

$$
\begin{align*} \lim_{N \to \infty} \frac{e^{-\log^2 N}}{N^{-M}} &= \lim_{N \to \infty} \frac{N^M}{e^{\log^2 N}} \\ &= \lim_{N \to \infty} \frac{e^{\ln N^M}}{e^{\log^2 N}} \\ &= \lim_{N \to \infty} \frac{e^{M\ln N}}{e^{\log^2 N}} = 0. \end{align*}
$$

In the last line $M\ln N = o(\log^2 N)$, since after cancelling one logarithm on both sides, we are left with a constant "against" a logarithm that grows to infinity. Observe that we don't care about the base of the logarithm on the RHS, as it's just a constant multiple of a natural logarithm. Therefore, $e^{-\log^2 N}$ is exponentially small.

$$
\begin{align*} \lim_{N \to \infty} \frac{(\log N)^{-\log N}}{N^{-M}} &= \lim_{N \to \infty} \frac{N^M}{(\log N)^{\log N}} \\ &= \lim_{N \to \infty} \frac{e^{\ln N^M}}{e^{\ln {(\log N)^{\log N}}}} \\ &= \lim_{N \to \infty} \frac{e^{M\ln N}}{e^{\log N (\ln {\log N})}} = 0. \end{align*}
$$

By employing the same reasoning as previously, we see that now $\log \log N$ grows faster than any constant. Therefore, $(\log N)^{-\log N}$ is exponentially small.

Exercise 9 🌟

$$
\begin{align*} \lim_{N \to \infty} \frac{(\alpha/\beta)^N}{N^{-M}} &= \lim_{N \to \infty} \frac{N^M}{(\beta/\alpha)^N} \\ &= \lim_{N \to \infty} \frac{e^{\ln N^M}}{e^{\ln {(\beta/\alpha)^N}}} \\ &= \lim_{N \to \infty} \frac{e^{M\ln N}}{e^{N\ln {(\beta/\alpha)}}} = 0, \end{align*}
$$

so $\alpha^N$ is exponentially small relative to $\beta^N$ (see also Exercise 5).

The absolute error is $\alpha^N$, whilst the relative error is $\alpha^N/(\alpha^N+\beta^N)$. We have

  • For $N=10$ the absolute error is ≈2.6 and the relative error is ≈29.53%.

  • For $N=100$ the absolute error is ≈13,781 and the relative error is ≈0.02%.
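A sketch reproducing such error calculations; the values $\alpha=1.1$ and $\beta=1.2$ are assumptions chosen to be consistent with the figures quoted above:

```python
# Hypothetical parameters (not stated explicitly here): alpha = 1.1, beta = 1.2.
alpha, beta = 1.1, 1.2
for N in (10, 100):
    absolute = alpha**N
    relative = alpha**N / (alpha**N + beta**N)
    print(N, round(absolute, 1), f"{100 * relative:.2f}%")
```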

Exercise 10

Let $f(N)$ be exponentially small, meaning for every $M > 0$, $f(N) = o(N^{-M})$. For any polynomial $g(N)=O(N^d)$, where $d \ge 0$ is its degree, we can choose $M'=M+d$, so that $f(N)g(N)=o(N^{-M'})O(N^d)=o(N^{-M})$. This concludes the proof.

Exercise 11

Chapter 2 of the book explains the Master Theorem with a note that floors/ceilings can be ignored without breaking the asymptotic result. The term $a_{\lfloor n/2 \rfloor}+O(1)$ implies that every time we recurse, we introduce a small $O(1)$ discrepancy due to rounding. This gives

$$
\begin{align*} a_n &= 2a_{n/2} + f(n) \\ &= 2(a_{\lfloor n/2 \rfloor} + O(1)) + f(n) \\ &= 2a_{\lfloor n/2 \rfloor} + \underbrace{O(1)}_{\text{error}} + f(n). \end{align*}
$$

In a binary recursion tree, where every internal node has 2 children, the number of nodes doubles at every level. The sum of this error across the whole tree is proportional to the number of nodes in the tree

$$
\text{Total Error} = \sum_{k=0}^{\lg n} 2^k \cdot O(1) = O(n).
$$

Consequently, $a_n=(\text{Ideal Solution}) + O(n)$, where the ideal solution denotes the case when $n$ is a power of two. We'll derive these ideal solutions and see that the extra $O(n)$ error has no significance for them.

  1. $a_n = 2a_{n/2} + O(n)$ essentially falls under case 2 of the Master Theorem. We have $\lg n$ levels in the recursion tree taking $O(n)$ work per level. Therefore, $\boxed{a_n=O(n\log n)}$. We cannot use $\Theta$-notation because the term $O(n)$ allows the driving function to be 0 or even negative (see the next exercise).

  2. $a_n = 2a_{n/2} + o(n)$ can also be handled using the summation approach. We have $\lg n$ levels in the recursion tree taking $o(n)$ work per level. Therefore, $\boxed{a_n=o(n\log n)}$.

  3. $a_n \sim 2a_{n/2} + n$ means that $a_n = 2a_{n/2} + n + o(n)$. Dividing by $n$ and letting $b_n = a_n/n$ yields $b_n = b_{n/2} + 1 + o(1)$. Iterating this recurrence gives $b_n = b_1 + \lg n + o(\lg n)$, so $a_n = n \lg n + o(n \lg n)$. Therefore, $\boxed{a_n \sim n \lg n}$.
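These recurrences are easy to simulate; a sketch of the instance $a_n = 2a_{\lfloor n/2 \rfloor} + n$ with the assumed base value $a_1 = 0$:

```python
import math
from functools import lru_cache

# For powers of two the solution is exactly n * lg n, so the ratio below is 1;
# for general n it stays close to 1, matching a_n ~ n lg n.
@lru_cache(maxsize=None)
def a(n):
    return 0 if n <= 1 else 2 * a(n // 2) + n

for n in (2**10, 2**20):
    print(n, a(n) / (n * math.log2(n)))
```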

Exercise 12

This is a continuation of the previous exercise, where the first case is already covered:

  • $a_n = 2a_{n/2} + \Theta(n)$ entails that $\boxed{a_n=\Theta(n\log n)}$.

  • $a_n = 2a_{n/2} + \Omega(n)$ entails that $\boxed{a_n=\Omega(n\log n)}$.

Exercise 13

Both $a(x)$ and $b(x)$ grow like $x^\alpha$, and the additive perturbation $c$ in the argument of $b(x)$ becomes asymptotically negligible. To see this, we just need to iterate the recurrences until hitting the base cases.

For $a(x)$ the recursion unfolds as

$$
a(x) = f(x) + f(x/\beta) + f(x/\beta^2) + \cdots + f(x/\beta^k),
$$

where eventually $x/\beta^k < 1$, so the sum terminates. Since $f(x) = x^\alpha$, each term is $(x/\beta^i)^\alpha$, and the sum behaves like a convergent infinite geometric series as $x \to \infty$:

$$
a(x) = x^\alpha \sum_{i=0}^{\log_\beta x} \beta^{-\alpha i} \sim \frac{\beta^\alpha}{\beta^\alpha-1}x^\alpha.
$$

For $b(x)$ the recurrence is slightly perturbed:

$$
b(x) = f(x) + f(x/\beta + c) + f(x/\beta^2 + c(1 + 1/\beta)) + \cdots
$$

But asymptotically $x/\beta^i + O(1) \sim x/\beta^i$, so each term in the sum is still $\sim (x/\beta^i)^\alpha$, and the same geometric series argument applies. Because $\alpha > 0$, the geometric series is "head-dominated" (the first few terms contribute the most to the sum). The errors introduced by the perturbation $c$ accumulate at the tail end of the series, where the terms are small. See the next section for why/how $f(x)=x^\alpha$ belongs to the class of functions that permits this reasoning.
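The head-domination argument can be checked numerically; a sketch with the assumed parameters $\alpha=2$, $\beta=2$, $c=1$ and an assumed base value of $0$ below a small threshold:

```python
# Both a(x) and b(x) should approach the geometric-series limit
# beta^alpha / (beta^alpha - 1) * x^alpha, which is (4/3) x^2 here.
alpha, beta, c = 2.0, 2.0, 1.0

def f(x):
    return x**alpha

def a(x):
    return 0.0 if x <= 4 else f(x) + a(x / beta)

def b(x):
    return 0.0 if x <= 4 else f(x) + b(x / beta + c)

limit = beta**alpha / (beta**alpha - 1)  # 4/3 for these parameters
for x in (1e3, 1e6):
    print(x, a(x) / (limit * x**alpha), b(x) / (limit * x**alpha))
```

The base threshold 4 is chosen above the fixed point $c\beta/(\beta-1)=2$ of the perturbed argument map, so the recursion for $b$ terminates.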

Extending the Class of Functions

In order to apply the above logic to a broader class of functions $f(x)$, we would need the following to hold:

  • The term $f(x/\beta^i)$ behaves like $\beta^{-\alpha i}f(x)$, so we can factor $f(x)$ out of the sum, just like we did with $x^\alpha$.

  • The perturbation $c$ disappears for large $x$, since we expect $\frac{f(x+c)}{f(x)} \to 1$ for this class of functions.

If $f(x)$ is a positive non-decreasing regularly varying function with index $\rho \in \mathbb{R}$ (see Definition 4 in the cited paper), meaning

$$
\lim_{x \to \infty} \frac{f(\gamma x)}{f(x)} = \gamma^\rho \text{ for any } \gamma > 0,
$$

then we may assume the previously listed properties. This covers most functions polynomial in $x$ and $\log x$. For example, $f(x)=x^2\log x$ satisfies this, since

$$
f(\gamma x) = (\gamma x)^2 \log(\gamma x) = \gamma^2 x^2 (\log \gamma + \log x) \sim \gamma^2 x^2 \log x.
$$

Let's employ the squeeze theorem to prove that we can perturb the arguments without impacting the asymptotic behavior. For any $\epsilon > 0$, we can choose $x$ large enough that the perturbed argument $x+c$ is eventually "sandwiched" between two constant scalings of $x$. The rest follows from $f$ being a positive non-decreasing regularly varying function.

$$
x(1 - \epsilon) < x + c < x(1 + \epsilon) \\[0.2cm] f(x(1 - \epsilon)) \leq f(x + c) \leq f(x(1 + \epsilon)) \\[0.2cm] \frac{f(x(1 - \epsilon))}{f(x)} \leq \frac{f(x + c)}{f(x)} \leq \frac{f(x(1 + \epsilon))}{f(x)} \\[0.2cm] \lim_{x \to \infty} \frac{f(x(1 - \epsilon))}{f(x)} \leq \liminf_{x \to \infty} \frac{f(x+c)}{f(x)} \leq \limsup_{x \to \infty} \frac{f(x+c)}{f(x)} \leq \lim_{x \to \infty} \frac{f(x(1 + \epsilon))}{f(x)} \\[0.2cm] (1 - \epsilon)^\rho \leq \liminf_{x \to \infty} \frac{f(x+c)}{f(x)} \leq \limsup_{x \to \infty} \frac{f(x+c)}{f(x)} \leq (1 + \epsilon)^\rho \\[0.2cm] \therefore \lim_{x \to \infty} \frac{f(x+c)}{f(x)} = 1 \implies f(x+c) \sim f(x).
$$

The conclusion follows by letting $\epsilon \to 0$.

A full generalization of this exercise is part of the Akra-Bazzi method, which should be included in the toolbox of techniques for finding asymptotic approximations of recurrences.

Exercise 14

According to Exercise 3.17

$$
\frac{f(z)}{g(z)} = \frac{1}{1 - 2z}.
$$

Thus, $\beta=2$, $\nu=1$, $g'(1/2)=-2$ and $f(1/2)=1$, so the approximate solution is $a_n \sim 2^n$ as before.

$$
\frac{f(z)}{g(z)}=\frac{z}{(1-2z)^2}.
$$

Thus, $\beta=2$, $\nu=2$, $g''(1/2)=8$ and $f(1/2)=1/2$, so the approximate solution is $a_n \sim n2^{n-1}$ as before.

Exercise 15

According to Exercise 3.18

$$
\frac{f(z)}{g(z)}=\frac{z^2}{(1-z)^2(1+z)}.
$$

We have two poles of the same modulus, so we must take the one of highest multiplicity. Thus, $\beta=1$, $\nu=2$, $g''(1)=4$ and $f(1)=1$, so the approximate solution is $a_n \sim n/2$ as expected.

Exercise 16

According to Exercise 3.20

$$
\frac{f(z)}{g(z)}= \frac{z^2}{(1-z)^3}.
$$

Thus, $\beta=1$, $\nu=3$, $g^{(3)}(1)=-6$ and $f(1)=1$, so the approximate solution is $a_n \sim n^2/2$ as expected.

Exercise 17 🌟

According to the fundamental theorem of algebra, a degree $t$ polynomial has exactly $t$ roots, counted with multiplicity.

Showing that the Roots are Distinct

We can rewrite $P(z)=z^t - z^{t-1} - \dots - z - 1$ as

$$
P(z) = z^t - \sum_{k=0}^{t-1} z^k = z^t - \frac{z^t - 1}{z - 1} = \frac{z^{t+1} - 2z^t + 1}{z - 1}=\frac{Q(z)}{z-1}.
$$

Observe that $P(1) = 1-t \neq 0$ for $t>1$, thus $z=1$ isn't a root. Consequently, the roots of $P(z)$ are exactly the roots of $Q(z)$, excluding $z=1$. A polynomial $Q(z)$ has multiple roots only if $Q(z)$ and its derivative $Q'(z)$ share a root.

$$
Q'(z) = (t+1)z^t - 2tz^{t-1} = z^{t-1} [ (t+1)z - 2t ].
$$

The only roots of the derivative are $z=0$ and $z = \frac{2t}{t+1}$. Since $Q(0)=1$, the root $z=0$ isn't shared. For the other candidate, we have

$$
\begin{align*} Q\left(\frac{2t}{t+1}\right) &= \left(\frac{2t}{t+1}\right)^{t+1} - 2\left(\frac{2t}{t+1}\right)^t + 1 \\ &= \left(\frac{2t}{t+1}\right)^t \left[ \frac{2t}{t+1} - 2 \right] + 1 \\ &= 1 - \left(\frac{2t}{t+1}\right)^t \frac{2}{t+1}. \end{align*}
$$

A shared root would need $\left(\frac{2t}{t+1}\right)^t \frac{2}{t+1} = 1$. But by Bernoulli's inequality, $\left(\frac{2t}{t+1}\right)^t = \left(1+\frac{t-1}{t+1}\right)^t \ge 1+\frac{t(t-1)}{t+1}$, so the product is at least $\frac{2(t^2+1)}{(t+1)^2} > 1$ for all $t>1$. Therefore, all roots of $P(z)$ are distinct.

Showing that Exactly One Root has Modulus > 1

Because the coefficients of $P(z)$ are real, complex roots come in conjugate pairs, so a unique dominant root must be a real number. We can verify that $P(1) = 1-t < 0$ and $P(2) = 1 > 0$, hence by the intermediate value theorem, there is a real root in $(1,2)$.
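A bisection on $(1,2)$ confirms this numerically; a small sketch:

```python
# P(z) = z^t - z^(t-1) - ... - z - 1; P(1) = 1 - t < 0 and P(2) = 1 > 0,
# so bisection locates the dominant real root in (1, 2) for every t > 1.
def P(z, t):
    return z**t - sum(z**k for k in range(t))

def dominant_root(t, iterations=60):
    lo, hi = 1.0, 2.0
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if P(mid, t) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for t in (2, 3, 10):
    print(t, dominant_root(t))
```

For $t=2$ this recovers the golden ratio, and the roots approach 2 as $t$ grows.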

By using Rouché's theorem on the unit disk, we show that the remaining $t-1$ roots of $P(z)$ lie inside the unit circle $|z| \le 1$; this theorem relates the number of zeros of two functions, $f(z)$ and $f(z)+g(z)$, inside a region. Let's rearrange the terms of $Q(z)$:

$$
\underbrace{ - 2z^t }_{f(z)} +\underbrace{z^{t+1}+ 1}_{g(z)}.
$$

We focus our attention on a circle of radius $|z| = 1 + \epsilon$ (where $\epsilon$ is a very small positive number). The magnitudes of our functions on the contour are:

  • $|f(z)| = |-2z^t| = 2(1+\epsilon)^t \approx 2 + 2t\epsilon$

  • $|g(z)| \le |z|^{t+1} + 1 = (1+\epsilon)^{t+1} + 1 \approx 1 + (1 + (t+1)\epsilon) = 2 + (t+1)\epsilon$

Since $t > 1$, we have $2t > t+1$, thus $|f(z)| > |g(z)|$. By Rouché's theorem, the polynomial $Q(z) = f(z) + g(z)$ has the same number of roots inside the circle $|z| = 1+\epsilon$ as $f(z)$.

$f(z)$ has a root of multiplicity $t$ at the origin ($z=0$). Therefore, $Q(z)$ has exactly $t$ roots strictly inside the circle $|z| < 1+\epsilon$. Consequently, $P(z)$ has exactly $t-1$ roots inside the same disk; recall that we must exclude $z=1$, which was counted for $Q(z)$.

Exercise 18

We need to apply the result from the previous exercise. Since the recurrence reflects $P(z)=z^t - z^{t-1} - \dots - z - 1$, we know there is a single root $\beta$ (approaching 2 as $t \to \infty$) of highest modulus. According to Theorem 4.1, $F_N^{[t]} \sim C\beta^N$. Wikipedia describes this variant as the $n$-acci sequence and provides a precise expression for the leading term.
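A quick check for $t=3$ (the initial conditions $0,0,1$ are an assumption): successive ratios of the "tribonacci" terms approach the dominant root $\beta$ of $z^3=z^2+z+1$.

```python
# Ratios of consecutive t-step Fibonacci terms converge to the dominant root
# beta ~ 1.8393 (for t = 3), matching F_N ~ C * beta^N.
t = 3
seq = [0] * (t - 1) + [1]          # assumed initial conditions: 0, 0, 1
for _ in range(40):
    seq.append(sum(seq[-t:]))
print(seq[-1] / seq[-2])
```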

Exercise 19

According to Exercise 3.55

$$
\frac{f(z)}{g(z)}= \frac{1}{(1-z^{d_1})(1-z^{d_2})\times \dots \times (1-z^{d_t})}.
$$

Thus, $\beta=1$, $\nu=t$ and $f(1)=1$. We need to compute $g^{(t)}(1)$.

Each factor can be written as

$$
1 - z^{d_i} = (1 - z)h_i(z), \qquad h_i(z)=z^{d_i-1}+z^{d_i-2}+\dots+1.
$$

Clearly, $h_i(1)=d_i$ and $g(z)=(1-z)^t h(z)$, where $h(z)=\prod_{i=1}^t h_i(z)$. Consequently, $h(1)=\prod_{i=1}^t d_i$. By the general Leibniz rule we have

$$
g^{(t)}(z) = \sum_{k=0}^{t} \binom{t}{k} \frac{d^k}{dz^k}(1-z)^t \cdot h^{(t-k)}(z).
$$

At $z = 1$ all terms with $k<t$ vanish because $\frac{d^k}{dz^k}(1-z)^t$ still contains a factor of $1-z$. For $k=t$ we have $\frac{d^t}{dz^t}(1-z)^t = (-1)^t t!$. Thus,

$$
g^{(t)}(1) = \binom{t}{t}(-1)^t \,t! \, h(1) = (-1)^t\,t! \prod_{i=1}^{t} d_i.
$$

By inserting all components into the formula from Theorem 4.1, we get the required identity of this exercise.

Exercise 20

The Python script below generates the extended table. It uses pandas to produce formatted output.

The code displays the following table:

Exercise 21

Tool support is useful, once the underlying mechanisms are properly understood, to save time and reduce the chance of error. For example, we can type the following command into WolframAlpha:

It produces the requested expansion

$$
-x + \dfrac{1}{2}x^{2} + \dfrac{2}{3}x^{3} + O(x^{4}).
$$

Exercise 22

$$
\begin{align*} \ln(N^\alpha+N^\beta) &= \ln N^\alpha + \ln \left(1+\underbrace{\frac{1}{N^{\alpha-\beta}}}_{x \to 0}\right) \\ &= \alpha\ln N + \frac{1}{N^{\alpha-\beta}}-\frac{1}{2N^{2(\alpha-\beta)}}+\frac{1}{3N^{3(\alpha-\beta)}}+O\left(\frac{1}{N^{4(\alpha-\beta)}}\right). \end{align*}
$$

Exercise 23 🌟

Let's start with a Taylor expansion of the expression in terms of $x=1/(N-1)$:

$$
\begin{align*} \frac{N}{N-1}\ln \frac{N}{N-1} &= \left(1+\underbrace{\frac{1}{N-1}}_{x}\right)\ln \left(1+\frac{1}{N-1}\right) \\ &= \frac{1}{N-1}+\frac{1}{2(N-1)^2}-\frac{1}{6(N-1)^3}+O\left(\frac{1}{(N-1)^4}\right), \end{align*}
$$

using $(1+x)(x-x^2/2+x^3/3+O(x^4))=x+x^2/2-x^3/6+O(x^4)$.

A more streamlined version would be stated in terms of $1/N$, since $N-1 \sim N$. By taking $x=1/N$ in the geometric series, we get $1/(N-1)=1/N+1/N^2+1/N^3+O(N^{-4})$. We need to replace the factors involving $1/(N-1)$ in the above series with this expansion. This gives

$$
\boxed{\frac{N}{N-1}\ln \frac{N}{N-1} = \frac{1}{N}+\frac{3}{2N^2}+\frac{11}{6N^3}+O\left(\frac{1}{N^4}\right)}.
$$

Exercise 24

Let $x=0.1$, so $\ln 0.9=\ln (1-0.1)$. By using Table 4.2 and expanding each function to within $O(x^5)$, we get

$$
2+2x+x^2/2+x^3/2+x^4/3+O(x^5).
$$

We should be careful when summing the terms of the logarithm, due to the subtraction and the negative value of $x$. The computed value is 2.20553, which is accurate to within $10^{-4}$.

Exercise 25

Observe that $9801=99^2=(100-1)^2$, so

$$
\frac{1}{9801}=\frac{1}{(100-1)^2}=\frac{1}{10^4(1-0.01)^2}.
$$

Table 4.2 has an expansion of the geometric series. By differentiating both sides of that identity, we get

$$
\frac{1}{(1-x)^2} = \sum_{k=1}^{\infty} k x^{k-1}.
$$

Now, we substitute $x=0.01$ into our derived series:

$$
\begin{align*} \frac{1}{9801} &= \frac{1}{10^4} \times \left[ 1 + 2(0.01) + 3(0.01)^2 + 4(0.01)^3 + \dots \right] \\ &= 0.00|01|02|03|04|\dots|47|48|49|50|\dots \end{align*}
$$

Vertical bars are used to delineate slots populated with successive natural numbers. Each slot accommodates two digits, enabling this sequence to progress up to the number 97. In other words, we can predict digits in this manner to within $100^{-196}$. The pattern breaks at the slot with 98, as illustrated below:

The carry from the 100th term impacts the slots containing 98 and 99.

We can generalize this to generate sequences of integers padded to $n$ digits:

$$
\frac{1}{(10^n - 1)^2}=\frac{1}{10^{2n}(1 - 10^{-n})^2}.
$$

For example, $1/81=0.012345679\dots$ (8 is missing due to the carry-over from 9).
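Both patterns are easy to verify with high-precision arithmetic:

```python
from decimal import Decimal, getcontext

# High-precision division shows the two-digit slots 00|01|02|03|... in 1/9801
# and the single-digit pattern (with the missing 8) in 1/81.
getcontext().prec = 60
print(str(Decimal(1) / Decimal(9801))[:24])
print(str(Decimal(1) / Decimal(81))[:14])
```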

Exercise 26

If we apply the nonasymptotic version of Stirling's formula to describe $\binom{2N}{N}$, we get

$$
\binom{2N}{N} =\frac{(2N)!}{(N!)^2} =\frac{4^N}{\sqrt{\pi N}}\;\frac{1+\dfrac{\theta_{2N}}{24N}}{\bigg(1+\dfrac{\theta_N}{12N}\bigg)^2}.
$$

Our equation becomes

$$
\underbrace{\frac{N4^{N-1}}{\binom{2N}{N}}}_{R_N} =\underbrace{\frac{N\sqrt{\pi N}}{4}}_{A_N}\;\underbrace{\frac{\bigg(1+\dfrac{\theta_N}{12N}\bigg)^2}{1+\dfrac{\theta_{2N}}{24N}}}_{E_N},
$$

where $R_N$, $A_N$, and $E_N$ represent the reference, approximate, and error terms, respectively.

Since $0<\theta_N<1$ and $0<\theta_{2N}<1$, we have the following bounds:

$$
1 < \big(1+\tfrac{\theta_N}{12N}\big)^2 < \Big(1+\tfrac{1}{12N}\Big)^2, \qquad 1 < 1+\tfrac{\theta_{2N}}{24N} < 1+\tfrac{1}{24N}.
$$

This gives

$$
A_N\cdot\frac{1}{1+\dfrac{1}{24N}} < R_N < A_N\cdot\Bigg(1+\dfrac{1}{12N}\Bigg)^2.
$$

Let the relative error $\delta_N$ be defined such that $R_N = A_N(1 + \delta_N)$. This gives

$$
\begin{align*} \frac{1}{1 + \frac{1}{24N}} &< 1 + \delta_N < \left(1 + \frac{1}{12N}\right)^2 \\ \frac{1}{1 + \frac{1}{24N}} - 1 &< \delta_N < \left(1 + \frac{1}{12N}\right)^2 - 1 \\ -\frac{1}{24N+1} &< \delta_N < \frac{1}{6N} + \frac{1}{144N^2}. \end{align*}
$$

We can assert that $\delta_N=O(1/N)$.
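A numeric check that $N\delta_N$ stays bounded (it hovers near $1/8$, the leading Stirling correction):

```python
import math

# N * delta_N should stay bounded, confirming delta_N = O(1/N).
for N in (10, 100, 1000):
    R = (N * 4**(N - 1)) / math.comb(2 * N, N)   # exact int / int -> float
    A = N * math.sqrt(math.pi * N) / 4
    delta = R / A - 1
    print(N, N * delta)
```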

Exercise 27

According to Table 4.4, $H_{1000}=7.4854709$.

The specific bounds implied by the absolute formula are

$$
7.4749709 \le \hat H_{1000} \le 7.4949709.
$$

The specific bounds implied by the relative formula are

$$
0 \le \hat H_{1000} \le 16.9077553.
$$

The LHS is truncated to zero, as a negative lower bound is meaningless. The RHS is huge, since the assumed constant is too large. In a relative formula, we expect $h(N)=o(1)$, which should decay rapidly even for small problem sizes.

Recall that $O(h(N))$ with a constant $|C|<10$ covers the range $\pm Ch(N)$.

Exercise 28

The exact value is $T_{10}=16{,}796$.

The specific bounds implied by the relative formula are

$$
0 \le \hat T_{10} \le 37{,}416.
$$

Exercise 29

Let's split the given series at the threshold value $M>0$:

$$
f(N)=\sum_{k\ge 0} a_k N^{-k} =\sum_{0\le k< M} a_k N^{-k}+\underbrace{\sum_{k\ge M} a_k N^{-k}}_{R_M(N)}.
$$

Since $f(N)$ converges, $R_M(N)$ converges, too. We can rearrange the remainder term as

$$
R_M(N) = \sum_{j\ge 0} a_{M+j}N^{-(M+j)} = N^{-M}\underbrace{\sum_{j\ge 0} a_{M+j}N^{-j}}_{C}= C\cdot N^{-M}.
$$

The underbraced factor $C$ is bounded as $N \to \infty$ (it tends to $a_M$). Hence, $R_M(N)=O(N^{-M})$, which concludes the proof.

Exercise 30

It's not hard to see from Table 3.3 that the terms of our sum can be represented with an EGF

$$
A(z)=\frac{1}{1-\cfrac{z}{N}}=\frac{N}{N-z}=\sum_{k\ge0} \frac{k!}{N^k} \frac{z^k}{k!} = \sum_{k\ge0}\left(\frac{z}{N}\right)^k.
$$

This gives

$$
\begin{align*} \sum_{k\ge0} \frac{k!}{N^k} &= \sum_{k\ge0} \frac{1}{N^k} \underbrace{\int_0^\infty e^{-z} z^k \, dz}_{k!=\Gamma(k+1)} \\ &= \int_0^\infty e^{-z} \sum_{k\ge0} \left(\frac{z}{N}\right)^k \, dz \\ &= \int_0^\infty e^{-z}\frac{N}{N - z} \, dz \\ &= N e^{-N} \operatorname{Ei}(N) = f(N). \end{align*}
$$

The derivation uses the gamma function and the exponential integral to craft the "closed-form" solution. The last line follows after employing the substitution $u=z-N$.

Exercise 31

There is a complementary $\omega$-notation for the opposite direction with respect to the o-notation. It denotes a strict lower bound (grows strictly faster than). We use it in our list below:

  • $e^N=O(N^2)$ is false, since $\lim_{N \to \infty}\frac{e^N}{N^2}=\infty$. We can say that $e^N=\omega(N^2)$.

  • $e^N=O(2^N)$ is false (see Exercise 9). We can say that $e^N=\omega(2^N)$.

  • $2^{-N}$ is exponentially small, so $2^{-N}=O(1/N^{10})$ is true (see Exercise 7).

  • $N^{\ln N}=O(e^{\ln^2 N})$ is true, since $N^{\ln N}=e^{\ln {N^{\ln N}}}=e^{\ln^2 N}$.

Exercise 32

The Taylor expansion of $e^x$ in terms of $x=1/(N+1)$ is

$$
e^{\frac{1}{N+1}}=1+\frac{1}{N+1}+\frac{1}{2(N+1)^2}+O\bigg(\frac{1}{(N+1)^3}\bigg).
$$

We can provide a more streamlined version in terms of $1/N$ (see also Exercise 23). The book shows the expansion $1/(N+1)=1/N-1/N^2+O(N^{-3})$. We need to replace the factors involving $1/(N+1)$ in the expansion of $e^{1/(N+1)}$ with this expansion. We get

$$
\boxed{e^{\frac{1}{N+1}} = 1 + \frac{1}{N} - \frac{1}{2N^2} + O\left(N^{-3}\right)}.
$$

Exercise 33

We can expand the number of terms for $H_N$ from Table 4.6. The first approximation, to within $O(1/N)$, is

$$
\begin{align*} (H_N)^2 &= \bigg(\ln N+\gamma+\frac{1}{2N}+O(N^{-2}) \bigg) \bigg(\ln N+\gamma+\frac{1}{2N}+O(N^{-2}) \bigg) \\ &= \ln^2 N+2\gamma\ln N+\gamma^2+\frac{\ln N}{N}+O(1/N). \end{align*}
$$

The second approximation, to within $o(1/N)$, simply includes the $\gamma/N$ term that was previously absorbed into $O(1/N)$. Thus,

$$
(H_N)^2 = \ln^2 N+2\gamma\ln N+\gamma^2+\frac{\ln N}{N}+\frac{\gamma}{N}+o(1/N).
$$
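A numeric check of the second approximation at $N=1000$:

```python
import math

# Compare H_N^2 against ln^2 N + 2*gamma*ln N + gamma^2 + ln N / N + gamma / N;
# gamma is the Euler-Mascheroni constant.
gamma = 0.5772156649015329
N = 1000
H = sum(1 / k for k in range(1, N + 1))
L = math.log(N)
approx = L * L + 2 * gamma * L + gamma**2 + L / N + gamma / N
print(H * H, approx, H * H - approx)
```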

Exercise 34

$$
\begin{align*} \cot x &= \frac{\cos x}{\sin x} = \frac{1-x^2/2+x^4/24+O(x^6)}{x-x^3/6+x^5/120+O(x^7)} \\ &= (1-x^2/2+x^4/24+O(x^6))\,\frac{1}{x-x^3/6+x^5/120+O(x^7)} \\ &= \frac{1-x^2/2+x^4/24+O(x^6)}{x}\cdot\frac{1}{1-x^2/6+x^4/120+O(x^6)} \\ &= \frac{1-x^2/2+x^4/24+O(x^6)}{x}\,(1+x^2/6-x^4/120+x^4/36+O(x^6)) \\ &= \frac{1-x^2/3-x^4/45+O(x^6)}{x} \\ &= 1/x-x/3-x^3/45+O(x^5). \end{align*}
$$

Exercise 35

$$
\begin{align*} \frac{x}{e^x-1} &= -\frac{x}{1-e^x} \\ &= -\frac{x}{1-(1+x+x^2/2+x^3/6+x^4/24+x^5/120+O(x^6))} \\ &= \frac{1}{1+x/2+x^2/6+x^3/24+x^4/120+O(x^5)} \\ &= 1-x/2-x^2/6-x^3/24-x^4/120+(-x/2-x^2/6-x^3/24)^2+(-x/2-x^2/6)^3+(-x/2)^4+O(x^5) \\ &= 1-x/2+x^2/12-x^4/720+O(x^5). \end{align*}
$$

Exercise 36

A full treatment of this topic is available in the paper Asymptotic Expansions of Central Binomial Coefficients and Catalan Numbers.

Exercise 37

$$
\begin{align*} \frac{1}{N+1}\binom{3N}{N} &= \exp\bigg\{\ln ((3N)!)-\ln (N!)-\ln ((2N)!)-\ln(N+1)\bigg\} \\ &= \exp\bigg\{\left(3N+\frac{1}{2}\right)\ln (3N)-3N+\ln \sqrt{2\pi}+O(1/N) \\ &\qquad -\left(N+\frac{1}{2}\right)\ln N+N-\ln \sqrt{2\pi}+O(1/N) \\ &\qquad -\left(2N+\frac{1}{2}\right)\ln (2N)+2N-\ln \sqrt{2\pi}+O(1/N) \\ &\qquad -\ln N+O(1/N)\bigg\} \\ &= \exp\bigg\{\left(3N+\frac{1}{2}\right)\ln 3-\left(2N+\frac{1}{2}\right)\ln 2-\frac{3}{2}\ln N-\ln \sqrt{2\pi}+O(1/N)\bigg\} \\ &= \frac{27^N}{4^N}\cdot\frac{\sqrt 3}{2\sqrt{\pi} N^{3/2}}\left(1+O(1/N)\right). \end{align*}
$$

Exercise 38

$$
\begin{align*} \frac{(3N)!}{(N!)^3} &= \exp\bigg\{\ln ((3N)!)-3\ln (N!)\bigg\} \\ &= \exp\bigg\{\left(3N+\frac{1}{2}\right)\ln (3N)-3N+\ln \sqrt{2\pi}+O(1/N) \\ &\qquad -3\left(N+\frac{1}{2}\right)\ln N+3N-3\ln \sqrt{2\pi}+O(1/N)\bigg\} \\ &= \exp\bigg\{\left(3N+\frac{1}{2}\right)\ln 3-\ln N-2\ln \sqrt{2\pi}+O(1/N)\bigg\} \\ &= 27^N\frac{\sqrt 3}{2\pi N}\left(1+O(1/N)\right). \end{align*}
$$

Exercise 39

$$
\begin{align*} \left(1-\frac{\lambda}{N}\right)^N &= \exp\bigg\{N\ln \left(1-\frac{\lambda}{N}\right)\bigg\} \\ &= \exp\bigg\{N\left(-\frac{\lambda}{N}+O(1/N^2)\right)\bigg\} \\ &= \exp\bigg\{-\lambda+O(1/N)\bigg\} \\ &= e^{-\lambda}+O(1/N). \end{align*}
$$

Based on the asymptotic expansion, the approximate value of the expression is $e^{-\lambda}$ for large $N$.

Exercise 40

$$
\begin{align*} \left(1-\frac{\ln N}{N}\right)^N &= \exp\bigg\{N\ln \left(1-\frac{\ln N}{N}\right)\bigg\} \\ &= \exp\bigg\{N\left(-\frac{\ln N}{N}-\frac{\ln^2 N}{2N^2}-\frac{\ln^3 N}{3N^3}+O(\ln^4 N/N^4)\right)\bigg\} \\ &= \exp\bigg\{-\ln N-\frac{\ln^2 N}{2N}-\frac{\ln^3 N}{3N^2}+O(\ln^4 N/N^3)\bigg\} \\ &= \frac{1}{N}\exp\bigg\{-\frac{\ln^2 N}{2N}-\frac{\ln^3 N}{3N^2}\bigg\}(1+O(\ln^4 N/N^3)) \\ &= \frac{1}{N}-\frac{\ln^2 N}{2N^2}+\frac{\ln^4 N}{8N^3}+o(\ln^4 N/N^3). \end{align*}
$$

The middle term in the penultimate line must be expanded as $1+x+x^2/2$, where $x$ is the argument of the exponential function. We must retain the three asymptotically largest components.

Exercise 41

When interest is compounded daily at a 10% annual rate on a $10,000 principal, after 365 days the amount is $\$10{,}000(1+0.1/365)^{365}$. Instead of computing this value directly, let's apply the result from Exercise 39. Here, we have $\lambda=-0.1$, so the total amount increases to $\approx \$10{,}000 \times e^{0.1}=\$11{,}051.7$. This is about $51.7 more than what would be paid if interest were compounded once a year.

circle-info

The exact formula gives ≈$11,051.6, which is $0.1 less than our quick calculation. This showcases the advantages of using asymptotic approximations.
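Both figures are easy to reproduce (a sketch; amounts rounded to cents):

```python
import math

# Daily compounding versus the asymptotic shortcut e^(0.1) from Exercise 39.
principal, rate, days = 10_000, 0.1, 365
exact = principal * (1 + rate / days) ** days
approx = principal * math.exp(rate)
print(round(exact, 2), round(approx, 2))  # ~11051.56 vs ~11051.71
```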

Exercise 42

exp{1+1N+O(1/N2)}=eexp{1N+O(1/N2)}=e(1+1N+O(1/N2))=e+eN+O(1/N2).\begin{align*} \exp\bigg\{1+\frac{1}{N}+O(1/N^2)\bigg\} &= e \cdot \exp\bigg\{\frac{1}{N}+O(1/N^2)\bigg\} \\ &= e\left(1+\frac{1}{N}+O(1/N^2)\right) \\ &= e+\frac{e}{N}+O(1/N^2). \end{align*}

Exercise 43

ln(sin(1N!))=ln(1N!(1+O(1(N!)2)))=ln(1N!)+ln(1+O(1(N!)2))=ln(N!)+O(1(N!)2)=NlnN+NlnNln2π112N+O(1N2).\begin{align*} \ln\left(\sin\left(\frac{1}{N!}\right)\right) &= \ln\left(\frac{1}{N!} \left(1+O\left(\frac{1}{(N!)^2}\right)\right)\right) \\ &= \ln\left(\frac{1}{N!}\right) + \ln\left(1+O\left(\frac{1}{(N!)^2}\right)\right) \\ &= -\ln (N!) + O\left(\frac{1}{(N!)^2}\right) \\ &= -N \ln N + N - \ln \sqrt N - \ln \sqrt{2\pi} - \frac{1}{12N} + O\left(\frac{1}{N^2}\right). \end{align*}

Exercise 44

sin(tan(1/N))=sin(1N+13N3+O(1N5))=1N+13N316N3+O(1N5)=1N+16N3+O(1N5)1N.\begin{align*} \sin(\tan(1/N)) &= \sin\left(\frac{1}{N}+\frac{1}{3N^3}+O\left(\frac{1}{N^5}\right)\right) \\ &= \frac{1}{N}+\frac{1}{3N^3}-\frac{1}{6N^3}+O\left(\frac{1}{N^5}\right) \\ &= \frac{1}{N}+\frac{1}{6N^3}+O\left(\frac{1}{N^5}\right) \sim \frac{1}{N}. \end{align*}
tan(sin(1/N))=tan(1N16N3+O(1N5))=1N16N3+13N3+O(1N5)=1N+16N3+O(1N5)1N.\begin{align*} \tan(\sin(1/N)) &= \tan\left(\frac{1}{N}-\frac{1}{6N^3}+O\left(\frac{1}{N^5}\right)\right) \\ &= \frac{1}{N}-\frac{1}{6N^3}+\frac{1}{3N^3}+O\left(\frac{1}{N^5}\right) \\ &= \frac{1}{N}+\frac{1}{6N^3}+O\left(\frac{1}{N^5}\right) \sim \frac{1}{N}. \end{align*}

At this point, we could say that the order of growth of sin(tan(1/N))tan(sin(1/N))\sin(\tan(1/N)) –\tan(\sin(1/N)) is O(N5)O(N^{-5}), but this would be a loose bound. Since the first two terms match, we must expand until we find a difference. Let x=1/Nx=1/N and leverage WolframAlpha to perform the hard work.

series sin(tan(x)) to order 7=x+16x3140x5551008x7+O(x9)series tan(sin(x)) to order 7=x+16x3140x51075040x7+O(x9).\begin{align*} \text{series sin(tan(x)) to order 7} &= x + \frac{1}{6}x^3 - \frac{1}{40}x^5 - \frac{55}{1008}x^7 + O(x^9) \\ \text{series tan(sin(x)) to order 7} &= x + \frac{1}{6}x^3 - \frac{1}{40}x^5 - \frac{107}{5040}x^7 + O(x^9). \end{align*}

The expansions first disagree at the fourth term, thus the order of growth of sin(tan(1/N))tan(sin(1/N))\sin(\tan(1/N)) –\tan(\sin(1/N)) is O(N7)O(N^{-7}).
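By the two series above, the difference behaves like x7/30-x^7/30 as x0x \to 0, which is easy to confirm numerically (a sketch):

```python
import math

# The leading term of sin(tan x) - tan(sin x) is -x^7/30; check the ratio.
for x in (0.1, 0.05, 0.02):
    diff = math.sin(math.tan(x)) - math.tan(math.sin(x))
    print(x, diff / (-x**7 / 30))  # ratios tend to 1
```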

Exercise 45

TN=4NπN3(1+O(1/N))HN=lnN+γ+O(1/N).\begin{align*} T_N &= \frac{4^N}{\sqrt {\pi N^3}}(1+O(1/N)) \\ H_N &= \ln N+\gamma+O(1/N). \end{align*}

By substituting N=TNN=T_N into the expansion of HNH_N, we get

HTN=2Nln2lnπN3+γ+O(1/N).H_{T_N} = 2N\ln 2-\ln \sqrt {\pi N^3}+\gamma+O(1/N).

Exercise 46

This exercise is about the asymptotic expansion of the Lambert W functionarrow-up-right. The cited article contains the full expansion in terms of substitutions L1L_1 and L2L_2. We depict here the steps leading to that expression using those symbols.

We derive the basic equation marked with an asterisk

n=aneanlnn=lnan+anan=L1lnan(*).\begin{align*} n &= a_n e^{a_n} \\ \ln n &= \ln a_n + a_n \\ a_n &= L_1 - \ln a_n \qquad \text{(*)}. \end{align*}

The first iteration of bootstrapping gives anL1a_n \sim L_1. The second iteration gives anL1L2a_n \sim L_1 - L_2. The third iteration gives

anL1ln(L1L2)=L1ln(L1(1L2L1))=L1L2ln(1L2L1)L1L2+L2L1.\begin{align*} a_n &\sim L_1 - \ln(L_1 - L_2) \\ &= L_1 - \ln \left( L_1 \left( 1 - \frac{L_2}{L_1} \right) \right) \\ &= L_1 - L_2 - \ln \left( 1 - \frac{L_2}{L_1} \right) \\ &\sim L_1 - L_2 + \frac{L_2}{L_1}. \end{align*}

By carrying out one more iteration, we get the equation from the cited article to within the requested accuracy. For the sake of completeness, here is the identity without using symbols L1L_1 and L2L_2

an=lnnlnlnn+lnlnnlnn+(lnlnn)22lnlnn2(lnn)2+O((lnlnnlnn)3).\boxed{a_n = \ln n - \ln \ln n + \frac{\ln \ln n}{\ln n} + \frac{(\ln \ln n)^2 - 2\ln \ln n}{2(\ln n)^2} + O\left( \left(\frac{\ln \ln n}{\ln n}\right)^{3} \right)}.
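A numerical spot-check (a sketch): solve anean=na_ne^{a_n}=n by bisection and compare the root with the truncated expansion.

```python
import math

# Solve a*e^a = n by bisection (the left side is increasing for a >= 1)
# and compare the root with the truncated asymptotic expansion.
def lambert(n):
    lo, hi = 1.0, math.log(n)
    for _ in range(100):
        mid = (lo + hi) / 2
        if mid * math.exp(mid) < n:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

n = 1e12
L1 = math.log(n)
L2 = math.log(L1)
approx = L1 - L2 + L2 / L1 + (L2**2 - 2 * L2) / (2 * L1**2)
print(lambert(n), approx)  # agree to a few decimal places
```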

Exercise 47 🌟

circle-check

We need to provide the reversion of the power series y=c0+c1x+c2x2+c3x3+O(x4)y = c_0 + c_1x + c_2x^2 + c_3x^3 + O(x^4). By applying the hint from the book, i.e., setting z=(yc0)/c1z=(y-c_0)/c_1 with c2=c2/c1c'_2=c_2/c_1 and c3=c3/c1c'_3=c_3/c_1, this equation transforms into z=x+c2x2+c3x3+O(x4)z = x + c'_2x^2 + c'_3x^3 + O(x^4). We already know the reversion of this form from the book:

x=zc2z2+(2c22c3)z3+O(z4).x=z-c'_2z^2+(2c'^2_2-c'_3)z^3+O(z^4).

Now, we just need to express it again in terms of yy and the original coefficients

x=yc0c1c2c13(yc0)2+2c22c1c3c15(yc0)3+O((yc0)4).\boxed{x=\frac{y-c_0}{c_1} - \frac{c_2}{c_1^3}(y-c_0)^2 + \frac{2c_2^2 - c_1 c_3}{c_1^5}(y-c_0)^3 + O((y-c_0)^4)}.
circle-info

The expansion of xx is in powers of the small quantity yc0y - c_0, not yy itself (unless c0=0c_0 = 0). Using purely yy is incorrect when c00c_0 \neq 0 because yy doesn’t tend to zero.
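A numerical spot-check of the boxed reversion formula (a sketch, with arbitrarily chosen coefficients): the recovery error should shrink like (yc0)4(y-c_0)^4.

```python
# Spot-check the reversion: build y from x, then recover x from y.
c0, c1, c2, c3 = 2.0, 1.5, -0.7, 0.4

def revert(y):
    u = y - c0
    return u / c1 - (c2 / c1**3) * u**2 + ((2 * c2**2 - c1 * c3) / c1**5) * u**3

for x in (0.01, 0.005):
    y = c0 + c1 * x + c2 * x**2 + c3 * x**3
    print(x, abs(revert(y) - x) / x**4)  # the ratio stays bounded
```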

Exercise 48

k11/k2=π2/6\sum_{k\ge 1} 1/k^2=\pi^2/6 is known as the Basel problemarrow-up-right. According to the direct comparison testarrow-up-right, the sum k11/(k2Hk)\sum_{k\ge 1} 1/(k^2H_k) also converges to some constant CC. The tail RNR_N can be approximated using the asymptotic expansion of HNlnNH_N\sim\ln N (see Table 4.6). This gives

RN=k>N1k2Hk<k>N1k2lnkN1x2lnxdx.R_N=\sum_{k>N} \frac{1}{k^2H_k}<\sum_{k>N} \frac{1}{k^2\ln k} \approx \int_N^\infty \frac{1}{x^2 \ln x} \, dx.

We can estimate RNR_N by using integration by parts to find the leading asymptotic term. Let u=(lnx)1u = (\ln x)^{-1} and dv=x2dxdv = x^{-2} dx. Then du=x1(lnx)2dxdu = -x^{-1}(\ln x)^{-2} dx and v=x1v = -x^{-1}.

N1x2lnxdx=1xlnxNN1x2(lnx)2dx1NlnN1lnNN1x2lnxdxRN=1NlnN+O(1Nlog2N).\begin{align*} \int_N^\infty \frac{1}{x^2 \ln x} dx &= -\frac{1}{x \ln x} \bigg|_N^\infty - \int_N^\infty \frac{1}{x^2 (\ln x)^2} dx \\ &\approx \frac{1}{N \ln N} - \frac{1}{\ln N}\int_N^\infty \frac{1}{x^2 \ln x} dx \\ &\therefore \boxed{R_N=\frac{1}{N \ln N} +O\left(\frac{1}{N \log^2 N}\right)}. \end{align*}

This gives

k=1N1k2Hk=C1NlnN+O(1Nlog2N),\sum_{k=1}^{N} \frac{1}{k^{2} H_{k}} =C -\frac{1}{N \ln N} + O\left(\frac{1}{N \log^2 N}\right),

so that 1+1/6+1/16.5+1/(3ln3)1.531+1/6+1/16.5+1/(3\ln 3) \approx 1.53 is a first rough estimate of the constant CC, just by taking the first 3 terms (together with a correction factor); at N=3N=3 the asymptotic tail correction still overshoots the true tail considerably. It is a trivial matter to calculate the value of this constant to any reasonable desired accuracy by taking more terms.
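The constant can be pinned down numerically (a sketch): the partial sum plus the asymptotic tail correction 1/(NlnN)1/(N\ln N) stabilizes quickly.

```python
import math

# Estimate C = sum_{k>=1} 1/(k^2 H_k) via a partial sum plus the
# asymptotic tail correction 1/(N ln N).
def estimate(N):
    H, total = 0.0, 0.0
    for k in range(1, N + 1):
        H += 1 / k
        total += 1 / (k * k * H)
    return total + 1 / (N * math.log(N))

print(estimate(10), estimate(100), estimate(1000))  # stabilizes near 1.34
```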

Exercise 49

k=0N1Fk=ψRN.\sum_{k=0}^{N} \frac{1}{F_k} = \psi - R_N.

The infinite sum converges to the reciprocal Fibonacci constantarrow-up-right ψ\psi. Fibonacci numbers grow exponentially, Fkϕk/5F_k \sim\phi^k / \sqrt{5} where ϕ\phi is the golden ratio. Consequently, the terms in the tail decay exponentially and the tail sum can be approximated by a geometric series. Thus, RN=O(ϕN)R_N=O(\phi^{-N}).
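A quick check of the exponential convergence (a sketch, summing from F1=1F_1=1):

```python
# Partial sums of reciprocal Fibonacci numbers converge exponentially
# fast to psi ~ 3.3598856662.
def psi_partial(N):
    a, b = 1, 1  # F_1, F_2
    total = 0.0
    for _ in range(N):
        total += 1 / a
        a, b = b, a + b
    return total

print(psi_partial(30), psi_partial(60))
```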

Exercise 50

k=0N2k2k+1=k=0N(112k+1)=N+1k=0N12k+1=N+1(k012k+1Ck>N12k+1RN).\begin{align*} \sum_{k=0}^{N} \frac{2^k}{2^k+1} &= \sum_{k=0}^{N} \left(1 - \frac{1}{2^k+1}\right) \\ &= N+1 - \sum_{k=0}^{N} \frac{1}{2^k+1} \\ &= N+1 - \left(\underbrace{\sum_{k\ge0} \frac{1}{2^k+1}}_{C}-\underbrace{\sum_{k>N} \frac{1}{2^k+1}}_{R_N}\right). \end{align*}

The tail can be bounded analogously to the example given in the book pertaining to tries. Therefore, RN=O(2N)R_N=O(2^{-N}).

Exercise 51

Here, the terms increase rapidly, so the largest (last) summand dominates: we factor out 2N22^{N^2} and bound the geometrically decreasing contributions of the remaining terms.

k=0N2k2=2N2(1+O(22N)).\sum_{k=0}^{N} 2^{k^2}=2^{N^2} \left(1 + O\left(2^{-2N}\right)\right).

Exercise 52

k=1Nkt=k=0Nkt=Nt+1t+1+Nt2+i=1t/2B2i(2i)!t2i1Nt2i+1.\begin{align*} \sum_{k=1}^N k^t &= \sum_{k=0}^N k^t \\ &= \frac{N^{t+1}}{t+1} + \frac{N^t}{2} + \sum_{i=1}^{\lfloor t/2 \rfloor} \frac{B_{2i}}{(2i)!} t^{\underline{2i-1}} N^{\,t-2i+1}. \end{align*}

Notice the usage of the falling factorialarrow-up-right (see also Chapter 3 of the book).

Exercise 53

Let h(x)=11+xh(x)=\frac{1}{1+x}, so h(n)(x)=(1)nn!(1+x)n+1h^{(n)}(x)=(-1)^n\frac{n!}{(1+x)^{n+1}}. This gives

0kNh(k/N)N01h(x)dx+h(0)+h(1)2+i1B2i(2i)!1N2i1  h(2i1)(x)01=Nln2+34+i1B2i2i122iN2i1=Nln2+34+116N1128N3+=Nln2+34+O(N1).\begin{align*} \sum_{0\le k\le N} h(k/N) &\sim N\int_0^1 h(x)\,dx+\frac{h(0)+h(1)}{2}+\sum_{i\ge1}\frac{B_{2i}}{(2i)!}\frac{1}{N^{2i-1}}\;h^{(2i-1)}(x)\Big|_0^1 \\ &= N\ln 2+\frac{3}{4}+\sum_{i\ge1} \frac{B_{2i}}{2i}\,\frac{1-2^{-2i}}{N^{2i-1}} \\ &= N\ln 2+\frac34+\frac{1}{16N}-\frac{1}{128N^{3}}+\cdots \\ &= N\ln 2+\frac34+O(N^{-1}). \end{align*}
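The expansion is easy to validate numerically (a sketch):

```python
import math

# Compare the sum with N*ln(2) + 3/4 + 1/(16N); the residual should
# shrink like O(1/N^3), matching the -1/(128 N^3) term.
for N in (10, 100, 1000):
    total = sum(1 / (1 + k / N) for k in range(N + 1))
    approx = N * math.log(2) + 3 / 4 + 1 / (16 * N)
    print(N, total - approx)
```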

Exercise 54

Let h(x)=11+x2h(x)=\frac{1}{1+x^2}, so h(x)=2x(1+x2)2h'(x)=-\frac{2x}{(1+x^{2})^{2}}. We don’t need higher derivatives, since only the first Bernoulli correction is required. This gives

0kNh(k/N)N01h(x)dx+h(0)+h(1)2+i1B2i(2i)!1N2i1  h(2i1)(x)01=πN4+34124N+O(N3).\begin{align*} \sum_{0\le k\le N} h(k/N) &\sim N\int_0^1 h(x)\,dx+\frac{h(0)+h(1)}{2}+\sum_{i\ge1}\frac{B_{2i}}{(2i)!}\frac{1}{N^{2i-1}}\;h^{(2i-1)}(x)\Big|_0^1 \\ &= \frac{\pi N}{4}+\frac{3}{4}-\frac{1}{24N}+O(N^{-3}). \end{align*}

The definite integral can be computed using WolframAlpha by issuing Integrate[1/(1+x^2),{x,0,1}].

Exercise 55

Let’s use the formula from the book

γ=Hnlnn12n+112n21120n4.\gamma = H_n - \ln n - \frac{1}{2n} + \frac{1}{12n^2} - \frac{1}{120n^4}.

The following Python script dumps the value of γ\gamma accurate to 10 decimal places. By uncommenting (adding) additional correction terms, we can attain the same accuracy in a third of the time, since we only need to take seven summands from the harmonic sum.
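(The script itself did not survive the export of this page; the sketch below reconstructs it from the formula above, using a comfortably large nn.)

```python
import math

# gamma = H_n - ln n - 1/(2n) + 1/(12n^2) - 1/(120n^4) + O(1/n^6).
n = 10_000
H = sum(1 / k for k in range(1, n + 1))
gamma = H - math.log(n) - 1 / (2 * n) + 1 / (12 * n**2) - 1 / (120 * n**4)
print(f"{gamma:.10f}")  # 0.5772156649
```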

The code prints 0.5772156649.

circle-info

Wikipedia has a full description of Euler’s constantarrow-up-right.

Exercise 56

We’ve already mentioned in Exercise 48 that k=11k2=ζ(2)=π2/6\sum_{k=1}^\infty \frac{1}{k^2}=\zeta(2)=\pi^2/6. Let’s take f(x)=1/x2f(x)=1/x^2 and compute the Euler-Maclaurin constant

Cf=limN(1kN1k21Ndxx2f(N)2)=π261.C_f=\lim_{N \to \infty} \left(\sum_{1 \le k \le N} \frac{1}{k^2} -\int_1^N\frac{dx}{x^2}-\frac{f(N)}{2}\right)=\frac{\pi^2}{6}-1.

Employing Theorem 4.3 gives

HN(2)=1kN1k2=1Ndxx2+12N2+Cf+i>0B2i(2i)!f(2i1)(N)=1N+1+12N2+π261+i>0B2i(2i)!((2i)!N2i+1)=π261N+12N2i>0B2iN2i+1π261N+12N216N3+\begin{align*} H_N^{(2)} &= \sum_{1 \le k \le N}\frac{1}{k^2} \\ &= \int_1^N\frac{dx}{x^2}+\frac{1}{2N^2}+C_f + \sum_{i>0} \frac{B_{2i}}{(2i)!} f^{(2i-1)}(N) \\ &= -\frac{1}{N} +1+ \frac{1}{2N^2} +\frac{\pi^2}{6}-1+ \sum_{i>0} \frac{B_{2i}}{(2i)!} \left( -\frac{(2i)!}{N^{2i+1}} \right) \\ &= \frac{\pi^2}{6}-\frac{1}{N} + \frac{1}{2N^2} - \sum_{i>0} \frac{B_{2i}}{N^{2i+1}} \\ &\sim \frac{\pi^2}{6}-\frac{1}{N} + \frac{1}{2N^2} - \frac{1}{6N^3} + \dots \end{align*}

Exercise 57

This is a variation of the previous exercise. The infinite sum converges to the Apéry's constantarrow-up-right. Without repeating the technical details, we get

HN(3)=ζ(3)12N2+O(N3).H_N^{(3)} = \zeta(3) - \frac{1}{2N^2} +O(N^{-3}).
circle-info

All cases are perfectly covered in the article about asymptotic expansions of the generalized harmonic numbersarrow-up-right on StackExchange for Mathematics.

Exercise 58

Using Theorem 4.3, as done in Exercise 56, we obtain the following asymptotic estimates

k=1Nk=23N3/2+12N1/2+ζ(1/2)+124N1/2+O(N2)k=1N1k=2N+12N1/2+ζ(1/2)124N3/2+O(N2)k=1N1k3=32N2/3+12N1/3+ζ(1/3)136N4/3+O(N2).\begin{align*} \sum_{k=1}^{N} \sqrt{k} &= \frac{2}{3} N^{3/2} + \frac{1}{2} N^{1/2} + \zeta(-1/2) + \frac{1}{24} N^{-1/2} + O(N^{-2}) \\[4pt] \sum_{k=1}^{N} \frac{1}{\sqrt{k}} &= 2\sqrt{N} + \frac{1}{2} N^{-1/2} + \zeta(1/2) - \frac{1}{24} N^{-3/2} + O(N^{-2}) \\[4pt] \sum_{k=1}^{N} \frac{1}{\sqrt[3]{k}} &= \frac{3}{2} N^{2/3} + \frac{1}{2} N^{-1/3} + \zeta(1/3) - \frac{1}{36} N^{-4/3} + O(N^{-2}). \end{align*}

The values of the Riemann zeta function at these fractional arguments have no known simple closed forms, unlike ζ(2)\zeta(2) in Exercise 56; they’re typically evaluated numerically.
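For instance, ζ(1/2)\zeta(-1/2) can be backed out of the first expansion numerically (a sketch):

```python
import math

# Remove the known terms of sum(sqrt(k)); what remains approaches
# zeta(-1/2) ~ -0.2078862250 with an O(1/N^2) error.
N = 10_000
total = sum(math.sqrt(k) for k in range(1, N + 1))
zeta_half = total - (2 / 3) * N**1.5 - math.sqrt(N) / 2 - 1 / (24 * math.sqrt(N))
print(zeta_half)
```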

Exercise 59

The main idea is presented in the article about the asymptotic expansion of the alternating harmonic numbersarrow-up-right on Wikipedia. In both subtasks, we separate the terms based on the parity of NN.

Expansion of 1kN(1)kk\sum_{1 \le k \le N} \frac{(-1)^k}{k}

Let’s start by assuming N=2nN=2n. This gives

S2n=k=12n(1)kk=k=12n(1)k+1k(form used on Wikipedia)=k=1n12kk=1n12k1=k=1n12k(H2nk=1n12k)=212HnH2n=HnH2nln2+14n116n2+=ln2+12N14N2+\begin{align*} S_{2n} &= \sum_{k=1}^{2n} \frac{(-1)^k}{k} \\ &= -\sum_{k=1}^{2n} \frac{(-1)^{k+1}}{k} \qquad \text{(form used on Wikipedia)} \\ &= \sum_{k=1}^n \frac{1}{2k} - \sum_{k=1}^n \frac{1}{2k-1} \\ &= \sum_{k=1}^n \frac{1}{2k} - \left( H_{2n} - \sum_{k=1}^n \frac{1}{2k} \right) \\ &= 2 \cdot \frac{1}{2} H_n - H_{2n} \\ &= H_n - H_{2n} \\ &\sim -\ln 2 + \frac{1}{4n} - \frac{1}{16n^2} + \dots \\ &= -\ln 2 + \frac{1}{2N} - \frac{1}{4N^2} + \dots \end{align*}

When N=2n1N=2n-1, we have

S2n1=S2n(1)2n2n=S2n12n.S_{2n-1} = S_{2n} - \frac{(-1)^{2n}}{2n} = S_{2n} - \frac{1}{2n}.

Expansion of 1kN(1)kk\sum_{1 \le k \le N} \frac{(-1)^k}{\sqrt k}

Let’s start by assuming N=2nN=2n. This gives (see also the previous exercise about the expansion of Hn(1/2)H_n^{(1/2)})

S2n=k=12n(1)kk=k=1n12kk=1n12k1=12Hn(1/2)(H2n(1/2)12Hn(1/2))=2Hn(1/2)H2n(1/2)(21)ζ(1/2)+122n=(21)ζ(1/2)+12N\begin{align*} S_{2n} &= \sum_{k=1}^{2n} \frac{(-1)^k}{\sqrt{k}} \\ &= \sum_{k=1}^n \frac{1}{\sqrt{2k}} - \sum_{k=1}^n \frac{1}{\sqrt{2k-1}} \\ &= \frac{1}{\sqrt{2}}H_n^{(1/2)} - \left( H_{2n}^{(1/2)} - \frac{1}{\sqrt{2}}H_n^{(1/2)} \right) \\ &= \sqrt{2}H_n^{(1/2)} - H_{2n}^{(1/2)} \\ &\sim (\sqrt{2}-1)\zeta(1/2) + \frac{1}{2\sqrt{2n}}-\cdots \\ &= (\sqrt{2}-1)\zeta(1/2) + \frac{1}{2\sqrt{N}}-\cdots \end{align*}

When N=2n1N=2n-1, we have

S2n1=S2n12n.S_{2n-1} = S_{2n} - \frac{1}{\sqrt{2n}}.
circle-info

The alternating zeta function is known as the Dirichlet eta functionarrow-up-right.

Exercise 60

The solutionarrow-up-right is available on StackExchange for Mathematics. In the cited post, we should assume z1z \ge 1, based on the identity from the book involving the generalized binomial coefficient.

Exercise 61

Replace the asymptotic estimate used in the derivation of Theorem 4.5 with the inequality ln(1+x)x-\ln(1+x)\ge-x, valid for x0x \ge 0. See the proof of the corollary of Theorem 4.5 for context.

Exercise 62

The proofs of the relative bounds for the Ramanujan Q- and R-distributions are literally the same. Stirling’s formula for lnN!\ln N! is available in Table 4.6 in the book.

lnN!(Nk)!Nk=lnN!ln(Nk)!klnN=[(N+12)lnNN+ln2π+O(1/N)][(Nk+12)ln(Nk)(Nk)+ln2π+O(1/(Nk))]klnN=[(N+12)lnNN][(Nk+12)(lnN+ln(1kN))N+k]klnN=k(Nk+12)ln(1kN)=k(Nk+12)(kNk22N2+O(k3N3))=k22N+O(kN)+O(k3N2)N!(Nk)!Nk=exp{k22N+O(kN)+O(k3N2)}=ek2/(2N)(1+O(kN)+O(k3N2)).\begin{align*} \ln \frac{N!}{(N-k)! N^k} &= \ln N! - \ln(N-k)! - k \ln N \\ &= \left[ \left(N+\frac{1}{2}\right)\ln N - N+\ln \sqrt{2\pi}+ O(1/N) \right] \\ &\qquad - \left[ \left(N-k+\frac{1}{2}\right)\ln(N-k) - (N-k) +\ln \sqrt{2\pi}+ O(1/(N-k))\right] \\ &\qquad- k \ln N \\ &= \left[ \left(N+\frac{1}{2}\right)\ln N - N \right] - \left[ \left(N-k+\frac{1}{2}\right)\left(\ln N + \ln\left(1-\frac{k}{N}\right)\right) - N + k \right] - k \ln N \\ &= -k - \left(N-k+\frac{1}{2}\right)\ln\left(1-\frac{k}{N}\right) \\ &= -k - \left(N-k+\frac{1}{2}\right)\left(-\frac{k}{N} - \frac{k^2}{2N^2}+O\left(\frac{k^3}{N^3}\right)\right) \\ &= -\frac{k^2}{2N}+O\left(\frac{k}{N}\right)+O\left(\frac{k^3}{N^2}\right) \\ &\therefore \frac{N!}{(N-k)! N^k}=exp\left\{-\frac{k^2}{2N}+O\left(\frac{k}{N}\right)+O\left(\frac{k^3}{N^2}\right)\right\}=e^{-k^2/(2N)}\left(1+O\left(\frac{k}{N}\right)+O\left(\frac{k^3}{N^2}\right)\right). \end{align*}

Notice that the above approximation is valid only for k=o(N2/3)k=o(N^{2/3}). Under this regime, both Stirling error terms, O(1/N)O(1/N) and O(1/(Nk))O(1/(N-k)), "vanish" for large NN. Furthermore, the O(k4/N3)O(k^4/N^3) term is absorbed into O(k3/N2)O(k^3/N^2).

The R-distribution is handled in an analogous fashion starting with

lnN!Nk(N+k)!=lnN!ln(N+k)!+klnN.\ln \frac{N!N^k}{(N+k)!} = \ln N! - \ln(N+k)! + k \ln N .

Exercise 63

The best approach here is to simply reuse a high-quality implementation of the PMF for the binomial distribution. The following Python snippet demonstrates the typical usage scenario.
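(The snippet is likewise missing from the page; the stand-in below sticks to the standard library's math.comb, while a production setting would typically reach for scipy.stats.binom.pmf.)

```python
from math import comb

def binom_pmf(N, k, p):
    """Exact binomial PMF: C(N, k) * p^k * (1-p)^(N-k)."""
    return comb(N, k) * p**k * (1 - p) ** (N - k)

# Probability of exactly 50 heads in 100 fair coin flips.
print(binom_pmf(100, 50, 0.5))  # ~0.0795892
```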

Exercise 64

If k=N+O(1)k=\sqrt N+O(1) then we cannot reach the required accuracy by relying on Theorem 4.6 alone. We must go back to the proof from the book and add more terms to every component in the product. The augmented central binomial coefficient is

122N(2NN)=1πN(118N+O(1N2)).\frac{1}{2^{2N}}\binom{2N}{N}= \frac{1}{\sqrt{\pi N}}\left(1 - \frac{1}{8N} + O\left(\frac{1}{N^2}\right)\right).

The extended product of the Ramanujan Q- and R-distributions is

N!(Nk)!N!(N+k)!=exp{1j<kln(1jN)1jkln(1+jN)}=exp{j=1k[ln(1jN)ln(1+jN)]ln(1kN)}=exp{j=1k(2jN2j33N3+O(j5N5))ln(1kN)}=exp{2N(k2+k2)23N3(k4+2k3+k24)+O(1N2)ln(1kN)}=exp{2N(k2+k2)23N3(k4+2k3+k24)+kN+k22N2+k33N3+O(1N2)}=exp{k2Nk46N3+k22N2k26N3+O(1N2)}=exp{k2Nk46N3+k22N2+O(1N2)}=exp{12δN+1/3δ2N+δ3N3/2+O(1N2)}=e1(12δN+δ2+1/3N+2δ3δ3N3/2+O(1N2)).\begin{align*} \frac{N!}{(N-k)!}\,\frac{N!}{(N+k)!} &= \exp\left\{\sum_{1 \le j < k} \ln\left(1 - \frac{j}{N}\right) - \sum_{1 \le j \le k} \ln\left(1 + \frac{j}{N}\right)\right\} \\ &= \exp\left\{\sum_{j=1}^k \left[ \ln\left(1 - \frac{j}{N}\right) - \ln\left(1 + \frac{j}{N}\right) \right] - \ln\left(1 - \frac{k}{N}\right)\right\} \\ &= \exp\left\{\sum_{j=1}^k \left( -\frac{2j}{N} - \frac{2j^3}{3N^3} + O\left(\frac{j^5}{N^5}\right) \right) - \ln\left(1 - \frac{k}{N}\right)\right\} \\ &= \exp\left\{-\frac{2}{N}\left(\frac{k^2+k}{2}\right) - \frac{2}{3N^3}\left(\frac{k^4+2k^3+k^2}{4}\right) + O\left(\frac{1}{N^2}\right) - \ln\left(1 - \frac{k}{N}\right)\right\} \\ &= \exp\left\{-\frac{2}{N}\left(\frac{k^2+k}{2}\right) - \frac{2}{3N^3}\left(\frac{k^4+2k^3+k^2}{4}\right) + \frac{k}{N} + \frac{k^2}{2N^2} + \frac{k^3}{3N^3} + O\left(\frac{1}{N^2}\right)\right\} \\ &= \exp\left\{-\frac{k^2}{N} - \frac{k^4}{6N^3} + \frac{k^2}{2N^2} - \frac{k^2}{6N^3} + O\left(\frac{1}{N^2}\right)\right\} \\ &= \exp\left\{-\frac{k^2}{N} - \frac{k^4}{6N^3} + \frac{k^2}{2N^2} + O\left(\frac{1}{N^2}\right)\right\} \\ &= \exp\left\{-1 - \frac{2\delta}{\sqrt{N}} + \frac{1/3 - \delta^2}{N} + \frac{\delta}{3N^{3/2}} + O\left(\frac{1}{N^2}\right)\right\} \\ &= e^{-1} \left( 1 - \frac{2\delta}{\sqrt{N}} + \frac{\delta^2 + 1/3}{N} + \frac{2\delta^3 - \delta}{3N^{3/2}} + O\left(\frac{1}{N^2}\right) \right). \end{align*}

The last two steps follow from substituting k=N+δk=\sqrt N + \delta, where δ\delta is the displacement (hidden constant inside the O-notation) and expanding exe^x. Multiplying together the two components, we get

122N(2NNk)=e1πN(12δN+δ2+5/24N+8δ3δ12N3/2+O(1N2)).\frac{1}{2^{2N}}\binom{2N}{N-k} = \frac{e^{-1}}{\sqrt{\pi N}} \left( 1 - \frac{2\delta}{\sqrt{N}} + \frac{\delta^2 + 5/24}{N} + \frac{8\delta^3 - \delta}{12 N^{3/2}} + O\left(\frac{1}{N^2}\right) \right).

To get an absolute error, we would just distribute the leading term all the way through in the formula above and simplify.

Exercise 65

Exercise 66

This is the case when the binomial distribution can be accurately approximated by a normal distribution centered at the mean pNpN. For example, the book already uses such an approximation 1/πN/2\sim 1/\sqrt {\pi N/2} with p=1/2p=1/2 in Figure 4.3. More generally, the estimated probability is

12πNpq.\sim \frac{1}{\sqrt {2 \pi N pq}}.
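A quick numerical comparison (a sketch, computed in log space to avoid underflow at large NN):

```python
import math

# Binomial PMF at the mean k = pN versus the normal-approximation height
# 1/sqrt(2*pi*N*p*q); they agree to a relative O(1/N).
def log_pmf(N, k, p):
    return (math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)
            + k * math.log(p) + (N - k) * math.log(1 - p))

N, p = 10_000, 0.3
peak = math.exp(log_pmf(N, round(p * N), p))
height = 1 / math.sqrt(2 * math.pi * N * p * (1 - p))
print(peak, height)
```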

Exercise 67

The paper The Normal Approximation to the Binomial Distributionarrow-up-right shows a full derivation by employing the hint from the book (see section 2); the shift factor kk is denoted as δ\delta. At the end, we get a formula similar to the one from Theorem 4.6.

circle-info

More formally, this is known as the De Moivre–Laplace theoremarrow-up-right.

Exercise 68 🌟

circle-check

Theorem 4.7 isn’t appropriate here because the mean μ=pN=λN\mu=pN=\lambda \sqrt N isn’t bounded as NN \to \infty. The previous exercise handles the case when pp is fixed. Nonetheless, we can tweak the method from the cited paper to arrive at a different approximation. In other words, the method itself needs to be adapted to a different asymptotic regime.

We list here the recipe for arriving at the final formula:

  1. Our starting point with Stirling's formula is precisely equation (1) in the notes.

  2. The kk in this exercise is analogous to δ\delta in the paper. We set x=Np+δx = Np + \delta. The standard deviation in our new regime is σ=Npq=Np(1p)λN=λN1/4\sigma = \sqrt{Npq}=\sqrt{Np(1-p)} \sim \sqrt{\lambda \sqrt{N}} = \sqrt \lambda N^{1/4}. So, allowing δ=O(N1/4)\delta=O(N^{1/4}) is exactly the same as allowing xx to deviate by a few standard deviations.

  3. The notes expand ln(Np/x) \ln(Np/x) and ln(Nq/(Nx))\ln(Nq/(N-x)) in powers of δ/(Np)\delta/(Np) and δ/(Nq) \delta/(Nq). Since δ/(Np)δ/(Nq)\delta/(Np) \gg \delta/(Nq), the higher-order corrections from the δ/(Nq) \delta/(Nq) expansion are negligible compared to those from the former (only its contribution to the exponent of ee matters). Observe that the largest correction O(δ3/(Np)2)O(\delta^3/(Np)^2) cannot hide pp inside the O-constant, since pp isn’t a constant anymore. At any rate, this boils down to O(δ3/N)O(\delta^3/N).

  4. The square root pre-factor is n2π(np+δ)(nqδ)=12πnpq(1+δnp)1/2(1δnq)1/2\sqrt{\frac{n}{2\pi (np+\delta)(nq-\delta)}} = \frac{1}{\sqrt{2\pi npq}} \left(1 + \frac{\delta}{np}\right)^{-1/2} \left(1 - \frac{\delta}{nq}\right)^{-1/2} Using the Taylor expansion (1+u)1/2112u(1 + u)^{-1/2} \sim 1 - \frac{1}{2}u for small uu, we get that the dominant correction term becomes O(δ/(Np))=O(δ/N)O(\delta/(Np))=O(\delta/\sqrt N).

After incorporating all previously mentioned changes, we obtain (using kk instead of δ\delta)

P{X=μ+k}=12πσek22σ2(1+O(1/N)+O(k/N)+O(k3/N)).P\{X=\mu+k\}=\frac{1}{\sqrt{2\pi} \sigma} \, e^{-\frac{k^2}{2\sigma^2}} \left(1 + O(1/\sqrt N)+ O(k/\sqrt N)+O(k^3/N)\right).

If k=O(σ)k=O(\sigma) then the relative error is O(N1/4)O(N^{-1/4}). If k=O(N1/3)k = O(N^{1/3}) then the error jumps to O(1)O(1), so we must have at least k=o(N1/3)k=o(N^{1/3}). Finally, the baseline error term O(N1/2)O(N^{-1/2}) covers the edge case k=0k=0.

Exercise 69

This is a variation of the previous exercise, so we don’t repeat all the steps again; the methodology is the same. Let μ=Np=λN/lnN\mu = Np = \lambda N / \ln N and σ2=Npq=μ(1p)=μ(1λ/lnN)μ\sigma^2 = Npq = \mu(1-p) = \mu(1 - \lambda/\ln N)\sim \mu. The following asymptotic holds:

P{X=μ+k}=12πσek22σ2(1+O(lnNN)+O(klnNN)+O(k3ln2NN2)).P\{X=\mu+k\}=\frac{1}{\sqrt{2\pi} \sigma} \, e^{-\frac{k^2}{2\sigma^2}} \left(1 + O\left(\frac{\ln N}{N}\right)+ O\left(\frac{k\ln N}{N}\right)+O\left(\frac{k^3\ln^2 N}{N^2}\right)\right).

If k=O(σ)k=O(\sigma) then the relative error is O(lnNN)O\left(\sqrt{\frac{\ln N}{N}}\right). Notice, as in the previous exercise, the explicit baseline error term that covers k=0k=0.

Exercise 70

Let x=k/Nx=k/\sqrt N so that dk=Ndxdk=\sqrt N dx. This gives

1kk0ek2/(2N)O(kN)=N0ex2/2O(xN)dx+O(1)=O(0xex2/2dx=1)+O(1)=O(1).\sum_{1 \le k \le k_0}e^{-k^2/(2N)}O\left(\frac{k}{N}\right)=\sqrt N \int_0^\infty e^{-x^2/2}O\left(\frac{x}{\sqrt N}\right)dx+O(1)=O\left(\underbrace{\int_0^\infty xe^{-x^2/2}dx}_{=1}\right)+O(1)=O(1).
1kk0ek2/(2N)O(k3N2)=N0ex2/2O(x3N)dx+O(1)=O(0x3ex2/2dx=2)+O(1)=O(1).\sum_{1 \le k \le k_0}e^{-k^2/(2N)}O\left(\frac{k^3}{N^2}\right)=\sqrt N \int_0^\infty e^{-x^2/2}O\left(\frac{x^3}{\sqrt N}\right)dx+O(1)=O\left(\underbrace{\int_0^\infty x^3e^{-x^2/2}dx}_{=2}\right)+O(1)=O(1).

The definite integrals can be easily computed in WolframAlpha.

Exercise 71

Let’s follow the hint from the book’s websitearrow-up-right for this exercise. This gives

pk=(Nk)k(Nk)!N!=0j<kNkNj(see Table 4.11)=exp{0j<k(ln(Nk)ln(Nj))}=exp{0j<k(lnN(1k/N)lnN(1j/N))}=exp{0j<k(lnN+ln(1k/N)lnNln(1j/N))}=exp{kln(1k/N)0j<kln(1j/N)}=exp{k(kN+O(k2N2))+0j<k(jN+O(j2N2))}=exp{k2N+O(k3N2)+k(k1)2N+O(k3N2)}=ek2/(2N)(1+O(k/N)+O(k3/N2)).\begin{align*} p_k=\frac{(N−k)^k(N−k)!}{N!} &= \prod_{0 \le j < k} \frac{N-k}{N-j} \qquad \text{(see Table 4.11)} \\ &= \exp\left\{\sum_{0 \le j < k} \bigg(\ln (N-k)-\ln (N-j)\bigg)\right\} \\ &= \exp\left\{\sum_{0 \le j < k} \bigg(\ln N(1-k/N)-\ln N(1-j/N)\bigg)\right\} \\ &= \exp\left\{\sum_{0 \le j < k} \bigg(\ln N+\ln(1-k/N)-\ln N-\ln (1-j/N)\bigg)\right\} \\ &= \exp\left\{k\ln(1-k/N)-\sum_{0 \le j < k} \ln (1-j/N)\right\} \\ &= \exp\left\{k\left(-\frac{k}{N}+O\left(\frac{k^2}{N^2}\right)\right)+\sum_{0 \le j < k} \left(\frac{j}{N}+O\left(\frac{j^2}{N^2}\right)\right)\right\} \\ &= \exp\left\{-\frac{k^2}{N}+O\left(\frac{k^3}{N^2}\right)+ \frac{k(k-1)}{2N}+O\left(\frac{k^3}{N^2}\right)\right\} \\ &= \boxed{e^{−k^2/(2N)}(1+O(k/N)+O(k^3/N^2))}. \end{align*}

Notice that pkp_k is strictly decreasing as kk grows and has the same asymptotic approximation as the Q-distribution. Hence, by the same arguments as in the proof of Theorem 4.8, together with the solution of the previous exercise, we can confirm the validity of the approximation of P(N)P(N).

Exercise 72

The paper On Ramanujan's Q-functionarrow-up-right by Flajolet et al. elaborates in detail the developments around Q- and R- distributions and provides the proof of this exercise.

Exercise 73

Assume NN is even for simplicity (odd NN gives the same asymptotic behavior). Let k=N/2±δk = N/2 ± \delta with δ=o(N2/3)\delta = o(N^{2/3}). We know that most of the probability mass will be concentrated around the mean. We can now apply Theorem 4.6 to get

(Nk)2NπN2e2δ2/N(Nk)323N(πN2)3/2e6δ2/N.\begin{align*} \binom{N}{k} &\sim \frac{2^N}{\sqrt{\frac{\pi N}{2}}} e^{-2\delta^2/N} \\ \binom{N}{k}^3 &\sim \frac{2^{3N}}{(\frac{\pi N}{2})^{3/2}} e^{-6\delta^2/N}. \end{align*}

Here, we are working with the binomial coefficients themselves rather than the probability distribution, so we don’t divide by the normalization factor. By leveraging the Laplace method, as in Theorem 4.8, we can uniformly bundle the diminishing tails (far away from the mean) into a single formula.

k(Nk)323N(πN2)3/2δe6δ2/N23N(πN2)3/2e6δ2/Ndδ=23N(πN2)3/2πN6=23N+13πN.\begin{align*} \sum_{k} \binom{N}{k}^3 &\sim \frac{2^{3N}}{(\frac{\pi N}{2})^{3/2}} \sum_{\delta} e^{-6\delta^2/N} \\ &\sim \frac{2^{3N}}{(\frac{\pi N}{2})^{3/2}} \int_{-\infty}^{\infty} e^{-6\delta^2/N} d\delta \\ &= \frac{2^{3N}}{(\frac{\pi N}{2})^{3/2}} \cdot \sqrt{\frac{\pi N}{6}} \\ &= \frac{2^{3N+1}}{\sqrt{3} \pi N}. \end{align*}
circle-info

For the generalized version of this exercise, refer to the article An Asymptotic Formula for Binomial Sumsarrow-up-right (see the beginning of section 4). The values of the sum for successive NN are known as the Franel numbers.arrow-up-right
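The asymptotic formula can also be checked against the exact (big-integer) sum (a sketch):

```python
from math import comb, pi, sqrt

# Exact Franel-type sum versus the asymptotic 2^(3N+1)/(sqrt(3)*pi*N).
N = 300
exact = sum(comb(N, k) ** 3 for k in range(N + 1))
approx = 2 ** (3 * N + 1) / (sqrt(3) * pi * N)
print(exact / approx)  # the ratio tends to 1 as N grows
```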

Exercise 74

Wikipedia offers a comprehensive summary of the problem, including the required asymptotic estimate, known as telephone numbersarrow-up-right.

Exercise 75 🌟

circle-check

To derive the asymptotic estimate for our sum, we use Stirling's approximation (see Table 4.6) to transform the discrete sum into a continuous integral dominated by a single "peak". This is the main idea behind the Laplace method. We start by dissecting the summand

(Nkk)=(Nk)!k!(N2k)!.\binom{N-k}{k} = \frac{(N-k)!}{k! (N-2k)!}.

Observe that it’s nonzero for 0kN/20 \le k \le N/2, so let k=αNk=\alpha N with 0α1/20 \le \alpha \le 1/2. Substituting the approximation for each factorial and simplifying, we get

(NαNαN)12πN1αα(12α)[(1α)1ααα(12α)12α]N(NαNαN)212πN(1αα(12α))exp(2Nf(α)),\begin{align*} \binom{N-\alpha N}{\alpha N} \sim \frac{1}{\sqrt{2\pi N}} \sqrt{\frac{1-\alpha}{\alpha(1-2\alpha)}} \cdot \left[ \frac{(1-\alpha)^{1-\alpha}}{\alpha^\alpha (1-2\alpha)^{1-2\alpha}} \right]^N \\ \binom{N-\alpha N}{\alpha N}^2 \sim \frac{1}{2\pi N} \left( \frac{1-\alpha}{\alpha(1-2\alpha)} \right) \cdot \exp\left( 2N \cdot f(\alpha) \right), \end{align*}

where

f(α)=(1α)ln(1α)αlnα(12α)ln(12α).f(\alpha) = (1-\alpha)\ln(1-\alpha) - \alpha\ln\alpha - (1-2\alpha)\ln(1-2\alpha).

We can find the maximizer α0\alpha_0 by asking WolframAlpha to maximize f(α)f(\alpha) over 0<α<1/20<\alpha<1/2.

It outputs α0=1/21/(25)\alpha_0=1/2-1/(2\sqrt 5). At this specific point, ef(α0)=ϕe^{f(\alpha_0)} = \phi (the golden ratio); to see this, just raise ee to the maximum value reported by WolframAlpha. The square of this value is ϕ2\phi^2.
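(The WolframAlpha query itself is missing from the page; as a dependency-free stand-in, the sketch below locates the maximum numerically and confirms both reported values.)

```python
import math

def f(a):
    return ((1 - a) * math.log(1 - a) - a * math.log(a)
            - (1 - 2 * a) * math.log(1 - 2 * a))

# Ternary search for the maximizer; valid because f is strictly concave
# on (0, 1/2).
lo, hi = 1e-9, 0.5 - 1e-9
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if f(m1) < f(m2):
        lo = m1
    else:
        hi = m2
a0 = (lo + hi) / 2

phi = (1 + math.sqrt(5)) / 2
print(a0, 0.5 - 1 / (2 * math.sqrt(5)))  # numeric vs closed form
print(math.exp(f(a0)), phi)              # e^f(a0) is the golden ratio
```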

Since the function (Nkk)2\binom{N-k}{k}^2 drops rapidly away from α0\alpha_0, we can approximate the sum using the Euler-Maclaurin formula. We expand 2Nf(α)2N f(\alpha) as a Taylor series around α0\alpha_0

2Nf(α)2Nf(α0)+Nf(α0)(αα0)2.2N f(\alpha) \approx 2N f(\alpha_0) + N f''(\alpha_0)(\alpha - \alpha_0)^2.

Computing the second derivative symbolically with WolframAlpha and evaluating it at α0\alpha_0 gives

f(α0)=1α0(1α0)(12α0)=55.f''(\alpha_0) = -\frac{1}{\alpha_0(1-\alpha_0)(1-2\alpha_0)} = -5\sqrt{5}.

The sum is then approximated by the integral

k(Nkk)212πN1α0α0(12α0)ϕ2Ne55N(αα0)2Ndα=ϕ2N+22πN5. \sum_k \binom{N-k}{k}^2 \sim \frac{1}{2\pi N} \frac{1-\alpha_0}{\alpha_0(1-2\alpha_0)} \phi^{2N} \int_{-\infty}^{\infty} e^{-5\sqrt{5}N (\alpha-\alpha_0)^2} N d\alpha= \frac{\phi^{2N+2}}{2 \sqrt{\pi N \sqrt{5}}}.

In essence, the Laplace method relies on the fact that as NN \to \infty, the integrand becomes so sharply peaked at α0\alpha_0 that the tails become negligible. Therefore, the error introduced by extending the integration limits to ±\pm \infty is exponentially small. For example, at any fixed distance ϵ>0\epsilon>0 from α0\alpha_0 the contribution decreases to only ϕ2NeN(55)ϵ2\phi^{2N} \cdot e^{-N(5\sqrt{5})\epsilon^2} (see also Exercise 10).

Exercise 76

We have a flipped version of Figure 4.7: the first lgN\lg N terms are negligible, whilst the rest contribute around 1. Using the same bounds as in Theorem 4.10, we get

0jN(112j)N=NlgN+O(loglogN).\sum_{0 \le j \le N} \left(1-\frac{1}{2^j}\right)^N = N - \lg N +O(\log \log N).

As noted in Theorem 4.10, the error bound can be tightened all the way down to O(1)O(1).

Exercise 77

The first Nt\sqrt[t]{N} terms are significant, whilst the rest drop very quickly to zero. For large NN, we can approximate this sum with an integral over the whole possible range for jj. This gives

j0(1eN/jt)0(1eN/xt)dx=0(1eu)(1tN1/tu11/t)du=N1/tt0(1eu)u11/tdu.\begin{align*} \sum_{j \ge 0} \left(1-e^{-N/j^t}\right) &\sim \int_{0}^{\infty} \left(1 - e^{-N/x^t}\right) dx \\ &= \int_{\infty}^{0} (1 - e^{-u}) \left( -\frac{1}{t} N^{1/t} u^{-1 - 1/t} \right) du \\ &= \frac{N^{1/t}}{t} \int_{0}^{\infty} (1 - e^{-u}) u^{-1 - 1/t} du. \end{align*}

The second line follows after the substitution u=N/xtu = N/x^t, so that

dx=1tN1/tu11/tdu.dx = -\frac{1}{t} N^{1/t} u^{-1 - 1/t} du.

The last integral can be evaluated using integration by parts. Let α=1/t\alpha = 1/t. Notice that t>1t>1 implies 0<α<10<\alpha<1. Using v=(1eu)v = (1 - e^{-u}) and dw=u1αdudw = u^{-1-\alpha}du gives

0u1α(1eu)du=uαα(1eu)00uααeudu=1αΓ(1α).\int_{0}^{\infty} u^{-1-\alpha} (1 - e^{-u}) du = \frac{u^{-\alpha}}{-\alpha} (1 - e^{-u}) \bigg|_{0}^{\infty} - \int_{0}^{\infty} \frac{u^{-\alpha}}{-\alpha} e^{-u} du= \frac{1}{\alpha}\Gamma(1 - \alpha).

When u0u \to 0 then uα(1eu)uαu=u1α0u^{-\alpha}(1 - e^{-u}) \sim u^{-\alpha} \cdot u = u^{1-\alpha} \to 0. The second integral is the gamma functionarrow-up-right. This gives

j0(1eN/jt)NtΓ(11t). \sum_{j \ge 0} \left(1-e^{-N/j^t}\right) \sim \sqrt[t]N \,\, \Gamma\left(1 - \frac{1}{t}\right).
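A numerical check for t=2t=2 (a sketch), where Γ(11/t)=Γ(1/2)=π\Gamma(1-1/t)=\Gamma(1/2)=\sqrt\pi:

```python
import math

# For t = 2 the sum should approach sqrt(N) * Gamma(1/2) = sqrt(pi * N).
N = 10_000
total = 1.0  # the j = 0 term contributes 1 - e^{-inf} = 1
for j in range(1, 100_000):
    total += 1 - math.exp(-N / j**2)
print(total, math.sqrt(math.pi * N))  # close, up to O(1) boundary effects
```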
