Chapter 4

Exercise 1

\frac{N}{N+1}<1 \text{ for all }N\ge 0 \implies \frac{N}{N+1}=O(1).

Expanding 2^N and N! gives

\lim_{N \to \infty} \frac{2^N}{N!}=\lim_{N \to \infty} \frac{2\times2\times\cdots\times2}{1\times2\times\cdots \times N}=0,

thus 2^N=o(N!).

Clearly, \lim_{N \to \infty} e^{1/N}=e^0=1, so \sqrt[N]{e} \sim 1.
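These limits are easy to corroborate numerically; a quick sanity check (not a proof) in Python:

```python
import math

# 2^N/N! should tend to 0, N/(N+1) stays below 1, and e^(1/N) tends to 1.
for N in (5, 10, 20):
    print(N, 2**N / math.factorial(N), N / (N + 1), math.e**(1 / N))
```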

Exercise 2 🌟

This example demonstrates that g(N)=O(f(N)) can encompass functions that are asymptotically negative. Some references, such as CLRS's Introduction to Algorithms, restrict g(N) to nonnegative functions and use a dedicated notation, O', to lift this restriction within O-notation.

We start by writing N/(N+1)=1+(-1/(N+1)). Let's show that the second summand is O(1/N):

\left|\frac{-1/(N+1)}{1/N}\right|=\frac{N}{N+1}<1 \text{ for all }N\ge 0,

therefore N/(N+1)=1+O(1/N). We also have

\lim_{N \to \infty} \frac{N/(N+1)}{1-1/N}=1,

hence N/(N+1) \sim 1-1/N.

Exercise 3

\lim_{N \to \infty} \frac{N^\alpha}{N^\beta}=\lim_{N \to \infty} \frac{1}{N^{\beta-\alpha}}=0, \qquad \text{when }\alpha<\beta.

Therefore, N^\alpha=o(N^\beta) for \alpha<\beta.

Exercise 4

The O- and o-notations both express upper bounds (with o being the stronger assertion). Consequently, g(N)=o(f(N)) \implies g(N)=O(f(N)). By combining the results from the previous two exercises, for fixed r\ge0, we can conclude that (see also Section 4.3 of the book)

1 \pm k/N = 1+O(1/N) \text{ for } 0 \le k \le r, \qquad (1+O(1/N))^r=1+O(1/N).

This gives

\begin{align*} \binom{N}{r} &= \frac{N(N-1)\cdots(N-r+1)}{r!} \\ &= \frac{N^r}{r!}\prod_{k=0}^{r-1}\Big(1-\frac{k}{N}\Big) \\ &= \frac{N^r}{r!}\Big(1+O\!\Big(\frac{1}{N}\Big)\Big) \\ &\therefore \boxed{\binom{N}{r} = \frac{N^r}{r!}+O(N^{r-1})}. \end{align*}

Similarly, we have

\begin{align*} \binom{N+r}{r} &= \frac{(N+r)(N+r-1)\cdots(N+1)}{r!} \\ &= \frac{N^r}{r!}\prod_{k=1}^{r}\Big(1+\frac{k}{N}\Big) \\ &= \frac{N^r}{r!}\Big(1+O\!\Big(\frac{1}{N}\Big)\Big) \\ &\therefore \boxed{\binom{N+r}{r} = \frac{N^r}{r!}+O(N^{r-1})}. \end{align*}
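Both boxed approximations can be checked numerically: the scaled error (\binom{N}{r}-N^r/r!)/N^{r-1} stays bounded as N grows (it approaches -r(r-1)/(2\,r!), the coefficient of the next term).

```python
from math import comb, factorial

# Numeric sanity check: binom(N, r) - N^r/r! grows like N^(r-1);
# dividing by N^(r-1) approaches the constant -r(r-1)/(2*r!).
r = 3
for N in (10**2, 10**3, 10**4):
    err = comb(N, r) - N**r / factorial(r)
    print(N, err / N**(r - 1))   # tends to -1/2 for r = 3
```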

Alternative Proof

Instead of relying on basic algebraic manipulation of the O-notation, we can view the numerator of \binom{N}{r} as a polynomial P(N)=N(N-1)\cdots(N-r+1). Since the product has r factors, multiplying the N from each factor gives a leading term of N^r. To find the next term (the coefficient of N^{r-1}), we sum the constant terms of the factors:

\text{Sum of constants} = -(0 + 1 + 2 + \dots + (r-1)) = -\frac{(r-1)r}{2}.

Thus, the numerator expands to

P(N) = N^r - \frac{r(r-1)}{2}N^{r-1} + \dots (\text{lower order terms}).

Dividing by r!r! gives

\binom{N}{r} = \frac{N^r}{r!} - \frac{r(r-1)}{2 \cdot r!}N^{r-1} + \dots

The first term is our leading term. All subsequent terms form a polynomial of degree r-1, so their sum is O(N^{r-1}) (see also the previous exercise). Analogous reasoning applies to \binom{N+r}{r}.

Exercise 5

\begin{align*} \lim_{N \to \infty} \frac{\log N}{N^\epsilon} &\overset{L'Hôpital}{=} \lim_{N \to \infty} \frac{1/N}{\epsilon N^{\epsilon - 1}} \\ &= \lim_{N \to \infty} \frac{1}{\epsilon N^\epsilon} = 0 \\ &\therefore \boxed{\log N = o(N^\epsilon)}. \end{align*}

Exercise 6

\begin{align*} &\lim_{N \to \infty} \frac{1}{2+\ln N} = 0 \\ &\therefore \boxed{\frac{1}{2+\ln N} = o(1)}. \end{align*}

The cosine function oscillates between -1 and 1, so as N \to \infty we have 1/(2+\cos N) \in [1/3,1]. It is bounded from above but does not tend to 0. Therefore, it is O(1) but not o(1).

Exercise 7

\begin{align*} \lim_{N \to \infty} \frac{e^{-N^\epsilon}}{N^{-M}} &= \lim_{N \to \infty} \frac{N^M}{e^{N^\epsilon}} \\ &= \lim_{N \to \infty} \frac{e^{\ln N^M}}{e^{N^\epsilon}} \\ &= \lim_{N \to \infty} \frac{e^{M\ln N}}{e^{N^\epsilon}} = 0. \end{align*}

The last line follows from Exercise 5, thus e^{-N^\epsilon} is exponentially small.

Exercise 8

\begin{align*} \lim_{N \to \infty} \frac{e^{-\log^2 N}}{N^{-M}} &= \lim_{N \to \infty} \frac{N^M}{e^{\log^2 N}} \\ &= \lim_{N \to \infty} \frac{e^{\ln N^M}}{e^{\log^2 N}} \\ &= \lim_{N \to \infty} \frac{e^{M\ln N}}{e^{\log^2 N}} = 0. \end{align*}

In the last line, M\ln N = o(\log^2 N): after cancelling one logarithm on both sides, we are left with a constant "against" a logarithm that grows to infinity. Observe that the base of the logarithm on the RHS doesn't matter, as it contributes only a constant multiple of the natural logarithm. Therefore, e^{-\log^2 N} is exponentially small.

\begin{align*} \lim_{N \to \infty} \frac{(\log N)^{-\log N}}{N^{-M}} &= \lim_{N \to \infty} \frac{N^M}{(\log N)^{\log N}} \\ &= \lim_{N \to \infty} \frac{e^{\ln N^M}}{e^{\ln {(\log N)^{\log N}}}} \\ &= \lim_{N \to \infty} \frac{e^{M\ln N}}{e^{\log N (\ln {\log N})}} = 0. \end{align*}

By the same reasoning as above, \ln \log N now grows faster than any constant. Therefore, (\log N)^{-\log N} is also exponentially small.

Exercise 9 🌟

This exercise shows why focusing on the asymptotically most significant term is beneficial. As the calculation shows, it's a reliable estimator of performance for sufficiently large problem sizes.

\begin{align*} \lim_{N \to \infty} \frac{(\alpha/\beta)^N}{N^{-M}} &= \lim_{N \to \infty} \frac{N^M}{(\beta/\alpha)^N} \\ &= \lim_{N \to \infty} \frac{e^{\ln N^M}}{e^{\ln {(\beta/\alpha)^N}}} \\ &= \lim_{N \to \infty} \frac{e^{M\ln N}}{e^{N\ln {(\beta/\alpha)}}} = 0, \end{align*}

so \alpha^N is exponentially small relative to \beta^N (see Exercise 5).

The absolute error is \alpha^N, whilst the relative error is \alpha^N/(\alpha^N+\beta^N). We have

  • For N=10 the absolute error is ≈2.6 and the relative error is ≈29.53%.

  • For N=100 the absolute error is ≈13,781 and the relative error is ≈0.02%.
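A short script reproduces these figures; the values \alpha=1.1 and \beta=1.2 are an assumption inferred from the numbers quoted above.

```python
# Reproduces the error figures above, assuming alpha = 1.1 and beta = 1.2
# (values consistent with the numbers quoted in the exercise).
alpha, beta = 1.1, 1.2
for N in (10, 100):
    abs_err = alpha**N
    rel_err = alpha**N / (alpha**N + beta**N)
    print(N, round(abs_err, 1), f"{100 * rel_err:.2f}%")
```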

Exercise 10


Let f(N) be exponentially small, meaning that for every M > 0, f(N) = o(N^{-M}). For any polynomial g(N)=O(N^d), where d \ge 0 is the degree of the polynomial, we can choose M'=M+d, so that f(N)g(N)=o(N^{-M'})O(N^d)=o(N^{-M}). This concludes the proof.

Exercise 11 🌟

Chapter 2 of the book explains the Master Theorem with a note that floors/ceilings can be ignored without affecting the asymptotic result. The term a_{\lfloor n/2 \rfloor}+O(1) implies that every time we recurse, we introduce a small constant error O(1) due to rounding. This gives

\begin{align*} a_n &= 2a_{n/2} + f(n) \\ &= 2(a_{\lfloor n/2 \rfloor} + O(1)) + f(n) \\ &= 2a_{\lfloor n/2 \rfloor} + \underbrace{O(1)}_{\text{error}} + f(n). \end{align*}

In a binary recursion tree, where every internal node has 2 children, the number of nodes doubles at every level. The sum of this error across the whole tree is proportional to the number of nodes in the tree

\text{Total Error} = \sum_{k=0}^{\lg n} 2^k \cdot O(1) = O(n).

Consequently, a_n=(\text{Ideal Solution}) + O(n), where the ideal solution denotes the case when n is a power of two. We'll derive these ideal solutions and see that the extra O(n) error has no effect on them.

  1. a_n = 2a_{n/2} + O(n) essentially falls under case 2 of the Master Theorem. We have \lg n levels in the recursion tree taking O(n) work per level. Therefore, \boxed{a_n=O(n\log n)}. We cannot use \Theta-notation because the term O(n) allows the driving function to be 0 or even negative (see the next exercise).

  2. a_n = 2a_{n/2} + o(n) can also be handled using the summation approach. We have \lg n levels in the recursion tree taking o(n) work per level. Therefore, \boxed{a_n=o(n\log n)}.

  3. a_n \sim 2a_{n/2} + n means that a_n = 2a_{n/2} + n + o(n). Dividing by n and letting b_n = a_n/n yields b_n = b_{n/2} + 1 + o(1). Iterating this recurrence gives b_n = b_0 + \lg n + o(\lg n), so a_n = n \lg n + o(n \lg n). Therefore, \boxed{a_n \sim n \lg n}.


Handling the base cases demands \Theta(n) time, so for cases 1 and 2 we also have a lower bound a_n=\Omega(n).
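Case 3 can be illustrated empirically: iterating a_n = 2a_{\lfloor n/2 \rfloor} + n (with a_1 = 0, an assumed base case) shows a_n/(n \lg n) approaching 1 even when n is not a power of two.

```python
import math
from functools import lru_cache

# Numeric sanity check for case 3: a_n = 2*a_{floor(n/2)} + n with a_1 = 0
# should satisfy a_n ~ n lg n even when n is not a power of two.
@lru_cache(maxsize=None)
def a(n):
    return 0 if n < 2 else 2 * a(n // 2) + n

for n in (10**3, 10**5, 10**7):
    print(n, a(n) / (n * math.log2(n)))   # approaches 1
```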

Exercise 12

This is a continuation of the previous exercise, where the first case is already covered:

  • a_n = 2a_{n/2} + \Theta(n) entails that \boxed{a_n=\Theta(n\log n)}.

  • a_n = 2a_{n/2} + \Omega(n) entails that \boxed{a_n=\Omega(n\log n)}.

Exercise 13

Both a(x) and b(x) grow like x^\alpha, and the additive perturbation c in the argument of b(x) becomes asymptotically negligible. To see this, we just need to iterate the recurrences until hitting the base cases.

For a(x) the recursion unfolds as

a(x) = f(x) + f(x/\beta) + f(x/\beta^2) + \cdots + f(x/\beta^k),

where x/\beta^k < 1 eventually, so the sum terminates. Since f(x) = x^\alpha, each term is (x/\beta^i)^\alpha, and the sum behaves like a convergent infinite geometric series as x \to \infty

a(x) = x^\alpha \sum_{i=0}^{\log_\beta x} \beta^{-\alpha i} \sim \frac{\beta^\alpha}{\beta^\alpha-1}x^\alpha.

For b(x) the recurrence is slightly perturbed

b(x) = f(x) + f(x/\beta + c) + f(x/\beta^2 + c(1 + 1/\beta)) + \cdots

But asymptotically x/\beta^i + O(1) \sim x/\beta^i, so each term in the sum is still \sim (x/\beta^i)^\alpha, and the same geometric series argument applies.

A generalization of this exercise is part of the Akra-Bazzi method, which should be included in the toolbox of techniques for finding asymptotic approximations of recurrences.

Exercise 14 🌟

According to Exercise 3.17

\frac{f(z)}{g(z)} = \frac{1}{1-2z}.

Thus, \beta=2, \nu=1, g'(1/2)=-2 and f(1/2)=1, so the approximate solution is a_n \sim 2^n as before.

Similarly,

\frac{f(z)}{g(z)}=\frac{z}{(1-2z)^2}.

Thus, \beta=2, \nu=2, g''(1/2)=8 and f(1/2)=1/2, so the approximate solution is a_n \sim n2^{n-1} as before.

Exercise 15 🌟

According to Exercise 3.18

\frac{f(z)}{g(z)}=\frac{z^2}{(1-z)^2(1+z)}.

We have two poles of the same modulus, so we must take the one of highest multiplicity. Thus, \beta=1, \nu=2, g''(1)=4 and f(1)=1, so the approximate solution is a_n \sim n/2 as expected.

Exercise 16 🌟

According to Exercise 3.20

\frac{f(z)}{g(z)}= \frac{z^2}{(1-z)^3}.

Thus, \beta=1, \nu=3, g^{(3)}(1)=-6 and f(1)=1, so the approximate solution is a_n \sim n^2/2 as expected.

Exercise 17

According to the fundamental theorem of algebra, a degree-t polynomial has exactly t roots, counted with multiplicity.

Showing that the Roots are Distinct

We can rewrite P(z)=z^t - z^{t-1} - \dots - z - 1 as

P(z) = z^t - \sum_{k=0}^{t-1} z^k = z^t - \frac{z^t - 1}{z - 1} = \frac{z^{t+1} - 2z^t + 1}{z - 1}=\frac{Q(z)}{z-1}.

Observe that P(1)=1-t\neq 0 for t>1, thus z=1 isn't a root. Consequently, the roots of P(z) are exactly the roots of Q(z), excluding z=1. A polynomial Q(z) has multiple roots only if Q(z) and its derivative Q'(z) share a root.

Q'(z) = (t+1)z^t - 2tz^{t-1} = z^{t-1} [ (t+1)z - 2t ].

The only roots of the derivative are z=0 and z = \frac{2t}{t+1}. Since Q(0)=1, z=0 isn't shared. For the other candidate, we have

\begin{align*} Q\left(\frac{2t}{t+1}\right) &= \left(\frac{2t}{t+1}\right)^{t+1} - 2\left(\frac{2t}{t+1}\right)^t + 1 \\ &= \left(\frac{2t}{t+1}\right)^t \left[ \frac{2t}{t+1} - 2 \right] + 1 \\ &= 1 - \left(\frac{2t}{t+1}\right)^t \frac{2}{t+1}. \end{align*}

For a shared root we would need \left(\frac{2t}{t+1}\right)^t \frac{2}{t+1} = 1. But for all t>1 we have \frac{2t}{t+1} > 1 and this product exceeds 1 (it equals 32/27 already at t=2 and keeps growing), so Q\left(\frac{2t}{t+1}\right) < 0 \neq 0. Therefore, all roots of P(z) are distinct.

Showing that Exactly One Root has Modulus > 1

Because the coefficients of P(z) are real, the single dominant root must be a real number. We can verify that P(1) = 1-t < 0 and P(2) = 1 > 0, hence by the intermediate value theorem, there is a real root in (1,2).

Using Rouché's theorem on the unit disk, we show that the remaining t-1 roots of P(z) lie inside the unit circle; this theorem relates the number of zeros of two functions, f(z) and f(z)+g(z), inside a region. Let's rearrange the terms of Q(z):

\underbrace{ - 2z^t }_{f(z)} +\underbrace{z^{t+1}+ 1}_{g(z)}.

We focus our attention on a circle of radius |z| = 1 + \epsilon (where \epsilon is a very small positive number). The magnitudes of our functions on this contour are:

  • |f(z)| = |-2z^t| = 2(1+\epsilon)^t \approx 2 + 2t\epsilon

  • |g(z)| \le |z|^{t+1} + 1 = (1+\epsilon)^{t+1} + 1 \approx 2 + (t+1)\epsilon

Since t > 1, we have 2t > t+1, thus |f(z)| > |g(z)| for small enough \epsilon. By Rouché's theorem, the polynomial Q(z) = f(z) + g(z) has the same number of roots inside the circle |z| = 1+\epsilon as f(z).

f(z) has a root of multiplicity t at the origin (z=0). Therefore, Q(z) has exactly t roots strictly inside the circle |z| < 1+\epsilon. Consequently, P(z) has exactly t-1 roots inside the same disk; recall that we must exclude z=1, which was counted for Q(z).

Exercise 18

We need to apply the result from the previous exercise. Since the recurrence corresponds to P(z)=z^t - z^{t-1} - \dots - z - 1, we know there is a single root \beta (which approaches 2 as t \to \infty) of highest modulus. According to Theorem 4.1, F_N^{[t]} \sim C\beta^N. Wikipedia describes this variant as the n-acci sequence and provides a precise expression for the leading term.
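A bisection sketch corroborates this numerically: the dominant root is the golden ratio for t=2 (Fibonacci), the tribonacci constant 1.8392\ldots for t=3, and it approaches 2 as t grows.

```python
def dominant_root(t, tol=1e-12):
    """Largest real root of z^t = z^(t-1) + ... + z + 1, found via
    bisection on Q(z) = z^(t+1) - 2 z^t + 1, which shares its roots."""
    lo, hi = 1.5, 2.0          # Q(lo) < 0 < Q(hi) for t >= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid**(t + 1) - 2 * mid**t + 1 < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for t in (2, 3, 4, 10):
    print(t, dominant_root(t))   # 1.6180..., 1.8392..., 1.9275..., -> 2
```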

Exercise 19

According to Exercise 3.55

\frac{f(z)}{g(z)}= \frac{1}{(1-z^{d_1})(1-z^{d_2})\times \dots \times (1-z^{d_t})}.

Thus, \beta=1, \nu=t and f(1)=1. We need to compute g^{(t)}(1).

Each factor can be written as

1 - z^{d_i} = (1 - z)h_i(z), \qquad h_i(z)=z^{d_i-1}+z^{d_i-2}+\dots+1.

Evidently, h_i(1)=d_i and g(z)=(1-z)^t h(z), where h(z)=\prod_{i=1}^t h_i(z). Consequently, h(1)=\prod_{i=1}^t d_i. By the general Leibniz rule we have

g^{(t)}(z) = \sum_{k=0}^{t} \binom{t}{k} \frac{d^k}{dz^k}(1-z)^t \cdot h^{(t-k)}(z).

At z = 1 all terms with k<t vanish because \frac{d^k}{dz^k}(1-z)^t still contains a factor of 1-z. For k=t we have \frac{d^t}{dz^t}(1-z)^t = (-1)^t t!. Thus,

g^{(t)}(1) = \binom{t}{t}(-1)^t \,t! \, h(1) = (-1)^t\,t! \prod_{i=1}^{t} d_i.

By inserting all components into the formula from Theorem 4.1, we get the required identity of this exercise.

Exercise 20

The Python script below generates the extended table. It uses pandas to produce formatted output.
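A minimal sketch of such a script (the exact set of functions tabulated here is an assumption):

```python
import math
import pandas as pd

# Tabulate the growth of a few standard functions, in the spirit of
# Table 4.1, and let pandas handle the formatting.
funcs = {
    "lg N": lambda n: math.log2(n),
    "sqrt N": lambda n: math.sqrt(n),
    "N": lambda n: n,
    "N lg N": lambda n: n * math.log2(n),
    "N^2": lambda n: n * n,
}
rows = [{"N": n, **{name: f(n) for name, f in funcs.items()}}
        for n in (10, 100, 1000, 10**4, 10**5, 10**6)]
print(pd.DataFrame(rows).to_string(index=False, float_format="%.0f"))
```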

The code outputs the following table:

Exercise 21 🌟

Tool support is useful here. For example, we can type in WolframAlpha the following command

It produces the requested expansion

-x + \dfrac{1}{2}x^{2} + \dfrac{2}{3}x^{3} + O(x^{4}).

Exercise 22

\begin{align*} \ln(N^\alpha+N^\beta) &= \ln N^\alpha + \ln \left(1+\underbrace{\frac{1}{N^{\alpha-\beta}}}_{x \to 0}\right) \\ &= \alpha\ln N + \frac{1}{N^{\alpha-\beta}}-\frac{1}{2N^{2(\alpha-\beta)}}+\frac{1}{3N^{3(\alpha-\beta)}}+O\left(\frac{1}{N^{4(\alpha-\beta)}}\right). \end{align*}

Exercise 23 🌟

Let's start with a Taylor expansion of the expression in terms of x=1/(N-1)

\begin{align*} \frac{N}{N-1}\ln \frac{N}{N-1} &= \left(1+\underbrace{\frac{1}{N-1}}_{x}\right)\,\ln \left(1+\frac{1}{N-1}\right) \\ &= \frac{1}{N-1}+\frac{1}{2(N-1)^2}-\frac{1}{6(N-1)^3}+O\left(\frac{1}{(N-1)^4}\right), \end{align*}

using (1+x)(x-x^2/2+x^3/3+O(x^4))=x+x^2/2-x^3/6+O(x^4).

A more streamlined version is stated in terms of 1/N, since N-1 \sim N. Taking x=1/N in the geometric series gives 1/(N-1)=1/N+1/N^2+1/N^3+O(N^{-4}). Replacing each power of 1/(N-1) in the series above with this expansion gives

\boxed{\frac{N}{N-1}\ln \frac{N}{N-1} = \frac{1}{N}+\frac{3}{2N^2}+\frac{11}{6N^3}+O\left(\frac{1}{N^4}\right)}.

Exercise 24

Let x=0.1, so \ln 0.9=\ln (1-0.1). By using Table 4.2 and expanding each function to within O(x^5), we get

2+2x+x^2/2+x^3/2+x^4/3+O(x^5).

We should be careful when summing the terms coming from the logarithm, due to the subtraction and the negative signs. After substituting x into the above expression, we find that the result is 2.20553, which is accurate to within 10^{-4}.

Exercise 25

Observe that 9801=99^2=(100-1)^2, so

\frac{1}{9801}=\frac{1}{(100-1)^2}=\frac{1}{10^4(1-0.01)^2}.

Table 4.2 has an expansion of the geometric series. By differentiating both sides of that identity, we get

\frac{1}{(1-x)^2} = \sum_{k=1}^{\infty} k x^{k-1}.

Now, we substitute xx into our derived series

\begin{align*} \frac{1}{9801} &= \frac{1}{10^4} \times \left[ 1 + 2(0.01) + 3(0.01)^2 + 4(0.01)^3 + \dots \right] \\ &= 0.00|01|02|03|04|\dots|47|48|49|50|\dots \end{align*}

Vertical bars are used to delineate slots, each of which is populated with successive natural numbers. Each slot accommodates two digits, enabling this sequence to progress up to the number 97. In other words, we can predict digits in this manner within 100^{-196}. The pattern breaks at the slot with 98, as illustrated below:

The carry from the 100th term impacts the slots containing 98 and 99.

We can generalize this to generate sequences of integers padded to nn digits

\frac{1}{(10^n - 1)^2}=\frac{1}{10^{2n}(1 - 10^{-n})^2}.

For example, 1/81=0.012345679\dots (8 is missing due to the carry-over from 9).
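The decimal module makes it easy to inspect these digit patterns directly:

```python
from decimal import Decimal, getcontext

# Inspect the two-digit slots of 1/9801 with high-precision arithmetic.
getcontext().prec = 210
digits = str(Decimal(1) / Decimal(9801))
print(digits[:30])            # slots 00, 01, 02, ... of two digits each
print("47484950" in digits)   # the slots 47, 48, 49, 50 appear in order
```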

Exercise 26

If we apply the nonasymptotic version of Stirling's formula to \binom{2N}{N}, we get

\binom{2N}{N} =\frac{(2N)!}{(N!)^2} =\frac{4^N}{\sqrt{\pi N}}\;\frac{1+\dfrac{\theta_{2N}}{24N}}{\bigg(1+\dfrac{\theta_N}{12N}\bigg)^2}.

Our equation becomes

\underbrace{\frac{N4^{N-1}}{\binom{2N}{N}}}_{R_N} =\underbrace{\frac{N\sqrt{\pi N}}{4}}_{A_N}\;\underbrace{\frac{\bigg(1+\dfrac{\theta_N}{12N}\bigg)^2}{1+\dfrac{\theta_{2N}}{24N}}}_{E_N},

where R_N, A_N, E_N represent the reference, approximate and error terms, respectively.

Since 0<\theta_N<1 and 0<\theta_{2N}<1, we have the following bounds

1 < \big(1+\tfrac{\theta_N}{12N}\big)^2 < \Big(1+\tfrac{1}{12N}\Big)^2, \qquad 1 < 1+\tfrac{\theta_{2N}}{24N} < 1+\tfrac{1}{24N}.

This gives

A_N\cdot\frac{1}{1+\dfrac{1}{24N}} <R_N < A_N\cdot\Bigg(1+\dfrac{1}{12N}\Bigg)^2.

Let the relative error \delta_N be defined such that R_N = A_N(1 + \delta_N). This gives

\begin{align*} \frac{1}{1 + \frac{1}{24N}} &< 1 + \delta_N < \left(1 + \frac{1}{12N}\right)^2 \\ \frac{1}{1 + \frac{1}{24N}} - 1 &< \delta_N < \left(1 + \frac{1}{12N}\right)^2 - 1 \\ -\frac{1}{24N+1} &< \delta_N < \frac{1}{6N} + \frac{1}{144N^2}. \end{align*}

We can assert that \delta_N=O(1/N).
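Numerically, \delta_N N indeed stays bounded; it settles near 1/8, well within the bounds above.

```python
from math import comb, pi, sqrt

# Numeric sanity check: the relative error of approximating
# N * 4^(N-1) / binom(2N, N) by (N/4) * sqrt(pi * N) is O(1/N).
for N in (10, 100, 1000):
    reference = N * 4**(N - 1) / comb(2 * N, N)
    approx = N * sqrt(pi * N) / 4
    delta = reference / approx - 1
    print(N, delta * N)   # stays bounded, roughly 1/8
```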

Exercise 27

According to Table 4.4, H_{1000}=7.4854709.

The specific bounds implied by the absolute formula are

7.4749709 \le \hat H_{1000} \le 7.4949709

The specific bounds implied by the relative formula are

0 \le \hat H_{1000} \le 16.9077553

The LHS is truncated to zero, as a negative lower bound is meaningless. The RHS is huge, since the assumed constant is too large. In a relative formula, we expect h(N)=o(1) to decay rapidly even for small problem sizes.


Recall that O(h(N)) with a constant |C|<10 covers the range \pm C h(N).

Exercise 28

The exact value is T_{10}=16,796.

The specific bounds implied by the relative formula are

0 \le \hat T_{10} \le 37,416.

Exercise 29

Let's split the sum at a threshold value M>0

f(N)=\sum_{k\ge 0} a_k N^{-k} =\sum_{0\le k< M} a_k N^{-k}+\underbrace{\sum_{k\ge M} a_k N^{-k}}_{R_M(N)}.

Since f(N) converges, R_M(N) converges too. We can rearrange the remainder term as

R_M(N) = \sum_{j\ge 0} a_{M+j}N^{-(M+j)} = N^{-M}\underbrace{\sum_{j\ge 0} a_{M+j}N^{-j}}_{C}= C\cdot N^{-M}.

Hence, R_M(N)=O(N^{-M}), which concludes the proof.

Exercise 30

It's not hard to see from Table 3.3 that the terms of the sum can be represented with an EGF

A(z)=\frac{1}{1-\cfrac{z}{N}}=\frac{N}{N-z}=\sum_{k\ge0} \frac{k!}{N^k} \frac{z^k}{k!} = \sum_{k\ge0}\left(\frac{z}{N}\right)^k.

This gives

\begin{align*} S(N) &=\sum_{k\ge0} \frac{k!}{N^k} \\ &= \sum_{k\ge0} \frac{1}{N^k} \underbrace{\int_0^\infty e^{-z} z^k \, dz}_{k!=\Gamma(k+1)} \\ &= \int_0^\infty e^{-z} \sum_{k\ge0} \left(\frac{z}{N}\right)^k \, dz \\ &= \int_0^\infty e^{-z}\frac{N}{N - z} \, dz \\ &= N e^{-N} \operatorname{Ei}(N). \end{align*}

The derivation uses the gamma function and the exponential integral to craft the "closed-form" solution. The last line follows after the substitution u=z-N.

Of course, we can take f(N)=S(N) or construct a function that can be approximated using a few terms of S(N). Let's use the expansion of the geometric series from Table 4.2 with x=1/N. This gives

g(x)=\frac{1}{1-x}+\frac{x^2}{1-x}+\frac{4x^3}{1-x}=1+x+2x^2+6x^3+O(x^4), \qquad f(N)=g(1/N).

Exercise 31

There is a complementary \omega-notation that reflects the opposite relationship to the o-notation: it denotes a strict lower bound (grows strictly faster than). We use it in our list below:

  • e^N=O(N^2) is false, since \lim_{N \to \infty}\frac{e^N}{N^2}=\infty. We can say that e^N=\omega(N^2).

  • e^N=O(2^N) is false (see Exercise 9). We can say that e^N=\omega(2^N).

  • 2^{-N} is exponentially small, so 2^{-N}=O(1/N^{10}) is true (see Exercise 7).

  • N^{\ln N}=O(e^{\ln^2 N}) is true, since N^{\ln N}=e^{\ln {N^{\ln N}}}=e^{\ln^2 N}.

Exercise 32

The Taylor expansion of e^x in terms of x=1/(N+1) is (see Table 4.2)

e^{\frac{1}{N+1}}=1+\frac{1}{N+1}+\frac{1}{2(N+1)^2}+O\bigg(\frac{1}{(N+1)^3}\bigg).

We can provide a more streamlined version in terms of 1/N (see also Exercise 23). The book shows the expansion 1/(N+1)=1/N-1/N^2+O(N^{-3}). Replacing each power of 1/(N+1) above with this expansion gives

\boxed{e^{\frac{1}{N+1}} = 1 + \frac{1}{N} - \frac{1}{2N^2} + O\left(N^{-3}\right)}.
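A numeric check of the boxed expansion: the scaled error N^3\,(e^{1/(N+1)} - (1+1/N-1/(2N^2))) should settle near the next coefficient, 1/6.

```python
import math

# Numeric sanity check: the error of 1 + 1/N - 1/(2N^2) as an
# approximation of e^(1/(N+1)) shrinks like N^(-3).
for N in (10, 100, 1000):
    err = math.exp(1 / (N + 1)) - (1 + 1/N - 1/(2 * N**2))
    print(N, err * N**3)   # approaches 1/6
```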

Exercise 33

We can expand H_N to more terms using Table 4.6. The first approximation, to within O(1/N), is

\begin{align*} (H_N)^2 &= \bigg(\ln N+\gamma+\frac{1}{2N}+O(N^{-2}) \bigg) \bigg(\ln N+\gamma+\frac{1}{2N}+O(N^{-2}) \bigg) \\ &= \ln^2 N+2\gamma\ln N+\gamma^2+\frac{\ln N}{N}+O(1/N). \end{align*}

The second approximation, to within o(1/N), simply includes the \gamma/N term that was previously absorbed into O(1/N). Thus,

(H_N)^2 = \ln^2 N+2\gamma\ln N+\gamma^2+\frac{\ln N}{N}+\frac{\gamma}{N}+o(1/N).
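We can verify that the remaining error is genuinely o(1/N) by computing N\,((H_N)^2 - \text{approximation}); it tends to 0, at rate roughly \log N/N.

```python
import math

# Numeric sanity check of the o(1/N) expansion of H_N^2.
gamma = 0.5772156649015329   # Euler-Mascheroni constant

def H(n):
    return sum(1 / k for k in range(1, n + 1))

for N in (100, 1000):
    approx = (math.log(N)**2 + 2 * gamma * math.log(N) + gamma**2
              + math.log(N) / N + gamma / N)
    err = H(N)**2 - approx
    print(N, err * N)   # tends to 0, i.e. the error is o(1/N)
```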

Exercise 34

\begin{align*} \cot x &= \frac{\cos x}{\sin x} = \frac{1-x^2/2+x^4/24+O(x^6)}{x-x^3/6+x^5/120+O(x^7)} \\ &= (1-x^2/2+x^4/24+O(x^6))\,\frac{1}{x-x^3/6+x^5/120+O(x^7)} \\ &= \frac{1-x^2/2+x^4/24+O(x^6)}{x}\,\frac{1}{1-x^2/6+x^4/120+O(x^6)} \\ &= \frac{1-x^2/2+x^4/24+O(x^6)}{x}\,(1+x^2/6-x^4/120+x^4/36+O(x^6)) \\ &= \frac{1-x^2/3-x^4/45+O(x^6)}{x} \\ &= 1/x-x/3-x^3/45+O(x^5). \end{align*}

Notice that O(x^5) is a sharper remainder than O(x^4); we should always seek the tightest bounds. In the derivation, we used the first three terms of the geometric series, instead of just two as in the example from the book.
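As a numeric check, (\cot x - (1/x - x/3 - x^3/45))/x^5 should approach the next series coefficient, -2/945.

```python
import math

# Numeric sanity check: cot x - (1/x - x/3 - x^3/45) is O(x^5);
# the next term of the series is -2 x^5 / 945.
for x in (0.1, 0.05):
    cot = math.cos(x) / math.sin(x)
    err = cot - (1/x - x/3 - x**3/45)
    print(x, err / x**5)   # approaches -2/945, about -0.00212
```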

Exercise 35

\begin{align*} \frac{x}{e^x-1} &= -\frac{x}{1-e^x} \\ &= -\frac{x}{1-(1+x+x^2/2+x^3/6+x^4/24+x^5/120+O(x^6))} \\ &= \frac{1}{1+x/2+x^2/6+x^3/24+x^4/120+O(x^5)} \\ &= 1-x/2-x^2/6-x^3/24-x^4/120+(-x/2-x^2/6-x^3/24)^2+(-x/2-x^2/6)^3+(-x/2)^4+O(x^5) \\ &= 1-x/2+x^2/12-x^4/720+O(x^5). \end{align*}

Exercise 36

A full treatment of this topic is available in the paper Asymptotic Expansions of Central Binomial Coefficients and Catalan Numbers.

Exercise 37 🌟

\begin{align*} \frac{1}{N+1}\binom{3N}{N} &= \exp\bigg\{\ln ((3N)!)-\ln (N!)-\ln ((2N)!)-\ln(N+1)\bigg\} \\ &= \exp\bigg\{\left(3N+\frac{1}{2}\right)\ln (3N)-3N+\ln \sqrt{2\pi}+O(1/N) \\ &\qquad -\left(N+\frac{1}{2}\right)\ln N+N-\ln \sqrt{2\pi}+O(1/N) \\ &\qquad -\left(2N+\frac{1}{2}\right)\ln (2N)+2N-\ln \sqrt{2\pi}+O(1/N) \\ &\qquad -\ln N+O(1/N)\bigg\} \\ &= \exp\bigg\{\left(3N+\frac{1}{2}\right)\ln 3-\left(2N+\frac{1}{2}\right)\ln 2-\frac{3}{2}\ln N-\ln \sqrt{2\pi}+O(1/N)\bigg\} \\ &= \frac{27^N}{4^N}\cdot\frac{\sqrt 3}{2\sqrt{\pi} N^{3/2}}\left(1+O(1/N)\right). \end{align*}

There is an important technical detail not explicitly mentioned in the book: e^{O(1/N)}=1+O(1/N) (see Table 4.2).

Exercise 38

This is a simpler variant of the previous exercise.

\begin{align*} \frac{(3N)!}{(N!)^3} &= \exp\bigg\{\ln ((3N)!)-3\ln (N!)\bigg\} \\ &= \exp\bigg\{\left(3N+\frac{1}{2}\right)\ln (3N)-3N+\ln \sqrt{2\pi}+O(1/N) \\ &\qquad -3\left(N+\frac{1}{2}\right)\ln N+3N-3\ln \sqrt{2\pi}+O(1/N)\bigg\} \\ &= \exp\bigg\{\left(3N+\frac{1}{2}\right)\ln 3-\ln N-2\ln \sqrt{2\pi}+O(1/N)\bigg\} \\ &= 27^N\frac{\sqrt 3}{2\pi N}\left(1+O(1/N)\right). \end{align*}

Exercise 39 🌟

\begin{align*} \left(1-\frac{\lambda}{N}\right)^N &= \exp\bigg\{N\ln \left(1-\frac{\lambda}{N}\right)\bigg\} \\ &= \exp\bigg\{N\left(-\frac{\lambda}{N}+O(1/N^2)\right)\bigg\} \\ &= \exp\bigg\{-\lambda+O(1/N)\bigg\} \\ &= e^{-\lambda}+O(1/N). \end{align*}

Based on the asymptotic expansion, the approximate value of the expression is eλe^{-\lambda} for large NN.

Exercise 40

\begin{align*} \left(1-\frac{\ln N}{N}\right)^N &= \exp\bigg\{N\ln \left(1-\frac{\ln N}{N}\right)\bigg\} \\ &= \exp\bigg\{N\left(-\frac{\ln N}{N}-\frac{\ln^2 N}{2N^2}-\frac{\ln^3 N}{3N^3}+O(\ln^4 N/N^4)\right)\bigg\} \\ &= \exp\bigg\{-\ln N-\frac{\ln^2 N}{2N}-\frac{\ln^3 N}{3N^2}+O(\ln^4 N/N^3)\bigg\} \\ &= \frac{1}{N}\exp\bigg\{-\frac{\ln^2 N}{2N}-\frac{\ln^3 N}{3N^2}\bigg\}(1+O(\ln^4 N/N^3)) \\ &= \frac{1}{N}-\frac{\ln^2 N}{2N^2}+\frac{\ln^4 N}{8N^3}+o(\ln^4 N/N^3). \end{align*}

The exponential in the penultimate line must be expanded as 1+x+x^2/2, where x is its argument. We retain the three asymptotically largest terms.

Exercise 41

When interest is compounded daily at a 10% annual rate on a $10,000 principal, the amount after 365 days is 10{,}000\,(1+0.1/365)^{365} dollars. Instead of computing this value directly, let's apply the result from Exercise 39. Here, we have \lambda=-0.1, so the total amount increases to roughly 10{,}000\,e^{0.1} \approx \$11{,}051.7. This is about $51.7 more than what would be paid if interest were paid once a year.

circle-info

The exact formula gives about $51.6, roughly $0.15 less than our quick estimate. This showcases the advantages of using asymptotic approximations.
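The quick calculation is easy to reproduce:

```python
import math

# Daily compounding of 10% on $10,000 versus the e^0.1 shortcut
# from Exercise 39 (lambda = -0.1, N = 365).
principal, rate, days = 10_000, 0.10, 365
exact = principal * (1 + rate / days) ** days
quick = principal * math.exp(rate)
print(round(exact - principal * (1 + rate), 2))   # gain over yearly interest
print(round(quick - principal * (1 + rate), 2))   # quick estimate of the gain
```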

Exercise 42

\begin{align*} \exp\bigg\{1+\frac{1}{N}+O(1/N^2)\bigg\} &= e \cdot \exp\bigg\{\frac{1}{N}+O(1/N^2)\bigg\} \\ &= e\left(1+\frac{1}{N}+O(1/N^2)\right) \\ &= e+\frac{e}{N}+O(1/N^2). \end{align*}

Exercise 43

For large N, using Stirling's approximation for N! and the expansion of the geometric series, we get

\frac{1}{N!} = \frac{1}{\sqrt{2\pi N}} \left(\frac{e}{N}\right)^N \left(1 - \frac{1}{12N} + O\left(\frac{1}{N^2}\right)\right).

Nonetheless, substituting this into the sine function would send us down a rabbit hole. Since \sin x = x(1+O(x^2)) and (1/N!)^2 = O(1/N^2), we can use 1/N! directly in further expansions.

\begin{align*} \ln\left(\sin\left(\frac{1}{N!}\right)\right) &= \ln\left(\frac{1}{N!} \left(1+O\left(\frac{1}{N^2}\right)\right)\right) \\ &= \ln\left(\frac{1}{N!}\right) + \ln\left(1+O\left(\frac{1}{N^2}\right)\right) \\ &= -\ln (N!) + O\left(\frac{1}{N^2}\right) \\ &= -N \ln N + N - \ln \sqrt N - \ln \sqrt{2\pi} - \frac{1}{12N} + O\left(\frac{1}{N^2}\right). \end{align*}

Exercise 44

\begin{align*} \sin(\tan(1/N)) &= \sin\left(\frac{1}{N}+\frac{1}{3N^3}+O\left(\frac{1}{N^5}\right)\right) \\ &= \frac{1}{N}+\frac{1}{3N^3}-\frac{1}{6N^3}+O\left(\frac{1}{N^5}\right) \\ &= \frac{1}{N}+\frac{1}{6N^3}+O\left(\frac{1}{N^5}\right) \sim \frac{1}{N}. \end{align*}

\begin{align*} \tan(\sin(1/N)) &= \tan\left(\frac{1}{N}-\frac{1}{6N^3}+O\left(\frac{1}{N^5}\right)\right) \\ &= \frac{1}{N}-\frac{1}{6N^3}+\frac{1}{3N^3}+O\left(\frac{1}{N^5}\right) \\ &= \frac{1}{N}+\frac{1}{6N^3}+O\left(\frac{1}{N^5}\right) \sim \frac{1}{N}. \end{align*}

At this point, we could say that the order of growth of sin(tan(1/N))tan(sin(1/N))\sin(\tan(1/N)) –\tan(\sin(1/N)) is O(N5)O(N^{-5}), but this would be a loose bound. Since the first two terms match, we must expand until we find a difference. Let x=1/Nx=1/N and leverage WolframAlpha to perform the hard work.

series sin(tan(x)) to order 7=x+16x3140x5551008x7+O(x9)series tan(sin(x)) to order 7=x+16x3140x51075040x7+O(x9).\begin{align*} \text{series sin(tan(x)) to order 7} &= x + \frac{1}{6}x^3 - \frac{1}{40}x^5 - \frac{55}{1008}x^7 + O(x^9) \\ \text{series tan(sin(x)) to order 7} &= x + \frac{1}{6}x^3 - \frac{1}{40}x^5 - \frac{107}{5040}x^7 + O(x^9). \end{align*}

Indeed, we have a mismatch at the fourth term: the coefficients differ by 55/1008+107/5040=1/30-55/1008 + 107/5040 = -1/30, so sin(tan(1/N))tan(sin(1/N))=130N7+O(N9)\sin(\tan(1/N)) - \tan(\sin(1/N)) = -\frac{1}{30N^7}+O(N^{-9}), and its order of growth is O(N7)O(N^{-7}).
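The leading coefficient of the difference, (-55/1008 + 107/5040)x⁷ = -x⁷/30, is easy to confirm numerically (a sketch; `diff` is our name):

```python
import math

def diff(x: float) -> float:
    return math.sin(math.tan(x)) - math.tan(math.sin(x))

# Leading term of the difference: (-55/1008 + 107/5040) x^7 = -x^7/30.
for x in (0.1, 0.05):
    print(x, diff(x) / (-x ** 7 / 30))  # ratio tends to 1 as x -> 0
```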

Exercise 45

TN=4NπN3(1+O(1/N))HN=lnN+γ+O(1/N).\begin{align*} T_N &= \frac{4^N}{\sqrt {\pi N^3}}(1+O(1/N)) \\ H_N &= \ln N+\gamma+O(1/N). \end{align*}

By substituting N=TNN=T_N into the expansion of HNH_N, we get

HTN=2Nln2lnπN3+γ+O(1/N).H_{T_N} = 2N\ln 2-\ln \sqrt {\pi N^3}+\gamma+O(1/N).

Exercise 46

This exercise is about the asymptotic expansion of the Lambert W function. The cited article contains the full expansion in terms of the substitutions L1=lnnL_1=\ln n and L2=lnL1=lnlnnL_2=\ln L_1=\ln\ln n. We sketch here the steps leading to that expression using those symbols.

We derive the basic equation marked with an asterisk

n=aneanlnn=lnan+anan=L1lnan(*).\begin{align*} n &= a_n e^{a_n} \\ \ln n &= \ln a_n + a_n \\ a_n &= L_1 - \ln a_n \qquad \text{(*)}. \end{align*}

The first iteration of bootstrapping gives anL1a_n \approx L_1. The second iteration gives anL1L2a_n \approx L_1 - L_2. The third iteration gives

anL1ln(L1L2)=L1ln(L1(1L2L1))=L1L2ln(1L2L1)L1L2+L2L1.\begin{align*} a_n &\approx L_1 - \ln(L_1 - L_2) \\ &= L_1 - \ln \left( L_1 \left( 1 - \frac{L_2}{L_1} \right) \right) \\ &= L_1 - L_2 - \ln \left( 1 - \frac{L_2}{L_1} \right) \\ &\approx L_1 - L_2 + \frac{L_2}{L_1}. \end{align*}

By carrying out one more iteration, we get the equation from the cited article to within the requested accuracy. For the sake of completeness, here is the identity without using symbols L1L_1 and L2L_2

an=lnnlnlnn+lnlnnlnn+(lnlnn)22lnlnn2(lnn)2+O(1(logn)3).\boxed{a_n = \ln n - \ln \ln n + \frac{\ln \ln n}{\ln n} + \frac{(\ln \ln n)^2 - 2\ln \ln n}{2(\ln n)^2} + O\left( \frac{1}{(\log n)^3} \right)}.
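We can sanity-check the boxed expansion against a direct fixed-point iteration of (*) (a sketch; n = 10⁸ and the iteration count are arbitrary choices of ours):

```python
import math

n = 1.0e8            # arbitrary
L1 = math.log(n)     # ln n
L2 = math.log(L1)    # ln ln n

# Solve a = L1 - ln(a), i.e. equation (*), by straightforward iteration.
a = L1
for _ in range(100):
    a = L1 - math.log(a)

# The boxed four-term expansion.
approx = L1 - L2 + L2 / L1 + (L2 ** 2 - 2 * L2) / (2 * L1 ** 2)

print(a, approx)  # agree to ~1e-3
```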

Exercise 47 🌟

We need to provide the reversion of the power series y=c0+c1x+c2x2+c3x3+O(x4)y = c_0 + c_1x + c_2x^2 + c_3x^3 + O(x^4). By applying the hint from the book, i.e., setting z=(yc0)/c1z = (y-c_0)/c_1 and ck=ck/c1c'_k = c_k/c_1, this equation transforms into z=x+c2x2+c3x3+O(x4)z = x + c'_2x^2 + c'_3x^3 + O(x^4). We already know the reversion of this form from the book:

x=zc2z2+(2c22c3)z3+O(z4).x=z-c'_2z^2+(2{c'_2}^2-c'_3)z^3+O(z^4).

Now, we just need to express it again in terms of yy and the original coefficients

x=yc0c1c2c13(yc0)2+2c22c1c3c15(yc0)3+O((yc0)4).\boxed{x=\frac{y-c_0}{c_1} - \frac{c_2}{c_1^3}(y-c_0)^2 + \frac{2c_2^2 - c_1 c_3}{c_1^5}(y-c_0)^3 + O((y-c_0)^4)}.

In series reversion, the expansion for xx is in powers of the small quantity yc0y - c_0, not yy itself (unless c0=0c_0 = 0). Therefore, the error term should be written as O((yc0)4)O((y-c_0)^4). Using O(y4)O(y^4) is incorrect when c00c_0 \neq 0 because yy doesn’t tend to zero.
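A numerical spot check of the boxed formula with arbitrarily chosen coefficients (the cubic is truncated exactly, so the only error left is the O((y − c₀)⁴) term):

```python
c0, c1, c2, c3 = 1.0, 2.0, 0.5, -0.3   # arbitrary coefficients, c1 != 0

def series(x: float) -> float:
    """y = c0 + c1 x + c2 x^2 + c3 x^3, truncated exactly."""
    return c0 + c1 * x + c2 * x ** 2 + c3 * x ** 3

def reverted(y: float) -> float:
    """The boxed reversion formula."""
    t = y - c0
    return (t / c1
            - (c2 / c1 ** 3) * t ** 2
            + ((2 * c2 ** 2 - c1 * c3) / c1 ** 5) * t ** 3)

x = 0.01
print(reverted(series(x)))  # ~0.01, off only by the O((y - c0)^4) term
```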

Exercise 48

\sum_{k\ge 1} 1/k^2=\pi^2/6, which is known as the Basel problem. According to the direct comparison test, our sum \sum_{k\ge 1} 1/(k^2H_k) converges, too. To estimate the tail RNR_N, write Hk=lnk+δkH_k=\ln k+\delta_k, where δk=γ+O(1/k)\delta_k=\gamma+O(1/k) (see Table 4.6) stays bounded as kk grows, so only the lnk\ln k term matters asymptotically. This gives

RN=k>N1k2Hk=k>N1k2(lnk+δk)k>N1k2lnkN1x2lnxdx.R_N=\sum_{k>N} \frac{1}{k^2H_k}=\sum_{k>N} \frac{1}{k^2(\ln k+\delta_k)}\sim\sum_{k>N} \frac{1}{k^2\ln k} \approx \int_N^\infty \frac{1}{x^2 \ln x} \, dx.

We can estimate RNR_N by using integration by parts to find the leading asymptotic term. Let u=(lnx)1u = (\ln x)^{-1} and dv=x2dxdv = x^{-2} dx. Then du=x1(lnx)2dxdu = -x^{-1}(\ln x)^{-2} dx and v=x1v = -x^{-1}.

N1x2lnxdx=1xlnxNN1x2(lnx)2dx=1NlnNN1x2(lnx)2dxRN=1NlnN+O(1N(logN)2).\begin{align*} \int_N^\infty \frac{1}{x^2 \ln x} dx &= -\frac{1}{x \ln x} \bigg|_N^\infty - \int_N^\infty \frac{1}{x^2 (\ln x)^2} dx \\ &= \frac{1}{N \ln N} - \int_N^\infty \frac{1}{x^2 (\ln x)^2} dx \\ &\therefore \boxed{R_N=\frac{1}{N \ln N} + O\left(\frac{1}{N (\log N)^2}\right)}. \end{align*}

We can conclude that

k=1N1k2Hk=C1NlnN+O(1N(logN)2).\sum_{k=1}^{N} \frac{1}{k^{2} H_{k}} =C - \frac{1}{N \ln N} + O\left(\frac{1}{N (\log N)^2}\right).

What’s the purpose of all this? Without our asymptotic estimate, if we wanted to find the value of CC, we would just sum the first NN terms. Because the truncation error is 1NlnN\approx \frac{1}{N \ln N}, the series converges excruciatingly slowly. But now we can rearrange the equation to compute CC differently

C=k=1N1k2Hk+1NlnN+O(1N(logN)2).C =\sum_{k=1}^N \frac{1}{k^2 H_k} + \frac{1}{N \ln N} + O\left(\frac{1}{N (\log N)^2}\right).

The new error drops much faster, so we both save time and gain accuracy. In spirit, this technique of boosting numerical computations resembles Richardson extrapolation. There are many other methods for speeding up the convergence of series.
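A small experiment contrasting the raw partial sums with the tail-corrected estimates of CC (a sketch; the names are ours):

```python
import math

def estimates(N: int):
    """Partial sum of 1/(k^2 H_k) and its tail-corrected version."""
    H = 0.0
    s = 0.0
    for k in range(1, N + 1):
        H += 1 / k                  # harmonic number H_k
        s += 1 / (k * k * H)
    return s, s + 1 / (N * math.log(N))

naive_1k, corrected_1k = estimates(1_000)
naive_2k, corrected_2k = estimates(2_000)

# The corrected estimates of C agree far better than the raw partial sums.
print(naive_2k - naive_1k, corrected_2k - corrected_1k)
```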

Exercise 49
