```python
def fib_recursive(n):
    return fib_recursive(n - 1) + fib_recursive(n - 2) if n > 1 else n

def fib_iterative(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```
The recursive version needlessly recomputes the same values and has an exponential running time. The iterative version eliminates this deficiency. Performance aside, both correctly return $F_{20}=6765$.
Exercise 2.2
The total number of arithmetic operations (additions, subtractions and divisions), including loop increments and index calculations, is
It uses only $5(N_{\max}-1)$ arithmetic operations to compute the same value as Program 2.1.
Exercise 2.4
A non-recursive program would require $\Theta(N)$ time to compute values using recurrence (2) and $\Theta(N^2)$ for recurrence (3), as shown in Exercise 2.2.
A recursive program would also require $\Theta(N)$ time to compute values using recurrence (2). Despite an exponentially growing number of subproblems, their sizes also drop exponentially. On the other hand, recursively calculating values via recurrence (3), as mentioned in the book, has unacceptable performance. By instrumenting the code, we get that the number of function calls is $3^{N-1}$. This can be formally proven by induction on $N$, using the recurrence $r_N = 1 + 2\sum_{k=0}^{N-1} r_k$ (with $r_0=0$) as the inductive hypothesis. Therefore, the number of arithmetic operations is $\Theta(3^N)$.
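The instrumentation is easy to reproduce. Here's a minimal sketch, assuming recurrence (3) has the quicksort shape $C_N = N+1+\frac{1}{N}\sum_{1\le j\le N}(C_{j-1}+C_{N-j})$; the helper name `eval_c` is mine:

```python
def eval_c(N, counter):
    """Naively evaluate C_N = N + 1 + (1/N) * sum_{1<=j<=N} (C_{j-1} + C_{N-j})
    without memoization, counting every function call along the way."""
    if N == 0:
        return 0.0  # base case, not counted (r_0 = 0)
    counter[0] += 1
    total = sum(eval_c(j - 1, counter) + eval_c(N - j, counter)
                for j in range(1, N + 1))
    return N + 1 + total / N

for N in range(1, 8):
    counter = [0]
    eval_c(N, counter)
    print(N, counter[0])  # call counts 1, 3, 9, 27, ... = 3^(N-1)
```

Each call for size $N$ triggers two recursive calls for every smaller size, which is exactly the counting recurrence $r_N = 1 + 2\sum_{k=0}^{N-1} r_k$.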
Classifying variants with tight asymptotic bounds is convenient here. For example, we don't need more details to understand that $\Theta(3^N)$ is prohibitive.
Exercise 2.5
We can't directly implement the recurrence for the radix exchange sort using the technique demonstrated in Program 2.1. To calculate $C_N$ we would already need to know $C_N$, since it appears on both sides of the equation. Let's transform it
For efficiently calculating the binomial coefficients, the program employs memoization. Here’s the output:
Analysis of Table 2.5
We'd expect that the average number of compares approaches the following asymptotic approximations for sufficiently large $N$:
Standard quicksort: $\sim 2N\ln N \approx 1.386\,N\lg N$
Median-of-three quicksort: $\sim (12/7)N\ln N \approx 1.188\,N\lg N$
Radix exchange sort: $\sim N\lg N$
Nonetheless, for smaller values of N we can’t ignore lower-order terms, as the data in Table 2.5 shows. This exercise serves as a perfect example of why relying only on the leading asymptotic term can lead to the wrong conclusion for practical input sizes. The book highlights this fact many times. Gradual asymptotic expansion allows fine-tuning the model to reflect the desired accuracy in relation to input size ranges.
Exercise 2.6
Denote by $U_n$ and $V_n$ solutions to the homogeneous recurrences $u_n$ and $v_n$, respectively, where $f(x,y)=x+y$.

$$u_n=f(u_{n-1},u_{n-2}) \text{ for } n>1 \text{ with } u_0=1 \text{ and } u_1=0,\qquad v_n=f(v_{n-1},v_{n-2}) \text{ for } n>1 \text{ with } v_0=0 \text{ and } v_1=1.$$

$a_n=pU_n+qV_n$, where $U_n=F_{n-1}$ and $V_n=F_n$, thus $a_n=pF_{n-1}+qF_n$ for $n>1$. The reason that $U_n=F_{n-1}$ is that it starts as $1,0,1,1,\dots$, hence behaves like a right-shifted Fibonacci sequence.
Exercise 2.7
The homogeneous part remains the same as in Exercise 2.6. The constant term $r$ "obeys" the same Fibonacci-like growth pattern, with the difference that in each iteration we also add one additional instance of $r$. Imagine always starting a new Fibonacci sequence and adding it to the existing count. We get $a_n=pF_{n-1}+qF_n+r\sum_{k=1}^{n-1}F_k$ for $n>1$. Chapter 3 of the book shows how to attain a closed-form solution for a sum of Fibonacci numbers.
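The formula is easy to check numerically. A quick sketch, iterating $a_n = a_{n-1}+a_{n-2}+r$ with arbitrarily chosen $p$, $q$, $r$:

```python
from itertools import islice

def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

F = list(islice(fib(), 30))  # F[0..29]

p, q, r = 2, 3, 5  # arbitrary values for a_0 = p, a_1 = q and the constant r

# Direct iteration of a_n = a_{n-1} + a_{n-2} + r.
a = [p, q]
for n in range(2, 20):
    a.append(a[n - 1] + a[n - 2] + r)

# Closed form a_n = p*F_{n-1} + q*F_n + r*sum_{k=1}^{n-1} F_k.
for n in range(2, 20):
    assert a[n] == p * F[n - 1] + q * F[n] + r * sum(F[1:n])
print("formula verified")
```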
Exercise 2.8 🌟
For $f$ linear, we can express the solution to the recurrence $a_n=f(a_{n-1},a_{n-2})$ for $n>1$ in terms of $a_0$, $a_1$, $f(0,0)$, $U_n$ and $V_n$, where $U_n$ and $V_n$ are solutions to the homogeneous recurrences $u_n$ and $v_n$, respectively:

$$u_n=f(u_{n-1},u_{n-2})-f(0,0) \text{ for } n>1 \text{ with } u_0=1 \text{ and } u_1=0,\qquad v_n=f(v_{n-1},v_{n-2})-f(0,0) \text{ for } n>1 \text{ with } v_0=0 \text{ and } v_1=1.$$
This is a generalization of the previous two exercises.
This is a generalization of Theorem 2.1 of writing down the solution to $a_n=x_na_{n-1}+y_n$ for $n>t$ in terms of the $x$'s, the $y$'s and the initial value $a_t$.
Following through the same steps as in the proof of Theorem 2.1, and stopping at $a_t$, we get

$$a_n=y_n+a_t\,x_nx_{n-1}\cdots x_{t+1}+\sum_{t+1\le j<n}y_j\,x_{j+1}x_{j+2}\cdots x_n \quad\text{for } n>t.$$
As a sanity check, try shifting the recurrence in Exercise 2.13 and compute values as described here. For example, use $t=2$ and $a_t=2$ to get $a_7=9/2$ in both ways, via the original recurrence and this synthesized formula. When $x_n=n/(n+1)$ then $x_nx_{n-1}\cdots x_{t+1}=(t+1)/(n+1)$.
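The sanity check can be scripted. A sketch using exact rationals, assuming the Exercise 2.13 recurrence reads $a_n=\frac{n}{n+1}a_{n-1}+1$ (i.e., $y_n=1$):

```python
from fractions import Fraction

def x(n): return Fraction(n, n + 1)
def y(n): return Fraction(1)

t, a_t = 2, Fraction(2)

def direct(n):
    """Iterate a_n = x_n * a_{n-1} + y_n starting from a_t."""
    a = a_t
    for k in range(t + 1, n + 1):
        a = x(k) * a + y(k)
    return a

def formula(n):
    """Synthesized formula: a_n = y_n + a_t*(x_n...x_{t+1}) + sum y_j*(x_{j+1}...x_n)."""
    def prod(lo, hi):
        p = Fraction(1)
        for i in range(lo, hi + 1):
            p *= x(i)
        return p
    return y(n) + a_t * prod(t + 1, n) + sum(y(j) * prod(j + 1, n) for j in range(t + 1, n))

print(direct(7), formula(7))  # both give 9/2
```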
Notice that the recurrence in section 2.2 for $C_N$ isn't the same as in section 1.5 (the initial conditions are different). Consequently, the solution based on Theorem 2.1 is exactly as given in the book. The reported erratum to change $-1$ to $-3/2$ is incorrect, since it assumes the recurrence from section 1.5.
Exercise 2.16 🌟
This exercise perfectly illustrates the generalized Theorem 2.1 (see Exercise 2.14).
We need to solve the recurrence
$$na_n=(n-4)a_{n-1}+12nH_n \quad\text{for } n>4 \text{ and } a_n=0 \text{ for all } n\le4.$$
After dividing both sides by n, we get a first-order linear recurrence
$$a_n=\frac{n-4}{n}a_{n-1}+12H_n \quad\text{for } n>4 \text{ and } a_n=0 \text{ for all } n\le4.$$
We can now use the generalized Theorem 2.1 with $t=4$. The reduced summation factor $s_{j+1}=x_{j+1}x_{j+2}\cdots x_n$ equals
This exercise highlights an important detail not explicitly mentioned in the text: the summation factor must be nonzero, i.e., every one of its constituent factors must be nonzero.
Exercise 2.17

The correct problem statement is published on the book's website. After simplifying the expression for $A_N$, we get
$$A_N=\frac{N-6}{N}A_{N-1}+2 \quad\text{for } N>1 \text{ and } A_1=1.$$
Before proceeding, we must resolve the base cases for $N\le6$, ensuring that the summation factor is nonzero. Using the original recurrence for $A_N$ we get $A_2=0$, $A_3=2$, $A_4=1$, $A_5=9/5$, $A_6=2$. Following the same steps from the previous exercise (see also Table 2.2 from the book), we get
$$A_N=\frac{2}{7}(N+1) \quad\text{for } N>6.$$
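Both the base cases and the closed form are easy to confirm with exact arithmetic; a quick sketch:

```python
from fractions import Fraction

def A(N, cache={1: Fraction(1)}):
    """A_N = ((N-6)/N) * A_{N-1} + 2 for N > 1 with A_1 = 1."""
    if N not in cache:
        cache[N] = Fraction(N - 6, N) * A(N - 1) + 2
    return cache[N]

# Base cases listed above: 0, 2, 1, 9/5, 2.
print([A(N) for N in range(2, 7)])

# Closed form for N > 6.
for N in range(7, 50):
    assert A(N) == Fraction(2, 7) * (N + 1)
print("closed form verified")
```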
Exercise 2.18
From the signed error term $b_n=a_n-\alpha$, we have $a_{n-1}=b_{n-1}+\alpha$. We can substitute it back into the main recurrence
At any rate, $|b_n|\approx0.4\,|b_{n-1}|$, which explains why each iteration increases the number of significant digits available by a constant amount (about half a decimal digit, since $\log_{10}(1/0.4)\approx0.4$).
Exercise 2.19 🌟
The solution is known as the Dottie number and revolves around the concept of a fixed point. We must show that all conditions are satisfied for applying the Banach fixed point theorem; it then guarantees that iterating $\cos(x)$ converges to a unique fixed point for any $a_0\in[0,1]$.
Let $f(x)=\cos(x)$ and $x\in[0,1]$. We know that $f:[0,1]\mapsto[0,1]$ and is differentiable on this whole interval. Thus, by the mean value theorem, for any distinct $x,y\in[0,1]$, there's a point $c\in(x,y)$ such that $f(x)-f(y)=f'(c)(x-y)$. Consequently, we can always express the distance between $f(x)$ and $f(y)$ in terms of the distance between $x$ and $y$. If $|f'(c)|<1$ for all $c\in(0,1)$, then $f$ has a Lipschitz constant less than 1, therefore $f$ is a contraction mapping on $[0,1]$. This is indeed the case, since $|f'(x)|=\sin(x)$ on this interval and $\sin(1)<1$ (sine is strictly increasing on $[0,1]$, see Exercise 2.22).
We can also represent the continued fraction as $f(x)=1/(1+x)$ for $x\in[0,1]$. It's easy to show that all conditions hold for applying the Banach theorem.
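Both fixed point iterations can be observed directly; a small sketch (the starting points are arbitrary values in $[0,1]$):

```python
import math

def iterate(f, x0, n=100):
    x = x0
    for _ in range(n):
        x = f(x)
    return x

dottie = iterate(math.cos, 0.0)  # the unique fixed point of cos on [0, 1]
print(dottie)                    # ~0.739085...

golden = iterate(lambda x: 1 / (1 + x), 1.0)
print(golden)                    # ~0.618033..., i.e. 1/phi
```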
Exercise 2.20
Suppose, for the sake of contradiction, that $a_n=0.5(a_{n-1}-1/a_{n-1})$ for $n>0$ with $a_0\neq0$ converges to a fixed point. Since $a_0$ is a real number, $a_1=0.5(a_0-1/a_0)$ is also a real number. By induction on $n$, we can conclude that all $a_n$ are real numbers for $n>0$, so the limit would have to be a real fixed point. On the other hand, the fixed points $\pm\sqrt{-1}=\pm i$ aren't real numbers. Therefore, we have reached a contradiction, so $\{a_n\}_{n>0}$ diverges.
Recall the cotangent double-angle identity $\cot(2\theta)=\frac{1}{2}\left(\cot\theta-\frac{1}{\cot\theta}\right)$. This exactly matches our recurrence relation. If $a_0=\cot(\theta_0)$, then $a_1=\cot(2\theta_0)$. Therefore, $a_n=\cot(2^n\theta_0)$ and the sequence will fluctuate chaotically, governed by the doubling of the angle in the cotangent function. It'll either terminate or not. If $a_n$ becomes zero, then the next iteration fails due to division by zero. For example, $a_0=\cot(\pi/4)=1$ will terminate the sequence in the second iteration, since $a_1=0$. It turns out that the sequence terminates if and only if $\theta_0$ is a dyadic rational multiple of $\pi$. That is
$$\theta_0=\frac{k\pi}{2^m},$$
where k and m are integers. Both directions are trivial to prove.
Exercise 2.21

We prove the lower bound in a similar fashion as the book does for the upper bound
$$\begin{aligned}
\frac{1}{a_n}&=\frac{1}{a_{n-1}}\cdot\frac{1}{1-a_{n-1}}=\frac{1}{a_{n-1}}+\frac{1}{1-a_{n-1}} &&\text{(for } n>0\text{)}\\
&<\frac{1}{a_{n-1}}+2 &&(1-a_{n-1}>1/2 \text{ for sufficiently large } n)\\
&\;\;\vdots\\
&<2(n+1) &&\text{(telescoping sum)}\\
&\le3n &&\text{(for all } n>1\text{)}\\
\therefore\; a_n&\ge\frac{1}{3}\cdot\frac{1}{n}=\Omega(1/n).
\end{aligned}$$
Both asymptotic bounds match, hence $a_n=\Theta(1/n)$. Observe that $\{a_n\}_{n>0}$ is strictly decreasing, since the factor $1-a_{n-1}<1$ in every iteration. Thus, $a_n<a_{n-1}$ because $a_{n-1}$ is multiplied by a factor less than one.
After computing initial terms, it seems that $a_n\sim c/n$, where $c=1$. Let's prove that $\lim_{n\to\infty}na_n=\lim_{n\to\infty}\frac{n}{1/a_n}=1$, which matches the $\infty/\infty$ case of the Stolz–Cesàro theorem. It states that if the limit of the differences $\frac{n-(n-1)}{1/a_n-1/a_{n-1}}$ exists, then $\lim_{n\to\infty}\frac{n}{1/a_n}$ is equal to that same limit. The numerator is 1, and the denominator is $1/(1-a_{n-1})$ (see the derivation of the lower bound). Thus,
$$\lim_{n\to\infty}\frac{1}{1/(1-a_{n-1})}=\lim_{n\to\infty}(1-a_{n-1})=1.$$
Since the limit of the differences is 1, the limit of the original sequence is also 1.
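The limit is visible numerically as well; a quick sketch with an arbitrary starting value in $(0,1)$:

```python
a = 0.5  # any starting value in (0, 1)
n = 0
while n < 10**6:
    a = a * (1 - a)
    n += 1
print(n * a)  # approaches 1 from below
```

The convergence is slow (the error decays like $(\log n)/n$), which is why a large $n$ is needed to get close to 1.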
Exercise 2.22
First, we prove that $\lim_{n\to\infty}a_n=0$. For $x\in(0,1]$ we have that $0<\sin(x)<x$; the geometric proof relies on the definition of radian. Consequently, $a_n$ is a strictly decreasing sequence bounded from below by zero. According to the monotone convergence theorem, it converges to the infimum, which is zero.
Since $a_n\to0$, we can use the Taylor series for $\sin(x)$ around $x=0$:
$$\sin(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots=x-\frac{x^3}{6}+O(x^5).$$
We can substitute this expansion into the recurrence $a_n=\sin(a_{n-1})$ to get
$$a_n\approx a_{n-1}-\frac{a_{n-1}^3}{6}=a_{n-1}\left(1-\frac{a_{n-1}^2}{6}\right).$$
If we square both sides of the equation, then we can use the approximation $(1-x)^2\approx1-2x$ for small $x$. Thus,
$$a_n^2\approx a_{n-1}^2\left(1-\frac{a_{n-1}^2}{6}\right)^2\approx a_{n-1}^2\left(1-\frac{a_{n-1}^2}{3}\right).$$
Following the hint from the book, let's flip the above equation, i.e., set $b_n=1/a_n^2$, to get
$$\begin{aligned}
b_n&\approx\frac{b_{n-1}}{1-a_{n-1}^2/3}\approx b_{n-1}\left(1+\frac{a_{n-1}^2}{3}\right) &&\left(\tfrac{1}{1-x}\approx1+x \text{ for small } x\right)\\
&=b_{n-1}+\frac{1}{3}=\cdots=b_0+\frac{n}{3} &&\text{(telescoping sum)}.
\end{aligned}$$
Therefore, $b_n=O(n)\implies a_n=O(1/\sqrt{n})$.
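In fact, the derivation suggests the sharper estimate $a_n\sim\sqrt{3/n}$, which is easy to observe numerically:

```python
import math

a = 1.0        # arbitrary starting value in (0, 1]
n = 10**5
for _ in range(n):
    a = math.sin(a)
print(a * math.sqrt(n / 3))  # tends to 1, i.e. a_n ~ sqrt(3/n)
```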
Exercise 2.23
If $f'(\alpha)>1$ then $\alpha$ is a repelling fixed point. The sequence $a_n=f(a_{n-1})$, starting at any point $a_0$ near $\alpha$ (but not exactly $\alpha$), will diverge from the fixed point $\alpha$. To develop an intuition why, let's use the familiar error term from the book $b_n=a_n-\alpha$ and monitor how it changes in each iteration.
$$\begin{aligned}
a_n&=f(a_{n-1})=f(\alpha+b_{n-1})\\
\alpha+b_n&\approx f(\alpha)+f'(\alpha)\,b_{n-1} &&\text{(by Taylor expansion near } \alpha\text{)}\\
\therefore\;|b_n|&\approx f'(\alpha)\cdot|b_{n-1}| &&(\alpha=f(\alpha)).
\end{aligned}$$
Since $f'(\alpha)>1$ the magnitude of the error term grows instead of shrinks. In general, what happens in later iterations highly depends on the global properties of the function. The point is, whenever the error term becomes small again, the above approximation for $b_n$ becomes relevant and the error will grow again.
Exercise 2.24 🌟
This exercise states sufficient criteria corresponding to the three cases above for local convergence (when a0 is sufficiently close to α) and quantifies the speed of convergence in terms of f′(α) and f″(α).
The analysis relies on the approximation of the error term bn from the previous exercise and the precondition stated in the book that a0 is sufficiently close to α. The three cases are:
If the fixed point α is attracting then local convergence is guaranteed. The error drops by a constant factor ∣f′(α)∣ at each step, hence the number of significant digits increases by a fixed amount in each iteration.
If the fixed point $\alpha$ is super-attracting, we have guaranteed and fast local convergence. Here, the error term must be described more precisely using a higher-order derivative, assuming $f$ is at least twice continuously differentiable: $|b_n|\approx\frac{|f''(\alpha)|}{2}\,b_{n-1}^2$. Each iteration approximately doubles the number of significant digits available.
If the fixed point $\alpha$ is neutral then convergence isn't guaranteed. The criterion is more complicated, as described in the excerpt about nonhyperbolic fixed points.
If the fixed point α is repelling then convergence isn’t possible.
The lecture notes by Burbulla nicely recap the basic theory around fixed points with lots of examples. They contain a case study of a unilaterally attracting/repelling neutral fixed point.
Exercise 2.25
The book mentions that restructuring the terms of a recurrence corresponds to factoring the polynomial whose degree equals the order of the recurrence. There are two worked-out examples:
For $a_n=2^n-1=2^n-1^n$ the "characteristic polynomial" is $(1-2x)(1-x)$ in factored form.
For $a_n=3^n-2^n$ the "characteristic polynomial" is $(1-3x)(1-2x)$ in factored form.
In our case, we need to work with a cubic polynomial
$$(1-4x)(1-3x)(1-2x)=1-9x+26x^2-24x^3.$$
The term $x^d$ is associated with $a_{n-d}$. Therefore, we can immediately construct our recurrence
$$a_n=9a_{n-1}-26a_{n-2}+24a_{n-3} \quad\text{for } n>2.$$
The initial values are
$$\begin{aligned}
a_0&=4^0-3^0+2^0=1\\
a_1&=4^1-3^1+2^1=3\\
a_2&=4^2-3^2+2^2=11.
\end{aligned}$$
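A one-liner confirms that $a_n=4^n-3^n+2^n$ satisfies the constructed recurrence:

```python
a = lambda n: 4**n - 3**n + 2**n

for n in range(3, 30):
    assert a(n) == 9 * a(n - 1) - 26 * a(n - 2) + 24 * a(n - 3)
print(a(0), a(1), a(2))  # 1 3 11
```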
Exercise 2.26 🌟
This exercise explains how to solve an inhomogeneous recurrence of the form
$$a_n=x_1a_{n-1}+x_2a_{n-2}+\cdots+x_ta_{n-t}+r \quad\text{for } n\ge t.$$
The main idea is the same as in Exercise 2.8. The notes for recitation 12, from the course 6.042/18.062J Mathematics for Computer Science (2005), summarize the methodology of solving inhomogeneous linear recurrences. They contain additional examples for each recurrence type.
The paper Some Techniques for Solving Recurrences by GL is a superb tutorial on solving recurrences with advanced material on systematic search for particular solutions of inhomogeneous linear recurrences. The text covers many recurrence types that occur in practical applications.
Exercise 2.27
We know that $a_n=c_03^n+c_12^n$, as stated in the book. Furthermore, we have
$$\begin{aligned}
a_0&=c_0+c_1\\
a_1&=3c_0+2c_1.
\end{aligned}$$
We need to work backwards to find initial conditions resulting in $c_0=0$ and $c_1=1$. We immediately get $a_0=1$ and $a_1=2$.
There are no initial conditions for which the solution is $a_n=2^n-1$. Irrespective of boundary conditions, $a_n$ must satisfy the recurrence, which isn't the case here.
Exercise 2.28
The exercise from the book mentions $a_n=2a_{n-1}-a_{n-2}+2a_{n-3}$ for $n>2$. The characteristic equation has two complex roots, so the general solution is $a_n=c_02^n+c_1i^n+c_2(-i)^n$. However, there is no way to set initial conditions for $a_n$ to be constant, except trivially zero, which is only a special case. So, we assume that this exercise is about the recurrence given in the main text.
By tweaking initial conditions, we can fundamentally change the growth rate of the solution to
$$a_n=2a_{n-1}+a_{n-2}-2a_{n-3} \quad\text{for } n>2.$$
We use the same approach as in the previous exercise. The characteristic roots are $1$, $-1$ and $2$, so the general solution is $a_n=c_0+c_1(-1)^n+c_22^n$. Here are the initial conditions leading to each desired growth rate of $a_n$:
Constant is achieved when $a_0=a_1=a_2=c_0$. For example, $a_0=a_1=a_2=2$ makes $a_n=2$.
Exponential is achieved when $a_0=c_2$, $a_1=2c_2$ and $a_2=4c_2$. For example, $a_0=1$, $a_1=2$ and $a_2=4$ makes $a_n=2^n$.
Fluctuating in sign is achieved when $a_0=c_1$, $a_1=-c_1$ and $a_2=c_1$. For example, $a_0=1$, $a_1=-1$ and $a_2=1$ makes $a_n=(-1)^n$.
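All three regimes are quickly confirmed by iterating the recurrence:

```python
def run(a0, a1, a2, n=12):
    """Iterate a_n = 2a_{n-1} + a_{n-2} - 2a_{n-3}."""
    a = [a0, a1, a2]
    for k in range(3, n):
        a.append(2 * a[k - 1] + a[k - 2] - 2 * a[k - 3])
    return a

assert run(2, 2, 2) == [2] * 12                       # constant
assert run(1, 2, 4) == [2**k for k in range(12)]      # exponential
assert run(1, -1, 1) == [(-1)**k for k in range(12)]  # fluctuating in sign
print("all three regimes verified")
```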
Exercise 2.29
The characteristic equation is $z^2-2z-4=0$ whose roots are $z_1=1+\sqrt5$ and $z_2=1-\sqrt5$. Therefore, the general solution is $a_n=c_0(1+\sqrt5)^n+c_1(1-\sqrt5)^n$ for $n>1$. We also have the following system of linear equations:
$$\begin{aligned}
a_0&=1=c_0+c_1\\
a_1&=2=c_0(1+\sqrt5)+c_1(1-\sqrt5).
\end{aligned}$$
The constants are $c_0=\frac{\sqrt5+1}{2\sqrt5}$ and $c_1=\frac{\sqrt5-1}{2\sqrt5}$.
Exercise 2.30
The characteristic equation is $z^2-2z+1=0$ whose double root is $z_1=1$. Therefore, the general solution is $a_n=c_0+c_1n$ for $n>1$. We also have the following system of linear equations:
$$\begin{aligned}
a_0&=0=c_0\\
a_1&=1=c_0+c_1.
\end{aligned}$$
The constants are $c_0=0$ and $c_1=1$.
If we change the initial conditions, then we get another system of linear equations:
$$\begin{aligned}
a_0&=1=c_0\\
a_1&=1=c_0+c_1.
\end{aligned}$$
The constants are $c_0=1$ and $c_1=0$.
Exercise 2.31 🌟
This exercise illustrates how to handle complex roots and transform them into trigonometric form.
We need to solve the recurrence $a_n=a_{n-1}-a_{n-2}$ for $n>1$ with $a_0=0$ and $a_1=1$. The characteristic equation is $z^2-z+1=0$ whose complex roots are $z_1=(1+\sqrt3 i)/2$ and $z_2=(1-\sqrt3 i)/2$. We could continue as usual, but the expression for $a_n$ would be awkward. Let's use Euler's formula to convert complex numbers into trigonometric form. The roots can be represented as $e^{\pm i\pi/3}=\cos(\pi/3)\pm i\sin(\pi/3)$. Thanks to Euler's formula, exponentiation becomes trivial. The general solution is
$$a_n=c_0e^{in\pi/3}+c_1e^{-in\pi/3}=c_0'\cos\left(\frac{n\pi}{3}\right)+c_1'\sin\left(\frac{n\pi}{3}\right),$$
where $c_0'=c_0+c_1$ and $c_1'=i(c_0-c_1)$.
Since the sequence $a_n$ is real (it starts with real numbers $a_0,a_1$ and has real coefficients), the constants $c_0'$ and $c_1'$ must be real, too. Consequently, the constants $c_0$ and $c_1$ must be complex conjugates. We also have the following system of linear equations:
$$\begin{aligned}
a_0&=0=c_0'\\
a_1&=1=c_0'\cos(\pi/3)+c_1'\sin(\pi/3).
\end{aligned}$$
The constants are $c_0'=0$ and $c_1'=2/\sqrt3$, hence $a_n=\frac{2}{\sqrt3}\sin\left(\frac{n\pi}{3}\right)$.
This was an amazing journey from a seemingly straightforward second-order linear recurrence to the trigonometric solution via complex algebra.
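As a quick numerical check of the trigonometric solution (assuming the constants work out to $a_n=\frac{2}{\sqrt3}\sin(n\pi/3)$, as derived above):

```python
import math

a = [0, 1]  # a_0 = 0, a_1 = 1
for n in range(2, 25):
    a.append(a[n - 1] - a[n - 2])

# Closed form: a_n = (2/sqrt(3)) * sin(n*pi/3).
for n in range(25):
    closed = 2 / math.sqrt(3) * math.sin(n * math.pi / 3)
    assert abs(a[n] - closed) < 1e-9

print(a[:8])  # [0, 1, 1, 0, -1, -1, 0, 1] -- the sequence has period 6
```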
Exercise 2.32
We have to solve the recurrence $2a_n=3a_{n-1}-3a_{n-2}+a_{n-3}$ for $n>2$ with $a_0=0$, $a_1=1$ and $a_2=2$. The characteristic equation is $2z^3-3z^2+3z-1=0$. Let's first search for a rational root. If we can find one, then by dividing the original polynomial by the corresponding factor, we get a quadratic equation. The latter is then directly solvable. The rational root theorem enumerates all rational candidates. It turns out that $1/2$ is a root, thus $(2z-1)$ is a factor.
Using a classical polynomial division algorithm, we get that the characteristic equation in factored form is
$$(z^2-z+1)(2z-1)=0.$$
The other two complex roots are $z_{2,3}=\frac{1\pm\sqrt3 i}{2}$. Using the same transformation as in the previous exercise, we can write down the general solution as
$$a_n=c_0\left(\frac{1}{2}\right)^n+c_1\cos\left(\frac{n\pi}{3}\right)+c_2\sin\left(\frac{n\pi}{3}\right).$$
We also have the following system of linear equations:
$$\begin{aligned}
a_0&=0=c_0+c_1\\
a_1&=1=\frac{c_0}{2}+\frac{c_1}{2}+\frac{\sqrt3}{2}c_2\\
a_2&=2=\frac{c_0}{4}-\frac{c_1}{2}+\frac{\sqrt3}{2}c_2.
\end{aligned}$$
The constants are $c_0=4/3$, $c_1=-4/3$ and $c_2=2/\sqrt3$.
Exercise 2.33

Let's split the problem into two subproblems based on the parity of $n>1$
$$a_n=\begin{cases}2a_{n-2}&\text{if } n \text{ is even,}\\[2pt] \frac{1}{2}a_{n-2}&\text{if } n \text{ is odd.}\end{cases}$$
We can combine both cases into a single recurrence $a_n=2^{(-1)^n}a_{n-2}$ for $n>1$. The initial conditions are $a_0=a_1=1$.
Exercise 2.34
The sequence is known as the Tribonacci numbers. The characteristic equation is $z^3-z^2-z-1=0$, whose single real root is the Tribonacci constant $\alpha\approx1.839286755$. The other two complex roots $\beta$ and $\gamma$ have magnitudes less than one, hence their contributions tend to zero for large $N$. Therefore, the approximate general solution is
$$F_N^{(3)}\approx c_0\alpha^N.$$
Tools are useful in finding roots of the characteristic polynomial. For example, issuing x^3-x^2-x-1=0 in WolframAlpha gives three solutions (one real and two complex). Switching to trigonometric form reveals the magnitudes of complex roots.
Finding the Unknown Constant c0
To find $c_0$ we need to use the exact general solution
$$F_N^{(3)}=c_0\alpha^N+c_1\beta^N+c_2\gamma^N.$$
Let’s solve for c0 by running the next Sage script:
It outputs
$$c_0=\frac{1}{(\alpha-\beta)(\alpha-\gamma)}.$$
The characteristic polynomial is $P(z)=z^3-z^2-z-1=(z-\alpha)(z-\beta)(z-\gamma)$. Let's take the derivative of $P(z)$ on both sides and evaluate it at $\alpha$. On the left we have $P'(\alpha)=3\alpha^2-2\alpha-1$, while on the right only the term $(\alpha-\beta)(\alpha-\gamma)$ survives (every other summand contains the factor $\alpha-\alpha=0$). Therefore, $c_0=1/P'(\alpha)$.
Exercise 2.35

The initial conditions are $b_1=b_0=1$, thus $b_n=F_{n+1}$. Therefore,
$$a_n=\frac{F_{n+1}}{n!}.$$
Exercise 2.36
The following Python script implements the algorithm for checking the validity of a monomial; it covers both types of monomials (pure and mixed). The running time is $\Theta(p\lg p+q\lg q)$.
In the expansion of $a_n$, for $n>1$, we have $F_{n-1}-\frac{1-(-1)^n}{2}$ mixed monomials (consisting of both symbols $s_i$ and $t_j$). The total number of such monomials, across all iterations, is
In the derivation, we’ve used the expression for the sum of Fibonacci numbers. For example, when n=6 this formula evaluates to 12, which matches the number of mixed monomials listed in the book.
If we count all monomials, including initial values, then the formula for their total number is $F_{n+2}-1$.
Exercise 2.37
Solving bn proceeds along the same steps as in Exercise 2.29 and won’t be repeated here. The exact solution is
$$b_n=\frac{2}{3}\left(1-\left(-\frac{1}{2}\right)^n\right) \quad\text{for } n>1.$$
Therefore,
$$a_n=2^{\frac{2}{3}\left(1-\left(-\frac{1}{2}\right)^n\right)} \quad\text{for } n>1.$$
Exercise 2.38
Let's square both sides of the initial recurrence to get $a_n^2=1+a_{n-1}^2$ for $n>0$. If we introduce a substitution $b_n=a_n^2$ then the recurrence transforms into $b_n=1+b_{n-1}$, thus $b_n=n$. Therefore, $a_n=\sqrt{n}$ for $n>0$.
Exercise 2.39
We need to solve the register allocation recurrence $a_n=a_{n-1}^2-2$ for $n>0$ and various initial values.
Case 1: $a_0=3$
The solution is
$$a_n=\left(\frac{3+\sqrt5}{2}\right)^{2^n}+\left(\frac{3-\sqrt5}{2}\right)^{2^n}.$$
As the book notes, only the larger of the two roots predominates in this expression—the one with the plus sign.
Case 2: $a_0=4$
The solution is
$$a_n=\left(2+\sqrt3\right)^{2^n}+\left(2-\sqrt3\right)^{2^n}.$$
Again, only the larger of the two roots predominates in this expression—the one with the plus sign.
Case 3: $a_0=\frac{3}{2}$
This case is fundamentally different compared to the previous two. The roots $b_0$ are complex numbers
$$b_0=\frac{1}{2}\left(\frac{3}{2}\pm\sqrt{-7/4}\right)=\frac{3}{4}\pm i\frac{\sqrt7}{4}.$$
We can use the same approach as in Exercise 2.31 to transform these into trigonometric form. Let $b_0=e^{\pm i\theta}$, where $\theta=\arccos(3/4)$, thus the recurrence becomes
$$a_n=\left(e^{i\theta}\right)^{2^n}+\left(e^{-i\theta}\right)^{2^n}=e^{i2^n\theta}+e^{-i2^n\theta}=2\cos(2^n\theta) \qquad \left(e^{ix}+e^{-ix}=2\cos(x) \text{ because } \sin(-x)=-\sin(x)\right).$$
The sequence $a_n$ will therefore fluctuate between $-2$ and $2$, never settling on a single value.
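The trigonometric form of case 3 is easy to confirm for moderate $n$ (floating-point error in the squaring iteration roughly doubles per step, so we stop early):

```python
import math

theta = math.acos(3 / 4)

a = 3 / 2
for n in range(1, 16):
    a = a * a - 2
    closed = 2 * math.cos(2**n * theta)
    assert abs(a - closed) < 1e-6  # identical up to floating-point noise
    assert -2 <= a <= 2            # the sequence never escapes [-2, 2]
print("a_n = 2*cos(2^n * theta) confirmed for n <= 15")
```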
Exercise 2.40 🌟
This exercise introduces bifurcation theory; as demonstrated in the previous exercise, a fundamentally different behavior occurs around the bifurcation point $a_0=2$.
The accurate approximate answer for large $\epsilon$ and sufficiently large $n$ is $a_n\approx a_0^{2^n}$. Usually, in expressions like $a_0=2+\epsilon$, $\epsilon$ is a small positive constant, so the subsequent analysis assumes this situation. The critical term is
$$b_0^{\pm}=\frac{(2+\epsilon)\pm\sqrt{(2+\epsilon)^2-4}}{2}=1+\frac{\epsilon}{2}\pm\sqrt{\epsilon}\sqrt{1+\epsilon/4}.$$
As $n$ increases, $(b_0^-)^{2^n}$ will rapidly vanish to 0, since $0<b_0^-<1$. Consequently, we get $a_n\approx(b_0^+)^{2^n}\approx(1+\sqrt\epsilon)^{2^n}$. This shows that even for an infinitesimally small $\epsilon$, the sequence $a_n$ diverges to infinity at a double-exponential rate.
Exercise 2.41 🌟
This exercise shows a mechanism for transforming generic quadratic recurrences into the register allocation type recurrence, whose solution space is known. We need to find all values of the parameters $\alpha$, $\beta$ and $\gamma$ such that $a_n=\alpha a_{n-1}^2+\beta a_{n-1}+\gamma$ reduces to $b_n=b_{n-1}^2-2$ by a linear transformation $b_n=f(\alpha,\beta,\gamma)\,a_n+g(\alpha,\beta,\gamma)$.
Assume that $f\neq0$. Otherwise, we could define $g=2$ and every sequence $a_n$ would "turn" into $b_n=b_{n-1}^2-2$ by using the degenerate transformation $b_n=0\cdot a_n+g$.
Substituting $b_n=fa_n+g$ into $b_n=b_{n-1}^2-2$ gives $fa_n+g=(fa_{n-1}+g)^2-2$, that is, $a_n=fa_{n-1}^2+2ga_{n-1}+\frac{g^2-g-2}{f}$. We now match the coefficients of this result with the given recurrence $a_n=\alpha a_{n-1}^2+\beta a_{n-1}+\gamma$ and derive the necessary and sufficient condition for transforming $a_n$ into $b_n=b_{n-1}^2-2$:
$\alpha=f$
$\beta=2g\implies g=\beta/2$
$\gamma=\frac{g^2-g-2}{f}=\frac{\beta^2/4-\beta/2-2}{\alpha}\implies4\alpha\gamma=\beta^2-2\beta-8.$
Analysis of $a_n=a_{n-1}^2+1$
The parameters are $\alpha=1$, $\beta=0$ and $\gamma=1$. Clearly
$$4\alpha\gamma=4\neq-8=\beta^2-2\beta-8.$$
This shows that $a_n=a_{n-1}^2+1$ does not reduce to the target form.
Exercise 2.42
We need to solve the recurrence $a_n=2a_{n-1}\sqrt{1-a_{n-1}^2}$ for $n>0$ and different initial values. Let's square both sides and use the substitution $b_n=a_n^2$. We get
$$b_n=4b_{n-1}(1-b_{n-1})=-4b_{n-1}^2+4b_{n-1}.$$
The idea is to leverage the result from the previous exercise and transform $b_n$ into $c_n=c_{n-1}^2-2$. The parameters are $\alpha=-4=f$, $\beta=4=2g$ and $\gamma=0$. Since $4\cdot(-4)\cdot0=4^2-2\cdot4-8$, we can apply the transformation, thus $c_n=-4b_n+2$.
If $a_0=1/2$, then $b_0=1/4$, hence $c_0=1$. We already know the solution from the book for this special case, namely $c_n=-1$ for $n>0$. Therefore, $b_n=3/4$ which implies that $a_n=\sqrt3/2$ for $n>0$.
If $a_0=1/3$, then $b_0=1/9$, thus $c_0=14/9$. We hit the third case in Exercise 2.39, so the solution is $c_n=2\cos(2^n\theta)$, where $\theta=\arccos(7/9)$. Therefore, $b_n=(1-\cos(2^n\theta))/2$, which implies that $a_n=\sqrt{(1-\cos(2^n\theta))/2}$.
Here’s the Python script to plot a6 as a function of a0.
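The original script isn't reproduced here; a minimal stand-in that computes $a_6$ over a few starting points (swap the `print` loop for a matplotlib scatter over a dense grid to get the actual plot):

```python
import math

def a6(a0):
    """Iterate a_n = 2*a_{n-1}*sqrt(1 - a_{n-1}^2) six times."""
    a = a0
    for _ in range(6):
        a = 2 * a * math.sqrt(1 - a * a)
    return a

# Nearby starting points quickly decorrelate as a0 approaches 1.
for a0 in (0.30, 0.31, 0.90, 0.91, 0.99, 0.999):
    print(a0, a6(a0))
```

As a sanity check, $a_0=1/2$ is a fixed regime: $a_6(1/2)=\sqrt3/2$, matching the first case above.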
A change in the initial condition $a_0$ has an unpredictable effect on $a_6$. This is especially evident as $a_0$ approaches 1. At a larger scale, we see some regularities. These traits are typical for dynamical systems investigated by chaos theory.
Exercise 2.43 🌟
This exercise shows a way to express general classes of “continued fraction” representations as solutions to recurrences.
Our starting recurrence is a generalized "continued fraction" representation
$$a_n=\frac{\alpha a_{n-1}+\beta}{\gamma a_{n-1}+\delta} \quad\text{for } n>0.$$
Let's represent the recurrence as a function $f(x)=(\alpha x+\beta)/(\gamma x+\delta)$, where $\gamma\neq0$ (otherwise, we won't have a continued fraction). This function defines a Möbius transformation. We first need to determine the fixed points, denoted by $r$ and $s$. Afterward we can leverage a change of variable to linearize the original recurrence
$$\begin{aligned}
b_n&=\frac{a_n-r}{a_n-s}=\frac{f(a_{n-1})-r}{f(a_{n-1})-s}\\
&=\frac{\alpha a_{n-1}+\beta-r(\gamma a_{n-1}+\delta)}{\alpha a_{n-1}+\beta-s(\gamma a_{n-1}+\delta)} &&(\text{assuming } \gamma a_{n-1}+\delta\neq0)\\
&=\frac{(\alpha-r\gamma)(a_{n-1}-r)}{(\alpha-s\gamma)(a_{n-1}-s)} &&(r\text{ and }s\text{ are roots of } \gamma x^2+(\delta-\alpha)x-\beta=0\text{, so } \beta-r\delta=-r(\alpha-r\gamma))\\
&=\frac{\gamma s+\delta}{\gamma r+\delta}\cdot b_{n-1} &&(\alpha-r\gamma=\gamma s+\delta \text{ and } \alpha-s\gamma=\gamma r+\delta \text{ by Vieta's formulas})\\
&=\left(\frac{\gamma s+\delta}{\gamma r+\delta}\right)^nb_0 &&(\text{iterating}).
\end{aligned}$$
The initial value is $b_0=(a_0-r)/(a_0-s)=(1-r)/(1-s)$. If we substitute $b_0$ into the expression for $a_n$, assuming $s\neq1$, we get
$$a_n=\frac{r(1-s)-s(1-r)\left(\frac{\gamma s+\delta}{\gamma r+\delta}\right)^n}{(1-s)-(1-r)\left(\frac{\gamma s+\delta}{\gamma r+\delta}\right)^n} \quad\text{for } n>0.$$
As a quick check, we should get the closed-form solution for the continued fraction example from the book by using $\alpha=0$, $\beta=1$, $\gamma=1$ and $\delta=1$. The roots of the equation $x^2+x-1=0$ are $r=-\phi$ and $s=-\hat\phi$. If we substitute these values into the formula for $a_n$, and use the properties of the golden ratio, then we get $a_n=F_{n+1}/F_{n+2}$ for $n\ge0$.
The book erroneously states that $b_n=F_{n+1}$ instead of $b_n=F_{n+2}$. The correct initial conditions are $b_0=1$ and $b_1=2$.
Exercise 2.44 🌟
This exercise demonstrates the use of substitution to simplify sequences. Calculating values directly is often difficult and can result in accuracy loss over time. By using an integer sequence, exactness is maintained, allowing any value to be expressed as a rational number at each step.
We need to express the recurrence
$$a_n=\frac{1}{s_n+t_na_{n-1}} \quad\text{for } n>0 \text{ with } a_0=1,$$
where sn and tn are arbitrary sequences, as the ratio of two successive terms in a sequence defined by a linear recurrence.
Let $a_n=b_n/b_{n+1}$ for $n\ge0$. We need to specify a linear recurrence relation on $b_n$ such that $a_n$ also satisfies the original non-linear recurrence. We have
$$a_n=\frac{b_n}{b_{n+1}}=\frac{1}{s_n+t_n\frac{b_{n-1}}{b_n}}=\frac{b_n}{s_nb_n+t_nb_{n-1}}\implies b_{n+1}=s_nb_n+t_nb_{n-1} \quad\text{for } n\ge1,$$
with $b_0=b_1=1$ (from $a_0=b_0/b_1=1$).
As a quick check, we should get the closed-form solution for the continued fraction example from the book by using $b_0=1$, $s_n=1$ and $t_n=1$. Indeed, $b_n=F_{n+1}$ implying $a_n=F_{n+1}/F_{n+2}$ for $n\ge0$.
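The equivalence of the two formulations is easy to check exactly; a sketch with $s_n=t_n=1$ (any other integer sequences work the same way):

```python
from fractions import Fraction

def s(n): return Fraction(1)
def t(n): return Fraction(1)

# Direct (non-linear) iteration: a_n = 1/(s_n + t_n * a_{n-1}).
a = [Fraction(1)]
for n in range(1, 15):
    a.append(1 / (s(n) + t(n) * a[n - 1]))

# Linear companion sequence: b_{n+1} = s_n*b_n + t_n*b_{n-1}, b_0 = b_1 = 1.
b = [Fraction(1), Fraction(1)]
for n in range(1, 15):
    b.append(s(n) * b[n] + t(n) * b[n - 1])

for n in range(15):
    assert a[n] == b[n] / b[n + 1]  # exact rational equality at every step
print([int(x) for x in b[:8]])  # [1, 1, 2, 3, 5, 8, 13, 21] -- Fibonacci
```

Working with the integer sequence $b_n$ keeps every value exact, as the exercise advertises.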
Exercise 2.45 🌟
This exercise replaces the bad example in the book and demonstrates how a repertoire table streamlines analysis by allowing reuse of many entries.
We are given a relaxed quicksort recurrence
$$a_n=f(n)+\frac{2}{n}\sum_{1\le j\le n}a_{j-1} \quad\text{for } n>0 \text{ with } a_0=0.$$
We need to solve it when $f(n)=n^3$. The repertoire table from the book has many reusable entries except for the $n^3$ term. If we set $a_n=n^3$ then $a_n-\frac{2}{n}\sum_{j=1}^na_{j-1}=\frac{n^3}{2}+n^2-\frac{n}{2}$. We need to combine entries from the expanded table to get $f(n)=n^3$, thus we have
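The new table entry itself is easy to validate mechanically with exact rationals (the helper name `lhs` is mine):

```python
from fractions import Fraction

def lhs(n, a):
    """a_n - (2/n) * sum_{1<=j<=n} a_{j-1} for a given sequence a(k)."""
    return a(n) - Fraction(2, n) * sum(a(j - 1) for j in range(1, n + 1))

cube = lambda k: Fraction(k)**3
for n in range(1, 30):
    assert lhs(n, cube) == Fraction(n)**3 / 2 + Fraction(n)**2 - Fraction(n) / 2
print("entry a_n = n^3 -> f(n) = n^3/2 + n^2 - n/2 verified")
```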
The post on StackExchange for Mathematics explains the basics of the repertoire method and illustrates its application by solving this exercise.
Exercise 2.47 🌟
The solution of this problem illustrates the basic form of the bootstrapping method.
We need to solve the recurrence $a_n=\frac{2}{n+a_{n-1}}$ for $n>0$ with $a_0=1$.
Let's use the next Python script to calculate numerical values of $a_n$, alongside our guess that $a_n\sim2/n$. This hypothesis is based on the observation that $a_{n-1}\to0$ as $n\to\infty$.
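A minimal version of such a script:

```python
def a(n):
    """Iterate a_n = 2/(n + a_{n-1}) with a_0 = 1."""
    v = 1.0
    for k in range(1, n + 1):
        v = 2 / (k + v)
    return v

for n in (10, 100, 1000, 10000):
    print(n, a(n), 2 / n)  # compare against the guess a_n ~ 2/n
```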
The output shows that we’re on the right track.
If we employ bootstrapping and substitute our guess back into the recurrence, we get
$$a_n=\frac{2(n-1)}{n^2-n+2}.$$
The new version of the script reflects this change.
The results are better aligned as n increases.
After an additional iteration we have
$$a_n=\frac{2(n^2-3n+4)}{(n-1)(n^2-2n+4)} \quad\text{for } n>1.$$
We see that the approximation of an is improved again.
This process may continue depending on a desired accuracy level.
There's no simple closed-form solution. The standard method to "solve" it is based on the technique from Exercise 2.44. In this case, the substitution should be $a_n=2b_{n-1}/b_n$ for $n\ge1$, which yields $b_n=s_nb_{n-1}+2t_nb_{n-2}$, where $s_n=n$ and $t_n=1$. The initial values are $b_0=1$ and $b_1=2$.
Exercise 2.48
The book assumes (without stating it explicitly) that we need to find the approximation of the average number of compares used by median-of-three quicksort. For the sake of completeness, it’s repeated here
$$C_N=N+1+\frac{2}{\binom{N}{3}}\sum_{k=1}^N(k-1)(N-k)C_{k-1} \quad\text{for } N>2.$$
The key is to convert this discrete recurrence into its continuous form, by approximating the sum with a definite integral. The process can be broken down into three major stages.
1. Converting the Discrete Recurrence
Let’s simplify the constant multiplier of the sum for large enough N
$$\frac{2}{\binom{N}{3}}=\frac{2}{N(N-1)(N-2)/6}\approx\frac{2}{N^3/6}=\frac{12}{N^3}.$$
Let's set the pivot's rank $k$ relative to the size $N$; for this we use the continuous variable $x=k/N\implies k=xN\implies dk=N\,dx$. Finally, we treat $k-1\approx k$ and $N+1\approx N$. If we substitute these changes into our original expression and replace the summation with an integral, we have
$$C_N\approx N+12\int_0^1(x-x^2)\,C_{xN}\,dx.$$
2. Bootstrapping the Solution
Our initial hypothesis is $C_N\sim\alpha N\ln N$. After all, we expect the familiar asymptotic form $\Theta(N\ln N)$, as for the other quicksort variants. Let's substitute this hypothesis into our recurrence relation
If we want to retain only the leading term, then we should choose $\alpha=12/7$ to "eradicate" the $O(N)$ summand.
To be rigorous, we should check that this form is "stable"; if we assume αNlnN+O(N), we must get αNlnN+O(N) back. This is indeed true, since the definite integral
$$N+12\int_0^1\left[\alpha(xN)\ln(xN)+\beta N\right](x-x^2)\,dx$$
resolves again to the same form. Here $\beta$ denotes the hidden constant inside the $O(N)$ term.
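The cancellation for $\alpha=12/7$ can be confirmed numerically; a sketch using a hand-rolled Simpson's rule (the integrand vanishes at $x=0$, so no special handling is needed there):

```python
import math

def simpson(g, a, b, m=20000):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, m))
    return s * h / 3

N = 1000.0
alpha = 12 / 7

def integrand(x):
    if x == 0.0:
        return 0.0  # limit of x^2 * log(x*N) as x -> 0
    return (x - x * x) * alpha * x * N * math.log(x * N)

lhs = N + 12 * simpson(integrand, 0.0, 1.0)
rhs = alpha * N * math.log(N)
print(lhs, rhs)  # the O(N) terms cancel exactly for alpha = 12/7
```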
Exercise 2.49 🌟
This exercise showcases an advanced usage of the bootstrapping method.
With bootstrapping we start with a rough bound and then improve it step by step. The recurrence is
$$a_n=\frac{1}{n}\sum_{0\le k<n}\frac{a_k}{n-k}=\frac{1}{n^2}+\frac{1}{n}\sum_{1\le k<n}\frac{a_k}{n-k} \quad\text{for } n>0 \text{ with } a_0=1.$$
We separated the $k=0$ term, which is $\frac{a_0}{n(n-0)}=\frac{1}{n^2}$.
Iteration 1: From $O(1/n)$ to $O(\log n/n^2)$
Let's start with the rough guess that $a_n=O(1/n)$ for $n>0$. Thus, there exists some constant $c>0$ such that $a_n\le c/n$ for sufficiently large $n$. Note that $a_n>0$ for all $n\ge0$. If we substitute this into the recurrence, we have
To be rigorous, we should check that this form is "stable"; if we assume $O(1/n^2)$, we must get $O(1/n^2)$ back. So, we take that $a_n\le c/n^2$ for some constant $c>0$ and $n>0$. If we substitute this into the recurrence, we have
$$a_n\le\frac{1}{n^2}+\frac{c}{n}\sum_{1\le k<n}\frac{1}{k^2(n-k)}.$$
We split the summation again into two parts. When $1\le k\le n/2$ then $\frac{1}{n-k}\le\frac{2}{n}$, hence $S_1\le\frac{2}{n}\sum_{k\ge1}\frac{1}{k^2}=O(1/n)$.
When $n/2<k<n$ then $\frac{1}{k^2}\le\frac{4}{n^2}$. Thus, $S_2\le\frac{4}{n^2}\sum_{n/2<k<n}\frac{1}{n-k}=O(\log n/n^2)$. Therefore,
$$a_n\le\frac{1}{n^2}+\frac{c}{n}\cdot\left(O(1/n)+O(\log n/n^2)\right)=O(1/n^2).$$
Final Result
From $a_n=O(1/n^2)$ it follows that $n^2a_n=O(1)$.
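The bound is consistent with direct computation of the sequence; a quick sketch:

```python
def sequence(n_max):
    """a_n = (1/n) * sum_{0<=k<n} a_k/(n-k) with a_0 = 1."""
    a = [1.0]
    for n in range(1, n_max + 1):
        a.append(sum(a[k] / (n - k) for k in range(n)) / n)
    return a

a = sequence(2000)
print(1000**2 * a[1000], 2000**2 * a[2000])  # n^2 * a_n stays bounded
```

Doubling $n$ roughly quarters $a_n$, as a $\Theta(1/n^2)$ decay would predict.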
Exercise 2.50 🌟
We can expect many hardships with the perturbation method. This and the next few exercises demonstrate different obstacles that we may encounter. Here, we must tune the initial growth rate after failing to bound the error rate with the initial guess.
The recurrence is
$$a_{n+1}=\left(1+\frac{1}{n}\right)a_n+\left(1-\frac{1}{n}\right)a_{n-1} \quad\text{for } n>0.$$
Notice that the condition is $n>0$; otherwise $a_2$ would be missing, since the initial values are $a_0=0$ and $a_1=1$.
The solution to the simpler recurrence $b_{n+1}=b_n+b_{n-1}$ for $n>0$ with $b_0=0$ and $b_1=1$ is $b_n=F_n=\Theta(\phi^n)$, where $\phi$ is the golden ratio.
Following the steps from the book, we need to compare the two recurrences by forming the ratio
$$\rho_n=\frac{a_n}{b_n}=\frac{a_n}{\Theta(\phi^n)}.$$
If we plug $a_n=\Theta(\phi^n)\rho_n$ into the original recurrence, we won't be able to bound $\rho_n$. Consequently, the solution must grow faster. The book contains a hint as part of Exercise 2.53; let's add an extra $n^\alpha$ multiplier and find the value of $\alpha$ such that $\rho_n=O(1)$. This gives
$$\rho_n\sim\frac{a_n}{n^\alpha\phi^n} \quad\text{for } n>1,$$
with $\rho_1=1/\phi$. We have
$$\begin{aligned}
(n+1)^\alpha\phi^{n+1}\rho_{n+1}&=\left(1+\tfrac{1}{n}\right)n^\alpha\phi^n\rho_n+\left(1-\tfrac{1}{n}\right)(n-1)^\alpha\phi^{n-1}\rho_{n-1}\\
\phi^2\left(1+\tfrac{1}{n}\right)^\alpha\rho_{n+1}&=\phi\left(1+\tfrac{1}{n}\right)\rho_n+\left(1-\tfrac{1}{n}\right)\left(1-\tfrac{1}{n}\right)^\alpha\rho_{n-1} &&\text{(divide by } n^\alpha\phi^{n-1})\\
\phi^2\left(1+\tfrac{\alpha}{n}\right)\rho_{n+1}&\le\left[\phi\left(1+\tfrac{1}{n}\right)+\left(1-\tfrac{1}{n}\right)\left(1-\tfrac{\alpha}{n}\right)\right]\rho_n &&\text{(Taylor expansion } (1+x)^\alpha\approx1+\alpha x\text{; assume } \rho_n \text{ non-decreasing)}\\
\left(\phi^2+\tfrac{\alpha\phi^2}{n}\right)\rho_{n+1}&\le\left[(\phi+1)+\tfrac{\phi-1-\alpha}{n}\right]\rho_n &&\text{(ignore } O(1/n^2))\\
&=\left[\phi^2+\tfrac{\phi-1-\alpha}{n}\right]\rho_n &&(\phi+1=\phi^2).
\end{aligned}$$
To solve for $\alpha$ we can set $\alpha\phi^2=\phi-1-\alpha$, which implies that
$$\alpha=\frac{\phi-1}{\phi^2+1}.$$
Thus, the asymptotic growth of the solution is $a_n=O\!\left(n^{\frac{\phi-1}{\phi^2+1}}\phi^n\right)$.
Exercise 2.51 🌟
Often, we have to employ the perturbation method iteratively. This exercise illustrates the process.
The coefficient $n$ on $a_{n-1}$ suggests $a_n=\omega(n!)$. Let's employ a change of variable $b_n=a_n/n!$ to find a solution to a simpler recurrence
$$b_n=b_{n-1}+\frac{n}{n-1}b_{n-2}.$$
As $n\to\infty$, $\frac{1}{n-1}\to0$, so the coefficient $\frac{n}{n-1}\to1$. Therefore, applying the perturbation method again, we end up with another simpler recurrence $c_n=c_{n-1}+c_{n-2}$ for $n>1$ with $c_0=0$ and $c_1=1$. Clearly, $c_n=F_n=\Theta(\phi^n)$, where $\phi$ is the golden ratio.
To account for the perturbation factor, we reuse the approach from the previous exercise. This gives
$$\rho_n\sim\frac{b_n}{n^\alpha\phi^n} \quad\text{for } n>1,$$
with $\rho_1=1/\phi$. We have
$$\begin{aligned}
n^\alpha\phi^n\rho_n&=(n-1)^\alpha\phi^{n-1}\rho_{n-1}+\frac{n}{n-1}(n-2)^\alpha\phi^{n-2}\rho_{n-2}\\
\phi^2\rho_n&=\left(1-\tfrac{1}{n}\right)^\alpha\phi\rho_{n-1}+\frac{n}{n-1}\left(1-\tfrac{2}{n}\right)^\alpha\rho_{n-2} &&\text{(divide by } n^\alpha\phi^{n-2})\\
&=\left(1-\tfrac{\alpha}{n}\right)\phi\rho_{n-1}+\frac{n}{n-1}\left(1-\tfrac{2\alpha}{n}\right)\rho_{n-2} &&\text{(Taylor expansion } (1+x)^\alpha\approx1+\alpha x)\\
\left(1-\tfrac{1}{n}\right)\phi^2\rho_n&\le\rho_{n-1}\left[\left(1-\tfrac{1}{n}\right)\left(1-\tfrac{\alpha}{n}\right)\phi+\left(1-\tfrac{2\alpha}{n}\right)\right] &&\text{(multiply by } 1-\tfrac1n\text{; assume } \rho_n \text{ non-decreasing)}\\
\left(\phi^2-\tfrac{\phi^2}{n}\right)\rho_n&\le\rho_{n-1}\left[\phi+1-\tfrac{\phi+\phi\alpha+2\alpha}{n}\right] &&\text{(ignore } O(1/n^2))\\
&=\rho_{n-1}\left[\phi^2-\tfrac{\phi+\phi\alpha+2\alpha}{n}\right] &&(\phi+1=\phi^2).
\end{aligned}$$
To solve for $\alpha$ we can set $\phi^2=\phi+\phi\alpha+2\alpha$, which implies that
$$\phi^2-\phi=1=\alpha(\phi+2)\;\therefore\;\alpha=\frac{1}{\phi+2}.$$
Thus, the asymptotic growth of the solution is $a_n=O\!\left(n!\,n^{\frac{1}{\phi+2}}\phi^n\right)$.
Exercise 2.52
The paper Some Doubly Exponential Sequences by Aho and Sloane elaborates the solution for a more general recurrence. It’s also instructive to read about how variants of the generic form appear in practical problems.
The recurrence from the book, with $g_n=1$, generates sequence A003095 from the OEIS shifted by one (the first element should be skipped, since in our case $a_0=1$). In terms of the constant $c=1.225902443528748\ldots$, our $\alpha$ corresponds to $c^2$ (due to the previously mentioned index shift). The digits of $\alpha$ are listed in A077496 in the OEIS.
Exercise 2.53
This is a small variation of Exercise 2.50 with $\alpha = -\frac{\phi^2}{\phi^2+1}$.
The perturbation factor is $(1-1/n)<1$; hence, the sequence is asymptotically slightly smaller than the standard Fibonacci sequence. The polynomial correction must decay, which explains why $\alpha$ is negative. Of course, we could have omitted the polynomial factor, since the upper bound is valid even without it. But it's assumed that the O-notation expresses a tight upper bound.
Exercise 2.54
In the best case, we always choose the smaller subinterval of size ⌊(N−1)/2⌋ (see the proof of Theorem 2.3 in the book). Therefore, the recurrence is
$$B_N = B_{\lfloor (N-1)/2\rfloor} + 1 \quad\text{for } N>1 \text{ with } B_1=1.$$
After one iteration, we get $B_N = B_{\lfloor (N-3)/4\rfloor} + 2$, leveraging a useful property of the floor function: $\lfloor\lfloor x\rfloor/2\rfloor = \lfloor x/2\rfloor$. At the end, we must have
$$\frac{N-(2^n-1)}{2^n} \ge 1 \implies n = \lfloor\lg(N+1)\rfloor - 1.$$
The number of comparisons is $B_N = n+1 = \lfloor\lg(N+1)\rfloor$ for $N\ge1$.
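A direct check of the closed form against the recurrence ($B_0 = 0$ is assumed so that $B_2 = B_{\lfloor 1/2\rfloor} + 1$ is defined):

```python
# Verify B_N = floor(lg(N + 1)) against B_N = B_{floor((N-1)/2)} + 1.
from functools import lru_cache

@lru_cache(maxsize=None)
def B(N):
    return N if N <= 1 else B((N - 1) // 2) + 1   # B_0 = 0, B_1 = 1

# (N + 1).bit_length() - 1 is exactly floor(lg(N + 1)).
ok = all(B(N) == (N + 1).bit_length() - 1 for N in range(1, 1 << 14))
print(ok)  # True
```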
Exercise 2.55
We always choose the largest subinterval, of size $\lceil N/3\rceil$. Therefore, the recurrence is
$$B_N = B_{\lceil N/3\rceil} + 2 \quad\text{for } N>1 \text{ with } B_1=1 \text{ and } B_0=0.$$
The number of comparisons is $B_N\sim 2\log_3 N$ for $N>1$. If we convert the formula for ternary search to base-2 logarithms, we can see that the leading coefficient is $2/\lg 3\approx1.26>1$. Ternary search is therefore less efficient than binary search in terms of comparisons: even though it reduces the problem size faster (dividing by 3 instead of 2), the cost of doing so (2 comparisons instead of 1) is too high to be worth it.
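For $N = 3^k$ the recurrence unrolls exactly, which makes the $2\log_3 N$ leading term easy to confirm:

```python
# B_N = B_{ceil(N/3)} + 2 with B_1 = 1 gives B_{3^k} = 2k + 1 exactly.
from functools import lru_cache

@lru_cache(maxsize=None)
def B(N):
    return N if N <= 1 else B(-(-N // 3)) + 2     # -(-N // 3) is ceil(N/3)

ok = all(B(3**k) == 2 * k + 1 for k in range(12))
print(ok)  # True
```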
Exercise 2.56
The recurrence (4) from the book may be regarded as a function $f:\mathbb{N}\to\mathbb{N}$, and a function cannot have two closed-form solutions that map the same input to different outputs.
Exercise 2.57
Exercise 1.5 contains an inductive proof for the RHS of the identity mentioned in this exercise. The LHS is Theorem 2.4 from the book. Both are closed-form solutions to the same recurrence, so they must resolve to the same value for all $N>0$.
Exercise 2.58
Figure 2.5 in the book is wrong. $P_N < 0.5N\lg N$ (just by looking at the entries of Table 2.3), hence $P_N - 0.5N\lg N$ must be negative; its plot cannot be the one at the top. Furthermore, $P_{84}=252$, not 215. As a matter of fact, the top and bottom plots must be swapped. Finally, the cumulative number of zero bits must include zero itself; otherwise we cannot recreate the plots.
{# 1 bits in numbers less than N} – (NlgN)/2
This function is specified as $T_N = P_N - (N\lg N)/2$, where
$$P_N = P_{\lceil N/2\rceil} + P_{\lfloor N/2\rfloor} + \lfloor N/2\rfloor \quad\text{for } N>1 \text{ with } P_1=0$$
(see the book).
Let's substitute $P_N = T_N + (N\lg N)/2$ into the above recurrence relation. This yields $T_N = T_{\lceil N/2\rceil} + T_{\lfloor N/2\rfloor} + \Delta_N$, where
$$
\begin{aligned}
\Delta_N &= \lfloor N/2\rfloor + \tfrac12\left(\lceil N/2\rceil\lg\lceil N/2\rceil + \lfloor N/2\rfloor\lg\lfloor N/2\rfloor - N\lg N\right) \\
&= \lfloor N/2\rfloor + \tfrac12\left(\lceil N/2\rceil(\lg\lceil N/2\rceil - \lg N) + \lfloor N/2\rfloor(\lg\lfloor N/2\rfloor - \lg N)\right) && \text{($N\lg N = \lceil N/2\rceil\lg N + \lfloor N/2\rfloor\lg N$)} \\
&= \lfloor N/2\rfloor + \tfrac12\left(\lceil N/2\rceil\lg\tfrac{\lceil N/2\rceil}{N} + \lfloor N/2\rfloor\lg\tfrac{\lfloor N/2\rfloor}{N}\right) \\
&= \begin{cases}
0 & \text{if $N$ is even} \\[2pt]
\tfrac12\left(\left\lceil\tfrac{N}{2}\right\rceil\lg\tfrac{2\lceil N/2\rceil}{N} + \left\lfloor\tfrac{N}{2}\right\rfloor\lg\tfrac{2\lfloor N/2\rfloor}{N} - 1\right) & \text{if $N$ is odd.}
\end{cases}
\end{aligned}
$$
The final simplification reuses ideas from Exercise 1.5 together with the identity $\lg(M/N) = \lg(2M/N) - 1$.
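The derived $\Delta_N$ can be checked numerically against the book's recurrence for $P_N$ (a quick sketch):

```python
# Check T_N = T_{ceil(N/2)} + T_{floor(N/2)} + Delta_N numerically,
# where T_N = P_N - (N lg N)/2 and P_N comes from the book's recurrence.
from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def P(N):
    return 0 if N <= 1 else P((N + 1) // 2) + P(N // 2) + N // 2

def T(N):
    return P(N) - N * log2(N) / 2

def delta(N):
    if N % 2 == 0:
        return 0.0
    c, f = (N + 1) // 2, N // 2
    return 0.5 * (c * log2(2 * c / N) + f * log2(2 * f / N) - 1)

ok = all(abs(T(N) - (T((N + 1) // 2) + T(N // 2) + delta(N))) < 1e-9
         for N in range(2, 2000))
print(ok)  # True
```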
{# 0 bits in numbers less than N} – (NlgN)/2
Without repeating the previous derivation, we'll just state the final results for this and the next subproblem. The starting point is $T_N = Z_N - (N\lg N)/2$, where
$$Z_N = Z_{\lceil N/2\rceil} + Z_{\lfloor N/2\rfloor} + \lceil N/2\rceil - 1 \quad\text{for } N>1 \text{ with } Z_1=1$$
(we also count zero).
$T_N = T_{\lceil N/2\rceil} + T_{\lfloor N/2\rfloor} + \Delta_N$, where $\Delta_N$ is
$$
\Delta_N = \begin{cases}
-1 & \text{if $N$ is even} \\[2pt]
\tfrac12\left(\left\lceil\tfrac{N}{2}\right\rceil\lg\tfrac{2\lceil N/2\rceil}{N} + \left\lfloor\tfrac{N}{2}\right\rfloor\lg\tfrac{2\lfloor N/2\rfloor}{N} - 1\right) & \text{if $N$ is odd.}
\end{cases}
$$
{# bits in numbers less than N} – NlgN
$T_N = T_{\lceil N/2\rceil} + T_{\lfloor N/2\rfloor} + \Delta_N$, where $T_1=1$ and $\Delta_N$ is (the sum of the previous formulae)
$$
\Delta_N = \begin{cases}
-1 & \text{if $N$ is even} \\[2pt]
\left\lceil\tfrac{N}{2}\right\rceil\lg\tfrac{2\lceil N/2\rceil}{N} + \left\lfloor\tfrac{N}{2}\right\rfloor\lg\tfrac{2\lfloor N/2\rfloor}{N} - 1 & \text{if $N$ is odd.}
\end{cases}
$$
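The same numerical check passes for both of these variants (a sketch; $P_N$ is the 1-bit recurrence from above):

```python
# Verify the Delta_N formulas for the 0-bit count and the total bit count.
from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def P(N):   # cumulative number of 1 bits
    return 0 if N <= 1 else P((N + 1) // 2) + P(N // 2) + N // 2

@lru_cache(maxsize=None)
def Z(N):   # cumulative number of 0 bits, counting zero itself
    return 1 if N <= 1 else Z((N + 1) // 2) + Z(N // 2) + (N + 1) // 2 - 1

def log_part(N):
    c, f = (N + 1) // 2, N // 2
    return c * log2(2 * c / N) + f * log2(2 * f / N) - 1

def T0(N):  # zeros variant: Z_N - (N lg N)/2
    return Z(N) - N * log2(N) / 2

def TB(N):  # total-bits variant: (P_N + Z_N) - N lg N
    return P(N) + Z(N) - N * log2(N)

ok_zero = all(abs(T0(N) - (T0((N + 1) // 2) + T0(N // 2)
                  + (-1 if N % 2 == 0 else 0.5 * log_part(N)))) < 1e-9
              for N in range(2, 1000))
ok_total = all(abs(TB(N) - (TB((N + 1) // 2) + TB(N // 2)
                   + (-1 if N % 2 == 0 else log_part(N)))) < 1e-9
               for N in range(2, 1000))
print(ok_zero, ok_total)  # True True
```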
Exercise 2.59
The recurrences for $R_N$ are (for $N>1$ with $R_1=R_0=0$)
$$
\begin{aligned}
R_N &= R_{N-2^{\lfloor\lg N\rfloor}} + R_{2^{\lfloor\lg N\rfloor}-1} + \lfloor\lg N\rfloor && \text{(considering the leftmost bit)} \\
&= R_{\lfloor N/2\rfloor} + \lfloor N/2\rfloor && \text{(considering the rightmost bit).}
\end{aligned}
$$
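The two recurrences can be cross-checked against each other. The notes don't restate what $R_N$ counts, but one quantity consistent with both is the cumulative number of trailing 0 bits over $1..N$ (an observation, checked below by brute force):

```python
# Cross-check the two recurrences for R_N, and compare with a direct count.
from functools import lru_cache

@lru_cache(maxsize=None)
def R_left(N):                        # considering the leftmost bit
    if N <= 1:
        return 0
    m = N.bit_length() - 1            # m = floor(lg N)
    return R_left(N - 2**m) + R_left(2**m - 1) + m

@lru_cache(maxsize=None)
def R_right(N):                       # considering the rightmost bit
    return 0 if N <= 1 else R_right(N // 2) + N // 2

ok = all(R_left(N) == R_right(N) for N in range(4096))

trailing, ok_brute = 0, True
for N in range(1, 4096):
    trailing += (N & -N).bit_length() - 1   # number of trailing 0 bits of N
    ok_brute = ok_brute and R_right(N) == trailing
print(ok, ok_brute)  # True True
```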
Exercise 2.60 🌟
This and the next couple of exercises illustrate how seemingly smooth asymptotic bounds don’t reflect what actually happens under the hood, due to usage of floors and ceilings (see Table 2.4 in the book).
The Python 3 script below generates the required plot.
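A minimal sketch of such a script's data-generation half. It uses the recurrence $a_N = a_{\lfloor N/2\rfloor} + a_{\lceil N/2\rceil} + \lfloor\lg N\rfloor$ with $a_1 = 0$, which Exercise 2.64 below identifies as the one from this exercise; the matplotlib plotting boilerplate follows the same pattern as the plotting script further down these notes:

```python
# a_N = a_{floor(N/2)} + a_{ceil(N/2)} + floor(lg N), with a_1 = 0.
from functools import lru_cache

@lru_cache(maxsize=None)
def a(N):
    if N <= 1:
        return 0
    return a(N // 2) + a((N + 1) // 2) + N.bit_length() - 1

xs = list(range(1, 2048))
ys = [a(N) for N in xs]       # plot ys against xs together with the line y = 2N
print(a(1024))                # 2036 = 2*1024 - 10 - 2: exactly 2N - n - 2 at N = 2^n
```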
Apparently, the function is continuous and periodic. It runs slightly below the line y=2N.
Exercise 2.61
The script from the previous exercise needs to be tweaked a little to produce the desired plot.
This time the function is extremely "spiky." Large jumps occur just after powers of 2.
Exercise 2.62
The following Python 3 script creates the required plot and could be used to test out the variants, as mentioned in the book.
We see a similar figure as in Exercise 2.61. If N isn’t a power of 2, then many recursive calls happen on larger indices due to the usage of the ceil function. These effects accumulate and produce large jumps right after these border values. When only floors are used, jumps happen on transitioning to powers of 2. Balancing floors and ceilings in both recurrences completely smooths out the curve.
Exercise 2.63
The function oscillates wildly and is periodic. Let $m=\lfloor\lg N\rfloor$. Values of $g(N)$ for $N\in[2^m,2^{m+1})$ can be generated from previously computed values in the interval $[2^{m-1},2^m)$ by adding $2^m$ depending on the parity of $N$. Therefore, every subsequent period is a scaled version of its predecessor.
Exercise 2.64
We'll develop two equivalent recurrence relations, although they appear to be completely different. As the book hinted in Chapter 1, it's often easier to derive the total amount $T_N$ and calculate the average as $T_N/N$; we do include zero in the range of numbers less than $N$.
Considering the Leftmost Bit
The constituent parts of the recursive formula for the cumulative number of leading 1s in numbers less than N.
Let's focus on the leftmost column and define $m=\lfloor\lg N\rfloor$. There are $2^m$ zeros, so we should add $N-2^m$ 1s to our count; this is represented as (1) in the image. There are two major sections, demarcated by a horizontal line between the last 0 and the first 1 in the leftmost column. The upper region has $m+T_{2^m-1}$ 1s, where (2) denotes those $m$ 1s. The total number of 1s in the bottom section depends on whether it contains the row (3). If it doesn't include (3), then the count is simply zero. Otherwise, it is $T_{N-2^m}-T_{2^m-1}$. After combining all these ingredients into a single formula, we get
If $N$ is a power of two, then $T_N=R_N$ due to symmetry. Otherwise, if we focus on the rightmost bit, we see that the range can be split into numbers ending with 1 and those ending with 0. Suppose we count how many leading 1s there are in those two subintervals. The only thing we're missing is the $\lfloor\lg N\rfloor$ 1s associated with the number $2^{\lfloor\lg N\rfloor}-1\in[1,N)$, so we need to add them to the total. Therefore, the recurrence becomes $T_N = T_{\lfloor N/2\rfloor} + T_{\lceil N/2\rceil} + \lfloor\lg N\rfloor$ for $N>1$ with $T_1=0$. Notice that it's exactly the recurrence from Exercise 2.60.
For example, when $N=2^n$, the average simplifies to $2-\frac{n+2}{N}$, which tends to 2 as $N\to\infty$.
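A brute-force cross-check of the recurrence against a direct count of initial 1-runs (a sketch):

```python
# Compare T_N with the total length of initial 1-runs over 0..N-1.
from functools import lru_cache

@lru_cache(maxsize=None)
def T(N):
    if N <= 1:
        return 0
    return T(N // 2) + T((N + 1) // 2) + N.bit_length() - 1

def leading_ones(k):
    s = bin(k)[2:]
    return len(s) - len(s.lstrip('1'))

ok, running = True, 0                 # running = count for numbers < N
for N in range(1, 4096):
    ok = ok and T(N) == running
    running += leading_ones(N)
print(ok)  # True
```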
Exercise 2.65
A bitstring of length $N$ has the form $b_1b_2\ldots b_N$, where each $b_i\in\{0,1\}$. The initial run of 1s ends at the first 0 (or continues to the end if all bits are 1). Let $X$ be a random variable denoting the length of the initial run of 1s in a random bitstring of length $N$. Its probability mass function is defined as
$$\Pr\{X=k\} = \begin{cases}2^{-k-1} & \text{for } 0\le k<N, \\ 2^{-N} & \text{for } k=N.\end{cases}$$
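The pmf can be confirmed exhaustively for small $N$ (a sketch; the form $\Pr\{X=k\}=2^{-k-1}$ for $k<N$ and $\Pr\{X=N\}=2^{-N}$ follows directly from the definition, since a run of exactly $k<N$ means $k$ leading 1s followed by a 0):

```python
# Exhaustively enumerate all bitstrings of length N and tally the
# length of the initial run of 1s; compare with the pmf above.
from fractions import Fraction
from itertools import product

def pmf_matches(N):
    counts = [0] * (N + 1)
    for bits in product((0, 1), repeat=N):
        run = 0
        while run < N and bits[run] == 1:
            run += 1
        counts[run] += 1
    pmf = [Fraction(c, 2**N) for c in counts]
    expected = [Fraction(1, 2**(k + 1)) for k in range(N)] + [Fraction(1, 2**N)]
    return pmf == expected

ok = all(pmf_matches(N) for N in range(1, 11))
print(ok)  # True
```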
While the book states that $c_2$ depends on $\alpha$, $\beta$ and $\gamma$, the derivation shows it must also depend on $\delta$ due to the integration of the logarithmic term.
Exercise 2.69
The recurrence falls under case 2 of the Master Theorem. The exact solution takes the form
$$a_N = N\log_3 N + N\cdot P(N).$$
The function $P$ represents the fluctuations caused by the floor function $\lfloor N/3\rfloor$. We have
$$P(N) = \frac{a_N}{N} - \log_3 N \quad\text{for } N>3.$$
The plot of the periodic part shows a fractal image, where jumps occur at values of $N=4\cdot3^k$ for $k>0$.
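The exercise's recurrence isn't restated in these notes; assuming the natural candidate $a_N = 3a_{\lfloor N/3\rfloor} + N$ with $a_1 = 0$, the periodic part can be tabulated like this (a sketch):

```python
# Hypothetical reconstruction of the recurrence: a_N = 3 a_{floor(N/3)} + N, a_1 = 0.
from functools import lru_cache
from math import log

@lru_cache(maxsize=None)
def a(N):
    return 0 if N <= 1 else 3 * a(N // 3) + N

def P(N):
    """Fluctuating part a_N / N - log_3 N; vanishes at powers of 3."""
    return a(N) / N - log(N) / log(3)

print([round(P(N), 3) for N in (81, 100, 108, 110, 243)])
```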
Exercise 2.70
Here is the Python script for trying out different configurations. Currently, it is set up to draw a variant where floors are replaced by ceilings.
This is the plot where the two floors are replaced by ceilings. Apparently, the jumps aren't as steep as in Exercise 2.69.
Exercise 2.71
The exact solution takes the form
$$a(x) = 2^x + \alpha 2^{x/\beta} + \alpha^2 2^{x/\beta^2} + \cdots + \alpha^t 2^{x/\beta^t} = \sum_{k=0}^{t}\alpha^k 2^{x/\beta^k} = 2^x\left(1 + \sum_{k=1}^{t}\alpha^k 2^{x\cdot\frac{1-\beta^k}{\beta^k}}\right), \quad\text{where } t=\lfloor\log_\beta x\rfloor.$$
Since $\beta>1$, we have $\frac{1-\beta^k}{\beta^k}<0$ for $k\ge1$. The $2^{c\cdot x}$ term grows faster than any polynomial $x^a$, hence $x^a/2^{c\cdot x}\to0$ as $x\to\infty$ for any positive constant $c$.
Therefore, we can conclude that a(x)=Θ(2x).
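Plugging sample parameters into the closed-form sum illustrates how quickly the tail vanishes ($\alpha=3$, $\beta=2$ are arbitrary choices):

```python
# Evaluate the closed-form sum and confirm a(x) / 2^x -> 1.
from math import floor, log

def a(x, alpha=3.0, beta=2.0):
    t = floor(log(x) / log(beta))                # t = floor(log_beta x)
    return sum(alpha**k * 2**(x / beta**k) for k in range(t + 1))

ratio = a(60.0) / 2**60.0
print(ratio)  # just a hair above 1
```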
Exercise 2.72
There are many ways to give an asymptotic solution to the recurrence. One approach is to use the more general Akra-Bazzi method. Another one is to draw a recursion tree and discover patterns. There’s a tool called VisuAlgo to see recursive trees in action. It has lots of prebuilt recurrences, but it’s also possible to define a custom recurrence by selecting the Custom Code option from the drop-down list. The following snippet contains the definition for this exercise.
After entering a value for $N$ and pressing the Run button, the tool visualizes the whole tree. In general, the cost per level is $N$. The largest subproblem size decreases by a factor of $3/4$ at each step, so the depth is $\sim\log_{4/3}N$. Therefore, the recurrence satisfies $a_N=\Theta(N\lg N)$.
If we want to be rigorous, then we must prove the hypothesis just formulated, for example, using induction on N.
We can even determine the leading constant by substituting $a_N\sim CN\log N$ into the recurrence. This gives
$$a_N \sim \frac{N\log N}{\log 4 - \frac34\log 3}.$$
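The exercise's exact split isn't restated here; taking $a_N = a_{\lfloor 3N/4\rfloor} + a_{\lceil N/4\rceil} + N$ with $a_1 = 0$ as a representative instance (the sizes sum to $N$ and the largest is $\sim 3N/4$, as described above), the computed ratio creeps up toward the derived constant:

```python
# Hypothetical instance: a_N = a_{floor(3N/4)} + a_{ceil(N/4)} + N, a_0 = a_1 = 0.
from math import log

N = 1 << 19
a = [0] * (N + 1)
for n in range(2, N + 1):
    a[n] = a[3 * n // 4] + a[(n + 3) // 4] + n   # (n + 3) // 4 is ceil(n/4)

C = 1 / (log(4) - 0.75 * log(3))                 # leading constant derived above
print(a[N] / (N * log(N)), C)                    # the first slowly approaches the second
```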
Exercise 2.73
The same remark applies here as in the previous exercise. By drawing a recursion tree, we can see that the depth is $\sim\lg N$ and the amount of work at each step drops by a factor of $3/4$. In other words, the cost at level $k$ is $(3/4)^kN$. After summing up the costs, we can conclude that the recurrence satisfies $a_N=\Theta(N)$.
We can go one step further and discover the leading coefficient. Assuming that $a_N\sim cN$, we can substitute it into the recurrence and find that $a_N\sim4N$.
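Under the same caveat as in the previous exercise (the exact recurrence isn't restated here), a representative instance $a_N = a_{N/2} + a_{N/4} + N$, which matches the $3/4$ work decay and depth $\sim\lg N$, shows $a_N/N$ approaching 4 from below at powers of two:

```python
# Hypothetical instance: a_N = a_{N/2} + a_{N/4} + N at N = 2^k,
# with a_0 = 0 and a_1 = 1.
a = {0: 0, 1: 1}
for k in range(1, 31):
    n = 1 << k
    a[n] = a[n >> 1] + a[n >> 2] + n

ratio = a[1 << 30] / (1 << 30)
print(ratio)  # approaches 4 from below
```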
Exercise 2.74
We must prove that the solution to the recurrence
$$a_n = a_{f(n)} + a_{g(n)} + a_{h(n)} + 1 \quad\text{for } n>t \text{ with } a_n=1 \text{ for } n\le t,$$
with the constraint that $f(n)+g(n)+h(n)=n$, is $a_n=\Theta(n)$. We assume that all index functions are positive. We'll prove the lower and upper bounds separately.
Suppose $a_n=O(n)$; concretely, $a_n\le c_2n-d_2$ for some positive constants $c_2$ and $d_2$ and sufficiently large $n$. Substituting this hypothesis into our recurrence, we get
$$
\begin{aligned}
a_n &= a_{f(n)} + a_{g(n)} + a_{h(n)} + 1 \\
&\le c_2(f(n)+g(n)+h(n)) - 3d_2 + 1 && \text{(by the inductive hypothesis)} \\
&= c_2n - d_2 - (2d_2-1) \\
&\le c_2n - d_2 && \text{(if we make $2d_2-1\ge0\implies d_2\ge1/2$)} \\
&\therefore a_n = O(n).\ \checkmark
\end{aligned}
$$
The lower bound is even easier. Suppose $a_n=\Omega(n)$; concretely, $a_n\ge c_1n$ for some positive constant $c_1$ and sufficiently large $n$. Substituting this hypothesis into our recurrence, we get
$$
\begin{aligned}
a_n &= a_{f(n)} + a_{g(n)} + a_{h(n)} + 1 \\
&\ge c_1(f(n)+g(n)+h(n)) + 1 && \text{(by the inductive hypothesis)} \\
&= c_1n + 1 \ge c_1n \\
&\therefore a_n = \Omega(n).\ \checkmark
\end{aligned}
$$
Since both bounds are linear, we can conclude that $a_n=\Theta(n)$. Observe that we can choose suitable values for our constants to satisfy the base cases, too; for example, the configuration $c_1=1/t$, $c_2=3/2$ and $d_2=1/2$ works.
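A concrete instance with $f(n)=\lfloor n/2\rfloor$, $g(n)=\lfloor n/4\rfloor$, $h(n)=n-f(n)-g(n)$ and $t=4$ (all index functions positive for $n>t$; the particular split is just an example) stays within linear bounds, as expected:

```python
# Concrete instance of the recurrence: f(n) = n//2, g(n) = n//4,
# h(n) = n - f(n) - g(n), with t = 4 and a_n = 1 for n <= t.
N = 100_000
a = [1] * (N + 1)
for n in range(5, N + 1):
    f, g = n // 2, n // 4
    a[n] = a[f] + a[g] + a[n - f - g] + 1

print(a[N] / N)   # stays bounded between constants, i.e. a_n = Theta(n)
```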
Exercise 2.75
The recurrence $a_n = a_{f(n)} + a_{g(n)} + 1$ for $n>t$ with $a_n=1$ for $n\le t$, where $f(n)+g(n)=n-h(n)$, describes the number of nodes in the recursion tree of a divide-and-conquer algorithm with a "coarsening effect" (subproblems of size $\le t$ are handled as base cases). We assume that all index functions are positive.
The +1 represents the current node.
The problem is split into two subproblems of size f(n) and g(n).
We are also given the condition $f(n)+g(n)=n-h(n)$. This equation tells us how the "size" of the problem decreases at each step. At any internal node of size $n$, the combined size of its children is $n-h(n)$; that is, at every step of the recursion, the size drops by $h(n)$. Since the recursion starts with $n$ and ends when reaching the "leaves" (blocks of size $\le t$), the sum of the reductions at the internal nodes must be approximately equal to the initial size $n$ (minus the small residuals at the leaves). This can be expressed as
$$\sum_{v\ \text{internal}} h(n_v) = n - \sum_{\ell\ \text{leaf}} n_\ell,$$
where $n_v$ denotes the subproblem size at node $v$.
# Tribonacci numbers, computed iteratively from the initial triple (0, 0, 1).
a, b, c = 0, 0, 1
n = 20
for _ in range(n - 2):
    a, b, c = b, c, a + b + c
# Compare with the asymptotic estimate c0 * t**n, where t is the
# tribonacci constant, the real root of x^3 = x^2 + x + 1.
t = 1.839286755
c0 = 1 / (3 * t**2 - 2 * t - 1)
print(f"{c} ~ {c0 * t**n}")
def is_valid_monomial(s, t, n):
"""
This function checks if a given monomial, represented by two
lists s and t, appears in the expansion of a[n], for n > 1.
The function returns True if the monomial is valid, otherwise False.
"""
assert n > 1
# Insert sentinel values to avoid index errors.
s.append(0)
t.append(0)
i = len(s) - 1
j = len(t) - 1
s.sort()
t.sort()
while n > 1:
# Simple reduction rules:
# 1. If the largest element is in s,
# then the sub-monomial comes from the previous iteration.
# 2. If the largest element is in t,
# then the sub-monomial comes from two iterations back.
# The premise is that the current monomial is valid
# iff built from a valid sub-monomial. Easily provable by induction.
if s[i] > t[j]:
if s[i] != n - 1:
return False
i -= 1
n -= 1
else:
if t[j] != n - 2:
return False
j -= 1
n -= 2
return n > 0 and i == j == 0
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
def a(a0):
x = a0
for _ in range(6):
x = 2 * x * sqrt(1 - x * x)
return x
def plot_a(num_points=1000, save_path="a_plot.png",
label_fontsize=18, tick_fontsize=14, title_fontsize=20):
xs = np.linspace(0.0, 1.0, num_points)
ys = [a(float(x)) for x in xs]
plt.figure(figsize=(8, 5))
plt.plot(xs, ys, marker='o', linestyle='-', markersize=4)
plt.grid(True)
plt.xlabel(r"$a_0$", fontsize=label_fontsize)
plt.ylabel(r"$a_6$", fontsize=label_fontsize)
plt.title(r"Values of $a_6$ for $a_0$ on the interval $[0,1]$",
fontsize=title_fontsize)
plt.xticks(fontsize=tick_fontsize)
plt.yticks(fontsize=tick_fontsize)
plt.tight_layout()
plt.savefig(save_path, dpi=150)
plt.show()
if __name__ == "__main__":
plot_a()
a = 1
for n in range(1, 11):
a = 2/(n+a)
print(f"a({n})={a}, â({n})={2/n}")