We have a∣b⟹b=pa and b∣c⟹c=qb for integers p and q. Therefore, c=(pq)a⟹a∣c.
Exercise 31.1-4
The only divisors of p are 1 and p. Since 0<k<p, we have p∤k, so the only common divisor of k and p is 1. Therefore, gcd(k,p)=1.
Exercise 31.1-5
We start from the fact that gcd(a,n)=1, so we get
1 = ax + ny	(by Theorem 31.2)
b = abx + nby	(after multiplying both sides by b)
b = nkx + nby	(n∣ab ⟹ ab = kn)
b = n(kx + by)
∴ n∣b
Exercise 31.1-6
Lemma 1
For any integers a_1, a_2, …, a_n, and p, if gcd(a_i, p) = 1 for all i = 1, 2, …, n, then gcd(∏_{i=1}^{n} a_i, p) = 1.
Proof We prove the lemma by induction on n. The base cases are when n=1∨n=2 and both trivially hold (see Theorem 31.6). For n>2 we have
gcd(a_i, p) = 1 for all i = 1, 2, …, n
⟹ gcd(∏_{i=1}^{n−1} a_i, p) = 1 ∧ gcd(a_n, p) = 1	(by the inductive hypothesis)
⟹ gcd(∏_{i=1}^{n} a_i, p) = 1	(by Theorem 31.6)
According to the principles of mathematical induction, we can conclude that the lemma holds for all n≥1. Q.E.D.
The binomial coefficient p!/(k!(p−k)!) is a positive integer, as it represents the number of ways to select k items out of p. Based on Exercise 31.1-4 and Lemma 1, for 0 < k < p we have gcd(k!, p) = 1 and gcd((p−k)!, p) = 1. Since k!(p−k)! divides p! = p·(p−1)! and is relatively prime to p, it must divide (p−1)!. Therefore, p!/(k!(p−k)!) = ((p−1)!/(k!(p−k)!))·p, which implies p divides the binomial coefficient. Thus, in a binomial expansion all terms except the first and last one are divisible by p, which concludes the proof.
Exercise 31.1-7
We successively apply the Division theorem, so we have
x mod a = (x mod b) mod a	(by the first part)
        = (y mod b) mod a	(since x ≡ y (mod b))
        = y mod a	(by the first part)
Exercise 31.1-8
Assume that multiplication of two β-bit integers requires O(β²) time and an equality test of two β-bit integers requires O(β) time. The algorithm below only demonstrates the existence of a solution in time polynomial in β without claiming anything about its asymptotic optimality. As a matter of fact, it is far from optimal.
The smallest candidate base is a = 2 and n < 2^β ⟹ k ∈ [2, β). Therefore, we iterate over all possible exponents k in increasing order and check whether the kth root of n is an integer.
We can leverage binary search over the interval [2, n) to find this root. We iteratively raise each candidate a to the kth power by multiplying the current product by a (initially setting the product to 1). In each iteration, we also check whether this product exceeds n, so that we stop as soon as possible. This stage demands O(β)·(O(β²) + O(β)) = O(β³) time. In case of a match, we finish the search; otherwise, we recurse into the lower or upper subinterval depending on whether a^k > n or not, respectively. If the subinterval becomes empty, then we declare an unsuccessful attempt to find the root for the given exponent.
All in all, we can determine whether a given β-bit integer n is a nontrivial power in O(β5) time.
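The two stages above can be sketched in Python 3; the helper names kth_root_via_binary_search and is_nontrivial_power are mine, and the early-exit power loop mirrors the O(β³) stage described above.

```python
def kth_root_via_binary_search(n, k):
    """Return a if a**k == n for some integer a >= 2, else None."""
    lo, hi = 2, n  # search the interval [2, n)
    while lo < hi:
        mid = (lo + hi) // 2
        # Raise mid to the k-th power, bailing out once the product exceeds n.
        power = 1
        for _ in range(k):
            power *= mid
            if power > n:
                break
        if power == n:
            return mid
        if power > n:
            hi = mid          # recurse into the lower subinterval
        else:
            lo = mid + 1      # recurse into the upper subinterval
    return None

def is_nontrivial_power(n):
    """Check whether n = a**k for some integers a >= 2 and k >= 2."""
    k = 2
    while 2 ** k <= n:        # the smallest base is 2, so k < beta
        if kth_root_via_binary_search(n, k) is not None:
            return True
        k += 1
    return False
```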
Exercise 31.1-9
Proof of 31.6
By definition, gcd(a,b) is the largest element in the set D(a,b) of the common divisors of a and b. Analogously, gcd(b,a) is the largest element in the set D(b,a) of the common divisors of b and a. We know that D(a,b)=D(b,a) and there cannot be two different largest elements in a set. Therefore, gcd(a,b)=gcd(b,a).
Proof of 31.7
d1=gcd(a,b)⟹(d1∣a∧d1∣b), but this also means that d1∣(−a), hence d1≤d2=gcd(−a,b). With similar reasoning we can conclude that d2≤d1, therefore d1=d2.
Proof of 31.9
By property (31.8) we have that gcd(a,0)=gcd(∣a∣,0). Any common divisor of ∣a∣ and 0 cannot be larger than ∣a∣, and ∣a∣ is itself a common divisor, since any integer divides 0. Therefore, gcd(a,0)=∣a∣.
Proof of 31.10
By property (31.8) we have that gcd(a,ka)=gcd(∣a∣,∣ka∣)=gcd(∣a∣,∣k∣∣a∣). We have the following cases:
If a=0 then gcd(a,ka)=gcd(0,0)=0=∣a∣.
If k=0 then gcd(a,0)=∣a∣ by property (31.9).
Let a≠0 ∧ k≠0. We have ∣a∣ ≤ ∣k∣∣a∣, since ∣k∣ ≥ 1. Furthermore, a≠0 ⟹ ∣a∣ ≥ 1. Because ∣a∣ divides both ∣a∣ and ∣k∣∣a∣, and no common divisor can exceed min{∣a∣, ∣k∣∣a∣} = ∣a∣, we get gcd(∣a∣, ∣k∣∣a∣) = ∣a∣.
Exercise 31.1-10
Let d1=gcd(a,gcd(b,c)) and d2=gcd(gcd(a,b),c). We prove the statement from the book by showing that d1∣d2 and d2∣d1, thus by equation (31.5) they must be equal, as both of them are nonnegative.
We know that d1∣a∧d1∣gcd(b,c), which entails that d1∣b∧d1∣c. The fact that d1 divides both a and b implies that it also divides gcd(a,b) by Corollary 31.3. Therefore, d1∣d2 by applying again Corollary 31.3.
Using the same reasoning we can conclude that d2∣d1, which finishes the proof about associativity of the gcd operator.
Exercise 31.1-11
You can read the proof here that uses only facts mentioned in the book.
Exercise 31.1-12
The O(β2) time algorithm shown here also handles the edge case when a divisor is larger than a number to be divided.
Exercise 31.1-13
We follow the hint from the book. The main idea is to split a β-bit binary number and obtain the top and bottom halves of the result with separate recursions. The divide step must enable an easy assembly of these partial results. Since we are converting into a decimal representation it makes sense to split according to powers of 10. The key insight is to precompute these powers (in binary) to facilitate efficient splitting.
Precomputing powers of 10
Precomputation is a powerful technique in algorithm design. The steps to accomplish this are laid out below:
1
Configure parameters
Calculate the total number of decimal digits D = ⌈β·log_10 2⌉, set the recursion depth L = ⌈lg D⌉, and create 1-indexed lookup tables S (stores split points) and P (stores powers of 10) as static arrays of size L.
2
Fill the lookup tables
For each level i=1,2,…,L:
Determine the split point S_i = ⌈D/2^i⌉ (half the digits at level i).
Compute P_i = 10^{S_i} using exponentiation by squaring.
Each P_i has Θ(β/2^i) bits, so computing it demands O(M(β/2^i)·lg β) time. The total time is ∑_{i=1}^{L} O(M(β/2^i)·lg β) = O(∑_{i=1}^{L} M(β/2^i)·lg β) = O(M(β)·lg β), using the fact that M(β) = Ω(β) and assuming M satisfies the properties of standard multiplication algorithms: it is non-decreasing, and the sum of M over geometrically decreasing sizes is dominated by M(β) (a sort of weak regularity).
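The steps above can be sketched in Python 3, assuming the table names S and P from the text (a dummy slot 0 keeps them 1-indexed); Python's built-in ** stands in for exponentiation by squaring.

```python
import math

def precompute(beta):
    """Precompute the split points S and powers of 10 P for beta-bit inputs."""
    D = math.ceil(beta * math.log10(2))    # total number of decimal digits
    L = max(1, math.ceil(math.log2(D)))    # recursion depth
    S, P = [None], [None]                  # dummy slot 0 for 1-indexing
    for i in range(1, L + 1):
        S.append(-(-D // 2 ** i))          # ceil(D / 2^i), the split point
        P.append(10 ** S[i])               # 10^{S_i}
    return D, L, S, P
```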
Recursive conversion function
Assume that handling the decimal representation does not influence the asymptotic behavior of our main routine. For example, concatenating two decimal representations is expected to run in O(1) time.
1
Function signature
Our function has the signature Convert(n,d,i), where
n is the number to convert
d is the upper bound on the number of decimal digits of n
i is the current recursion level
The function returns a decimal representation of n. It should be initially called with Convert(N,D,1).
2
Base case
If d=1 then return the single digit representation of n.
3
Recursive case
Compute the top half t = ⌊n/P_i⌋ and the bottom half b = n mod P_i.
Recursively obtain the decimal representation of the top half r_top = Convert(t, d−S_i, i+1).
Recursively obtain the decimal representation of the bottom half r_bottom = Convert(b, S_i, i+1).
Pad r_bottom with leading zeros to length S_i.
Produce a combined result r=rtop⋈rbottom by concatenating the two halves.
Remove leading zeros from r, if any.
Return r.
The recurrence formula is T(β)=2T(β/2)+M(β). We have two variations on the recurrence using the fact that M(β)=Ω(β):
If M(β) = Θ(β lg β) then by the Master theorem T(β) = Θ(β lg²β) = Θ(M(β)·lg β).
If M(β) = Ω(β^{1+ε}) for some ε > 0 then by the Master theorem T(β) = Θ(M(β)), assuming M(β) satisfies the regularity condition (most standard multiplication/division algorithms do).
We can conclude that T(β)=O(M(β)lgβ) taking into account the precomputation and conversion times.
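The whole conversion can be sketched compactly in Python 3, with strings standing in for decimal representations; convert and rec are my names, and the tables S and P are rebuilt inline so the sketch is self-contained.

```python
import math

def convert(n, beta):
    """Divide-and-conquer binary-to-decimal conversion of a beta-bit n."""
    D = math.ceil(beta * math.log10(2))    # upper bound on decimal digits
    L = max(1, math.ceil(math.log2(D)))    # recursion depth
    S = [None] + [-(-D // 2 ** i) for i in range(1, L + 1)]   # split points
    P = [None] + [10 ** s for s in S[1:]]                     # powers of 10

    def rec(n, d, i):
        if d <= 1:
            return str(n)                  # base case: a single decimal digit
        t, b = divmod(n, P[i])             # top and bottom halves
        r_top = rec(t, d - S[i], i + 1)
        r_bottom = rec(b, S[i], i + 1).rjust(S[i], "0")  # pad to length S_i
        r = (r_top + r_bottom).lstrip("0") # remove leading zeros, if any
        return r or "0"

    return rec(n, D, 1)
```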
Exercise 31.2-1
You can read a very nice proof in the attached PDF document. It was retrieved in July 2025 from this link. There are two remarks about external references in the content:
Proposition 1(iv) is about the rule (a≠0 ∧ d∣a) ⟹ ∣d∣ ≤ ∣a∣.
In essence, you can also use equation (31.5), in the context of the paper, instead of the previously mentioned proposition.
Exercise 31.2-2
We create a table, similar to Figure 31.1 from the book, by running the next Python 3 script.
rows = []

def add_row(a, b, d, x, y):
    rows.append(f"{a}\t{b}\t{a // b if b > 0 else '-'}\t{d}\t{x}\t{y}")

def print_rows():
    for row in reversed(rows):
        print(row)

def extended_euclid(a, b):
    if b == 0:
        add_row(a, b, a, 1, 0)
        return (a, 1, 0)
    (d, x, y) = extended_euclid(b, a % b)
    (x, y) = (y, x - (a // b) * y)
    add_row(a, b, d, x, y)
    return (d, x, y)

extended_euclid(899, 493)
print("a\tb\t⌊a/b⌋\td\tx\ty\n")
print_rows()
Observe the usage of the list rows to reverse the output shown below.
a b ⌊a/b⌋ d x y
899 493 1 29 -6 11
493 406 1 29 5 -6
406 87 4 29 -1 5
87 58 1 29 1 -1
58 29 2 29 0 1
29 0 - 29 1 0
Exercise 31.2-3
Let d1=gcd(a,n) and d2=gcd(a+kn,n). We proceed by showing that d1∣d2 and d2∣d1, thus according to equation (31.5) they are equal.
We know that d1∣a and d1∣n, so by equation (31.4) we have d1∣a+kn. Based on Corollary 31.3 we can conclude that d1∣d2.
We know that d2∣n, so n=pd2 for some integer p. Furthermore, d2∣a+kn, thus a+kn=qd2 for some integer q. Therefore, a=qd2−kn=qd2−kpd2=(q−kp)d2 that implies d2∣a. Based on Corollary 31.3 we can conclude that d2∣d1.
a ≡ 1 (mod n) ⟹ a = 1 + kn, so gcd(1 + kn, n) = gcd(1, n) by equation (31.18). Consequently, we have gcd(a, n) = 1.
Exercise 31.2-4
Below is the iterative version of Euclid's algorithm in Python 3.
def iterative_euclid(a, b):
    while b > 0:
        a, b = b, a % b
    return a
Exercise 31.2-5
According to Lamé's theorem, if k = 2 + log_φ b and b < F_{k+1} ≈ φ^{k+1}/√5 = φ³b/√5, which definitely holds, then the call Euclid(a,b) will make fewer than k recursive calls.
Let d = gcd(a,b), so a = d·(a/d) and b = d·(b/d). According to Corollary 31.4 we have gcd(a,b) = gcd(d·(a/d), d·(b/d)) = d·gcd(a/d, b/d). Therefore, if k = 2 + log_φ(b/d) and b/d < F_{k+1} ⟺ b < d·F_{k+1}, which again definitely holds, then the call Euclid(a,b) will make fewer than k recursive calls.
Exercise 31.2-6
Assume that k > 2, otherwise we would immediately reach the base case. F_{k+1} = F_k + F_{k−1}, hence by Exercise 31.1-1 we have that F_{k−1} = F_{k+1} mod F_k. According to the properties of Fibonacci numbers, F_{k+1} > F_k for all k ≥ 2. Therefore, the call Extended-Euclid(F_{k+1}, F_k) recurses into Extended-Euclid(F_k, F_{k−1}), and this process continues until hitting the base case right after the call Extended-Euclid(F_3, F_2). The final call Extended-Euclid(F_2, 0) returns the 3-tuple (1, 1, 0). Thus, gcd(F_{k+1}, F_k) = 1. We now need to find the coefficients x and y.
In general, Extended-Euclid(F_{k+1}, F_k) returns the 3-tuple (1, (−1)^{k+1}·F_{k−2}, (−1)^k·F_{k−1}). We prove this by induction on k. The base case k = 2 obviously holds.
For k > 2 the quotient q = ⌊F_{k+1}/F_k⌋ = ⌊(F_k + F_{k−1})/F_k⌋ = 1. Thus, (d, x, y) = (d′, y′, x′ − y′) at all levels. By the inductive hypothesis the call Extended-Euclid(F_k, F_{k−1}) returns the 3-tuple (1, (−1)^k·F_{k−3}, (−1)^{k−1}·F_{k−2}), thus Extended-Euclid(F_{k+1}, F_k) returns
(1, (−1)^{k−1}·F_{k−2}, (−1)^k·F_{k−3} − (−1)^{k−1}·F_{k−2}) = (1, (−1)^{k+1}·F_{k−2}, (−1)^k·(F_{k−3} + F_{k−2})) = (1, (−1)^{k+1}·F_{k−2}, (−1)^k·F_{k−1}).
Exercise 31.2-7
We proceed by induction on n. The base cases are n=1 ∨ n=2. If n=1 then we have nothing to permute, and for n=2 we can apply equation (31.6) (see Exercise 31.1-9). For n>2 assume that the inductive hypothesis holds for n arguments. Every permutation consists of individual pairwise swaps of elements. An informal indirect proof is the ability of any sorting algorithm to find the matching permutation that results in a proper ordering of elements; this is attained in a step-by-step fashion, swapping one pair of elements on different positions at a time. Thus, let us consider elements a_i and a_j where 0 ≤ i < j ≤ n. We have two cases:
If i>0 then it immediately follows by the inductive hypothesis that gcd(a0,a1,…,ai,…,aj,…,an)=gcd(a0,a1,…,aj,…,ai,…,an).
If i=0 then we have the following sequence of equivalent transformations:
gcd(a_i, a_1, …, a_j, …, a_n)
= gcd(a_i, gcd(a_1, …, a_j, …, a_n))
= gcd(a_i, gcd(a_j, a_1, …, a_n))	(by the inductive hypothesis)
= gcd(gcd(a_j, a_1, …, a_n), a_i)	(by equation (31.6))
= gcd(a_j, gcd(a_1, …, a_n, a_i))	(by the associativity rule)
= gcd(a_j, gcd(a_1, …, a_i, …, a_n))	(by the inductive hypothesis)
= gcd(a_j, a_1, …, a_i, …, a_n)
We can incrementally find the coefficients x_0, x_1, …, x_n by using the extended Euclid's algorithm from the book. To efficiently update the coefficients, perform two passes over the array X[0:n]. Set X[n] = 1. In the first pass, store X[i] = x_i from right to left and keep the multiplicative factors in a separate array Y[0:n], with Y[0] = 1. In the second pass, iterate from left to right and multiply each X[i] by the cumulative factor ∏_{j=0}^{i} Y[j].
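The two-pass scheme can be sketched in Python 3; multi_gcd_coefficients is my name, with the local coefficients u_i stored in X and the factors v_i in Y, and extended_euclid is the book's algorithm repeated here for self-containment.

```python
def extended_euclid(a, b):
    if b == 0:
        return (a, 1, 0)
    d, x, y = extended_euclid(b, a % b)
    return (d, y, x - (a // b) * y)

def multi_gcd_coefficients(a):
    """Return (d, X) with d = gcd(a[0], ..., a[n]) = sum of X[i] * a[i].

    First pass (right to left): gcd(a[i], g) = u_i * a[i] + v_i * g,
    storing u_i in X[i] and v_i in Y[i + 1].
    Second pass (left to right): scale X[i] by the cumulative factor.
    """
    n = len(a) - 1
    X = [0] * (n + 1)
    Y = [1] * (n + 1)       # Y[0] = 1
    X[n] = 1
    g = a[n]
    for i in range(n - 1, -1, -1):
        g, X[i], Y[i + 1] = extended_euclid(a[i], g)
    factor = 1
    for i in range(n + 1):
        factor *= Y[i]      # cumulative product of the factors
        X[i] *= factor
    return g, X
```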
In general, computing gcd(a,b) takes O(lgb) recursive calls, thus divisions. Even if b>0 and b∣a it requires one division to hit the base case with a remainder of zero. Therefore, we definitely need at least n divisions. If we also take into account the cost of finding coefficients, then we need to double this count.
Let M=max{a0,a1,…,an}. The gcd function monotonically decreases as we add more nonzero arguments, i.e., gcd(an−1,an)≥gcd(an−2,an−1,an)≥⋯≥gcd(a0,a1,…,an). This follows from the min function also being monotonically decreasing as we add more positive arguments and gcd(a0,a1,…,an)≤min{∣a0∣,∣a1∣,…,∣an∣}. The worst-case scenario is when gcd turns out to be 1. Overall, there can be no more than O(lgM) divisions for a gcd to drop from a maximal possible value M down to 1.
The total number of divisions performed by our algorithm is O(n + lg(max{a_0, a_1, …, a_n})).
Exercise 31.2-8
The base case is defined as lcm(a,b) = ∣a·b∣/gcd(a,b). The generalization to an arbitrary number of arguments is the same as with the generic gcd introduced in Exercise 31.2-7. Therefore, we have
lcm(a1,a2,…,an)=lcm(a1,lcm(a2,a3,…,an))
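A minimal Python 3 sketch of this right-fold, using math.gcd for the base case.

```python
from functools import reduce
from math import gcd

def lcm2(a, b):
    """Base case: lcm(a, b) = |a * b| / gcd(a, b)."""
    return abs(a * b) // gcd(a, b)

def lcm(*args):
    """lcm(a1, ..., an) = lcm(a1, lcm(a2, ..., an)), computed as a fold."""
    return reduce(lcm2, args)
```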
Exercise 31.2-9
We first describe the general case and demonstrate it using the example with k=4 from the book. WLOG assume that indices are 0-based.
1
Constructing the derived pairs
Write each index i∈{0,1,…,k−1} into unique binary form using exactly r=⌈lgk⌉ digits, padding with leading zeros, as needed.
For each bit position 0≤j<r partition the set of input numbers into two groups:
A_j = {n_i : the jth bit from the right in the binary form of i is 1}
B_j = {n_i : the jth bit from the right in the binary form of i is 0}
For each j produce a_j = ∏_{n∈A_j} n and b_j = ∏_{n∈B_j} n, thus forming r pairs (a_j, b_j).
2
Proving correctness
If the set of input numbers is pairwise relatively prime, then gcd(n_s, n_t) = 1 for s ≠ t. Fix any j. Suppose for the sake of contradiction that some prime p ∣ a_j and p ∣ b_j. Consequently, there are some distinct numbers n_s ∈ A_j and n_t ∈ B_j such that p ∣ n_s and p ∣ n_t. But that contradicts the initial assumption about our set of input numbers. Thus, no such prime exists, and gcd(a_j, b_j) = 1 for all j, as our choice of j was arbitrary.
If the derived pairs are relatively prime, then gcd(a_j, b_j) = 1 for all pairs. Take any two distinct indices s and t that differ at bit position j. WLOG assume that s has 1 and t has 0 there. Therefore, n_s ∈ A_j and n_t ∈ B_j. Suppose for the sake of contradiction that some prime p ∣ n_s and p ∣ n_t. But we know that n_s ∣ a_j and n_t ∣ b_j, so this implies that p ∣ a_j and p ∣ b_j, which contradicts the initial assumption about the derived pairs being relatively prime. Therefore, the set of input numbers is pairwise relatively prime, since our choice of indices s and t was arbitrary.
3
Explaining the special case
For k=4 we see that A0={n1,n3}, A1={n2,n3}, B0={n0,n2} and B1={n0,n1}. Therefore, the derived pairs are (n1n3,n0n2) and (n2n3,n0n1). They all satisfy the properties of being pairwise relatively prime. Notice that we are using 0-based indexing, so just shift indices by one to map them to those from the book.
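The construction of step 1 can be sketched in Python 3; derived_pairs is my name for it, and the sample inputs are assumed pairwise relatively prime numbers, not from the book.

```python
from math import ceil, gcd, log2, prod

def derived_pairs(numbers):
    """Build the r = ceil(lg k) pairs (a_j, b_j) from the construction above."""
    k = len(numbers)
    r = max(1, ceil(log2(k)))
    pairs = []
    for j in range(r):
        # Partition by the j-th bit (from the right) of each 0-based index i.
        a_j = prod(n for i, n in enumerate(numbers) if (i >> j) & 1)
        b_j = prod(n for i, n in enumerate(numbers) if not (i >> j) & 1)
        pairs.append((a_j, b_j))
    return pairs
```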
Exercise 31.3-1
+4	0	1	2	3
0	0	1	2	3
1	1	2	3	0
2	2	3	0	1
3	3	0	1	2

·5	1	2	3	4
1	1	2	3	4
2	2	4	1	3
3	3	1	4	2
4	4	3	2	1
There could be multiple one-to-one correspondences showing that these groups are isomorphic. They all share one common trait: neutral elements are always paired.
f(0)=1,f(1)=3,f(2)=4,f(3)=2
f(0)=1,f(1)=2,f(2)=4,f(3)=3
Exercise 31.3-2
Subgroups of Z9 are {0}, {0,3,6} and Z9 itself. Subgroups of Z13∗ are {1}, {1,12}, {1,3,9}, {1,5,8,12}, {1,3,4,9,10,12} and Z13∗ itself.
Exercise 31.3-3
Associativity is "inherited" from the group operation, as it doesn't depend on its arguments.
Since S′ is nonempty, there exists an element a ∈ S′. As S′ ⊆ S is finite, a has a finite order t = ord(a). By closure of S′, all powers of a must lie in S′. Specifically, a^t = e ∈ S′. Similarly, the inverse of a is a^{t−1}, since a·a^{t−1} = a^t = e, and a^{−1} = a^{t−1} ∈ S′. We have not assumed anything special about a, hence our conclusion applies to all elements of S′. This finishes the proof.
Exercise 31.3-4
We simply apply equation (31.21) to get
ϕ(p^e) = p^e(1 − 1/p) = p^e·((p−1)/p) = p^{e−1}(p−1)
Exercise 31.3-5
We proceed by showing that f_a is a bijection. Any bijection from a set to itself is a permutation of that set. Since the domain and codomain of f_a have the same cardinality, it is enough to show that f_a is a surjective function.
Select any b∈Zn∗. According to Theorem 31.13 the system Zn∗ is a finite abelian group. Consequently, a−1∈Zn∗ and we have
f_a(a^{−1}b) = a(a^{−1}b) mod n = (aa^{−1})b mod n = b mod n
Therefore, the preimage of b under fa contains the element a−1b=ba−1(modn) (equality holds due to commutativity of Zn∗). This concludes the proof.
Exercise 31.4-1
We have a = 35, n = 50 and b = 10. The extended version of Euclid's algorithm returns (5, 3, −2), so d = 5 and x_0 = 3·(10/5) = 6. The step size is 50/5 = 10, thus all solutions are {6, 16, 26, 36, 46}.
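The procedure from the book can be sketched in Python 3 (the function names are mine); it returns all d solutions in increasing order.

```python
def extended_euclid(a, b):
    if b == 0:
        return (a, 1, 0)
    d, x, y = extended_euclid(b, a % b)
    return (d, y, x - (a // b) * y)

def modular_linear_equation_solver(a, b, n):
    """Return all solutions of ax ≡ b (mod n), or [] if none exist."""
    d, x, _ = extended_euclid(a, n)
    if b % d != 0:
        return []                        # no solutions
    x0 = (x * (b // d)) % n              # smallest solution
    return [(x0 + i * (n // d)) % n for i in range(d)]
```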
Exercise 31.4-2
Based on Corollary 31.26 there is a multiplicative inverse a−1modn, so multiplying both sides of ax=ay(modn) by a−1 we get a−1ax=a−1ay(modn), which implies that x=y(modn). If gcd(a,n)>1, then the previous conclusion is not valid. One counterexample is 2x=2y(mod4), where x=1 and y=3 is also a valid solution.
Exercise 31.4-3
Yes, it will work, forcing the procedure to emit solutions in sorted order. Among those d distinct solutions, which are n/d positions apart from each other, one will always belong to the interval [0, n/d). Let it be x_i = x′(b/d) + i(n/d) (mod n) for some i ∈ [0, d). Exercise 31.1-7 implies that x_i ≡ x′(b/d) (mod n/d), hence x_0 = x_i mod (n/d) is the smallest starting value.
Exercise 31.4-4
This exercise is about the factor theorem applied modulo p. I recommend reading the proof based on Euclidean division of polynomials, which is aligned with the narrative from the book.
For deriving broader conclusions from the factor theorem, it is crucial for p to be prime. For example, if p = 8 then the polynomial f(x) = x² + 7 (mod p) has 4 roots {1, 3, 5, 7} modulo p instead of at most t = 2, where t is the degree of f.
We proceed by induction on degree t. The base case t=1 follows from Corollary 31.25. For t>1 we have f(x)=(x−a)g(x), so f has at most 1+(t−1)=t distinct roots modulo p (after applying the inductive hypothesis on g(x)). This concludes the proof for all t≥1.
Exercise 31.5-1
We follow the process outlined in Theorem 31.27. We have x_1 = 4, n_1 = 5, x_2 = 5 and n_2 = 11. We calculate m_1 = 11, m_1^{−1} ≡ 1 (mod 5), c_1 = 11·1 = 11, m_2 = 5, m_2^{−1} ≡ 9 (mod 11) and c_2 = 5·9 = 45. Now we have all components to find x = 11·4 + 45·5 ≡ 49 (mod 55). Therefore, all solutions are of the form 49 + 55k for arbitrary integers k.
Exercise 31.5-2
We follow the steps outlined in Theorem 31.27. We have m_1 = 56, m_1^{−1} ≡ 5 (mod 9), c_1 = 280, m_2 = 63, m_2^{−1} ≡ 7 (mod 8), c_2 = 441, m_3 = 72, m_3^{−1} ≡ 4 (mod 7) and c_3 = 288. We can now calculate x = 1·280 + 2·441 + 3·288 ≡ 10 (mod 504). Therefore, all solutions are of the form 10 + 504k for arbitrary integers k.
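Both calculations follow the same constructive recipe from Theorem 31.27, which can be sketched in Python 3 (crt is my name; Python's pow(m, -1, n) supplies the modular inverses):

```python
def crt(residues, moduli):
    """Solve x ≡ a_i (mod n_i) for pairwise relatively prime moduli."""
    n = 1
    for n_i in moduli:
        n *= n_i
    x = 0
    for a_i, n_i in zip(residues, moduli):
        m_i = n // n_i
        c_i = m_i * pow(m_i, -1, n_i)    # c_i = m_i * (m_i^{-1} mod n_i)
        x = (x + a_i * c_i) % n
    return x
```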
Exercise 31.5-3
Let x = a^{−1} mod n. We simply employ equation (31.30) and Corollary 31.29 to get that ax ≡ 1 (mod n) has the component-wise modular representation (1, 1, …, 1), where every component satisfies a_i x_i ≡ 1 (mod n_i). Therefore, x_i = a_i^{−1} mod n_i. This concludes the proof.
Exercise 31.5-4
Let n=∏i=1kni, where all terms of this product are pairwise relatively prime. According to Theorem 31.27 there is a one-to-one correspondence between a root of the equation f(x)=0(modn) and its component-wise modular representation. Based on Corollary 31.29 we also have that f(x)=0(modn) if and only if f(x)=0(modni) for i=1,2,…,k. Furthermore, by the basic rules of modular arithmetic f(x)=0(modni)⟺f(xmodni)=0(modni).
Let a k-tuple (a1,a2,…,ak)∈Zn1×Zn2×⋯×Znk be one solution of the system of equations f(ai)=0(modni) for i=1,2,…,k. Consequently, a∈Zn computed from inputs (a1,a2,…,ak), by following the steps from Theorem 31.27, represents a zero of f modulo n. Therefore, the number of such distinct k-tuples equals the number of roots of f modulo n, and vice versa. This concludes the proof.
Exercise 31.6-1
Element	1	2	3	4	5	6	7	8	9	10
Order	1	10	5	5	5	10	10	10	5	2
The smallest primitive root is 2 and the indices ind11,2(x) of x∈Z11∗ are shown below.
Element	1	2	3	4	5	6	7	8	9	10
Index	0	1	8	2	4	9	7	3	6	5
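Both tables can be reproduced with a short Python 3 snippet; order is my helper name, and the index table exploits the fact that 2 is a primitive root modulo 11.

```python
def order(a, n):
    """Multiplicative order of a modulo n (smallest t > 0 with a^t ≡ 1)."""
    t, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        t += 1
    return t

orders = {a: order(a, 11) for a in range(1, 11)}
indices = {pow(2, i, 11): i for i in range(10)}  # ind_{11,2}(x) for x in Z_11^*
```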
Exercise 31.6-2
x² ≡ 1 (mod p^e) ⟺ p^e ∣ x² − 1 ⟺ p^e ∣ (x−1)(x+1). Since p is an odd prime, it cannot divide both x−1 and x+1, because their difference is only 2. Hence p^e must divide one of the two factors entirely, i.e., x ≡ 1 (mod p^e) or x ≡ −1 (mod p^e), so 1 has only two square roots modulo p^e.
Exercise 31.6-3
Replace lines 6 and 7 with the following two lines:
else d = Modular-Exponentiation(a,(b-1)/2, n) // b is odd
return (a*d mod n)*d mod n
The return statement illustrates how to reduce the magnitude of intermediary results.
Exercise 31.6-4
Below is the iterative modular repeated squaring algorithm in Python 3.
def modular_exponentiation(a, b, n):
    result = 1
    d = a % n
    while b > 0:
        if b % 2 == 1:
            result = (result * d) % n
        d = (d * d) % n
        b >>= 1
    return result
Exercise 31.6-5
By Euler's theorem we know that a^{ϕ(n)} ≡ 1 (mod n) ⟺ a·a^{ϕ(n)−1} ≡ 1 (mod n). Therefore, we can calculate the inverse by calling Modular-Exponentiation(a, ϕ(n)−1, n).
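A small Python 3 illustration, with a brute-force phi standing in for a known ϕ(n); it assumes gcd(a, n) = 1, and the names are mine.

```python
from math import gcd

def phi(n):
    """Euler's phi by direct count (fine for a small demonstration)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def inverse(a, n):
    """a^{-1} mod n via a^{phi(n) - 1} mod n; requires gcd(a, n) = 1."""
    return pow(a, phi(n) - 1, n)
```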
Exercise 31.7-1
ϕ(n) = (p−1)(q−1) = 280, so d ≡ 187 (mod 280). The encryption of the message M = 100, by equation (31.37), is P(100) = 254. You can test that everything is proper by calling Modular-Exponentiation(254, 187, 319) (see Exercise 31.6-4) and getting back the original message 100.
Exercise 31.7-2
Let β denote the number of bits in n and assume that p and q are large primes. As e = 3, we have 3d ≡ 1 (mod ϕ(n)), which is equivalent to 3d − 1 = k(p−1)(q−1) for some integer k. Since 0 < d < ϕ(n) we know that k = 1 ∨ k = 2. We can rewrite the previous expression as 3d − 1 = k(n − (p+q) + 1). Evidently, 3d − 1 < n ⟹ k = 1 and 3d − 1 > n ⟹ k = 2.
We can find x = p + q = n − (3d−1)/k + 1. Plugging q = n/p into the expression for x we get x = p + n/p ⟹ n = p(x − p). The last equation can be solved for p using the quadratic formula. Obviously, once we have p we can trivially find q.
All listed steps can be executed in time polynomial in β, hence the whole process is also doable in time polynomial in β.
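The recovery procedure can be sketched in Python 3 for small numbers; factor_given_d is my name, and the toy key (n = 187, d = 107, e = 3) is an assumed example, not from the book.

```python
from math import isqrt

def factor_given_d(n, d, e=3):
    """Recover p and q from the private exponent, per the steps above."""
    for k in (1, 2):                      # e = 3 implies k is 1 or 2
        s = n - (e * d - 1) // k + 1      # candidate for s = p + q
        disc = s * s - 4 * n              # discriminant of z^2 - s*z + n = 0
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            p = (s + isqrt(disc)) // 2    # quadratic formula
            q = (s - isqrt(disc)) // 2
            if q > 1 and p * q == n:
                return p, q
    return None
```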
Suppose an attacker possesses a ciphertext CA encrypted with PA. She/he executes the following procedure:
Set C=CA and create an empty list L containing temporary messages.
Try to decrypt C. Store the result in M.
If successful, then for each message Mi in L, find its inverse Mi−1 modulo n and update M=MMi−1(modn). At the end, M=MA and the process stops.
Otherwise, generate a new message Mi∈Zn∗, add it to L, and update C=PA(Mi)C.
Goto step 2.
The chance of success at step 2 is 1%. The number of trials follows the geometric distribution with an expected value of 100. Finding inverses modulo n in step 3 can be efficiently carried out using the extended Euclid's algorithm. Almost all generated messages at step 4 will be relatively prime to n, since the chance of stumbling upon one that contains a prime factor of n is practically zero. All in all, an attacker can reveal the original message with high probability.
Exercise 31.8-1
One example is n=7⋅5=35, where 6 is a nontrivial square root of 1 modulo 35, since 62=1(mod35). Such a root always exists for n that satisfies the conditions from this exercise.
Decompose n into a product n1n2, where n1 and n2 are odd numbers greater than 1 that are relatively prime to each other. By Corollary 31.28, there exists an x simultaneously satisfying the equations
x ≡ −1 (mod n_1)
x ≡ 1 (mod n_2)
Corollary 31.29 gives that x ≢ ±1 (mod n). On the other hand, we have
x² ≡ 1 (mod n_1)
x² ≡ 1 (mod n_2)
Corollary 31.29 gives that x² ≡ 1 (mod n). Therefore, x is a nontrivial square root of 1 modulo n.
Exercise 31.8-2
The least common multiple of two positive integers a and b, represented with equations (31.11) and (31.12), respectively, can be calculated similarly to gcd(a,b) by replacing minimums with maximums in equation (31.13). This method, based on prime factorization, can be easily extended to an arbitrary number of arguments (see also Exercise 31.2-8). In principle, lcm(a_1, a_2, …, a_m) = a_1a_2⋯a_m/d, where d contains the prime factors whose exponents compensate for overshooting the maximums, since exponents simply add up when the numbers in the numerator are multiplied. It follows that ϕ(n) is a multiple of λ(n), thus λ(n) ∣ ϕ(n). Another perspective is given here.
Suppose for the sake of contradiction that a Carmichael number n is not "square-free", thus it has a factor p_i^{e_i} where e_i > 1. Consequently, p_i ∣ ϕ(p_i^{e_i}) ⟹ p_i ∣ λ(n), since λ(n) is defined as the least common multiple of these individual Euler's phi functions. Furthermore, p_i ∣ n ⟹ p_i ∤ n − 1, which contradicts the property that λ(n) ∣ n − 1 (see also Exercise 31.1-3). Therefore, n must be "square-free."
Suppose for the sake of contradiction that a Carmichael number n consists of exactly two primes p < q. We have λ(n) = lcm(p−1, q−1) ⟹ q−1 ∣ λ(n). On the other hand, q−1 ∤ n−1, because by the Division theorem n − 1 = pq − 1 = (q−1)p + (p−1), with remainder 0 < p−1 < q−1. But this again contradicts the property that λ(n) ∣ n−1. Therefore, n must consist of at least three primes.
Exercise 31.8-3
By Exercise 31.6-2 we have x2=1(modn)⟺n∣(x−1)(x+1). If x is a nontrivial square root of 1 modulo n, then by Corollary 31.35 n is composite, so 2<x<n−1. It follows that n∤x−1 and n∤x+1, thus prime factors of n are distributed between these two terms of a product. Therefore, gcd(x−1,n) and gcd(x+1,n) are both nontrivial divisors of n.
Problem 31-1
a.
By Corollary 31.4, we have gcd(a,b)=gcd(2(a/2),2(b/2))=2gcd(a/2,b/2).
b.
By equation (31.13), gcd(a,b) doesn't have 2 as a prime factor, since a is odd. Dividing b by 2 only reduces the exponent of 2 in b, bringing it closer to zero. Consequently, gcd(a,b) = gcd(a, b/2).
c.
Let d1=gcd(a,b) and d2=gcd(a−b,b). We proceed by showing that d1∣d2 and d2∣d1, thus according to equation (31.5) they are equal.
We know that d1∣a and d1∣b, so by equation (31.4) we have d1∣a−b. Based on Corollary 31.3 we can conclude that d1∣d2
We know that d2∣a−b and d2∣b, so by equation (31.4) we have d2∣(a−b)+b=a. Based on Corollary 31.3 we can conclude that d2∣d1.
Since a−b is even and b is odd, the division of a−b by 2 is justified according to part (b).
d.
I recommend reading this study of the binary GCD algorithm with insightful C++ based optimizations.
Problem 31-2
In all subproblems below assume that 1<b≤a, so that q≥1.
a.
We do all the arithmetic in binary by processing the bits of a from the most significant to the least, first accumulating the initial O(lg b) bits of a into a value (the current residue r) greater than or equal to b, as those bits would anyhow produce leading zeros of the quotient.
In the analysis of the long division algorithm, the bits of the quotient and the divisor are counted differently because they play distinct roles. Each bit of q is determined in one step of the algorithm. Thus, the number of steps required to compute all bits of q is proportional to its number of bits ⌊lg q⌋ + 1 = O(1 + lg q). This expression is nonzero even if q ≤ 1.
The algorithm still requires at least one step to compare a and b even if q=0 (when a<b). The expression 1+lgq=1+0=1 aims to encompass this case, despite lgq being undefined here.
At each step, the algorithm performs operations (e.g., comparisons or subtractions) involving the divisor b. Since it has approximately O(lgb) bits, these operations require that many bit operations per step. Therefore, the total count is O(1+lgq)×O(lgb)=O((1+lgq)lgb). This complexity reflects that the algorithm's time is dominated by the number of quotient bits (steps) and the size of the divisor (cost per step).
The remainder r gets calculated along the way (see also Exercise 31.1-12).
b.
The reduction requires us to compute the remainder that can be accomplished in time as given in part (a). We know that a=qb+r, where 0≤r<b.
The last line follows from x/(1+x) ≤ 1 and (1+x)/x ≤ 2 for all x ≥ 1. Therefore, for c ≥ 2c′ we have established the required proof.
c.
As shown in part (b), EUCLID takes at most c(μ(a,b)−μ(b,amodb)) bit operations on the first recursive call, at most c(μ(b,amodb)−μ(amodb,bmod(amodb))) on the second, and so on. This sum telescopes, hence we end up with cμ(a,b)−O(1)=O(μ(a,b)).
When applied to two β-bit inputs we get O(μ(a,b))=O((1+β)2)=O(β2).
Problem 31-3
a.
The algorithmic recurrence of the running time is T(n)=T(n−1)+T(n−2)+Θ(1). We use a substitution method to prove that this version cannot do better than being exponential in n. The constant c1 denotes the lower bound in Θ(1).
Assume that T(n)≥cFn for n≥2.
T(n) = T(n−1) + T(n−2) + Θ(1)
≥ cF_{n−1} + cF_{n−2} + c_1	(by the inductive hypothesis)
= cF_n + c_1
≥ cF_n
∴ T(n) = Ω(F_n)
This concludes the proof, as Fibonacci numbers grow exponentially.
b.
Here is the Python 3 script showcasing memoization. The recurrence T(n)=T(n−1)+Θ(1) resolves to T(n)=O(n), since memoization prevents repeated work.
import functools

@functools.cache
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))
It will print 354224848179261915075 in the blink of an eye. Try commenting out the annotation to see the difference.
c.
We can leverage the repeated squaring method, with a small change, that instead of multiplying numbers we will work with small constant sized matrices. Each matrix operation will then also take constant time.
We prove the next statement using induction on n, following the hint from the book.
[[0, 1], [1, 1]]^n = [[F_{n−1}, F_n], [F_n, F_{n+1}]]
The base case n=1 obviously holds. For n>1 we have
[[0, 1], [1, 1]]^n = [[0, 1], [1, 1]]·[[0, 1], [1, 1]]^{n−1}
= [[0, 1], [1, 1]]·[[F_{n−2}, F_{n−1}], [F_{n−1}, F_n]]	(by the inductive hypothesis)
= [[F_{n−1}, F_n], [F_{n−2}+F_{n−1}, F_{n−1}+F_n]]
= [[F_{n−1}, F_n], [F_n, F_{n+1}]] ✓
At the end, we need to read out the matching cell containing Fn.
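The method can be sketched in Python 3 as repeated squaring on 2×2 matrices; mat_mult, mat_pow and fib are my names.

```python
def mat_mult(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

def mat_pow(M, n):
    """Raise a 2x2 matrix to the n-th power by repeated squaring."""
    result = [[1, 0], [0, 1]]    # identity matrix
    while n > 0:
        if n % 2 == 1:
            result = mat_mult(result, M)
        M = mat_mult(M, M)
        n //= 2
    return result

def fib(n):
    """Read F_n out of the matching cell of [[0,1],[1,1]]^n."""
    return mat_pow([[0, 1], [1, 1]], n)[0][1]
```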
d.
Fibonacci numbers have β=Θ(n) binary digits, as explained here.
The bare recursive variant still has exponential running time. Increasing the amount of work at each level cannot invalidate the previously established lower bound.
The memoized version has a new recurrence T(n)=T(n−1)+Θ(n) that solves to T(n)=Θ(n2).
The matrix version employs both addition and multiplication of numbers, hence it has a new recurrence T(n)=T(n/2)+Θ(n2). Case 3 of the Master theorem states that T(n)=Θ(n2).
Problem 31-4
a.
Theorem 31.32 implies that Z_p^* is a cyclic group, so it has a generator g, where the values g^i mod p are distinct for i = 0, 1, …, p−2. Every even index is associated with a quadratic residue, since (g^{i/2})² ≡ g^i (mod p). Thus, the number of quadratic residues is at least (p−1)/2.
Suppose for the sake of contradiction that there is also some odd index i such that x² ≡ g^i (mod p) for some x. We must have x ≡ g^j (mod p) and g^{2j} ≡ g^i (mod p). Theorem 31.33 implies that 2j ≡ i (mod ϕ(p)), where ϕ(p) = p−1. But this is impossible, since p−1 ∤ 2j−i due to the mismatch in parity (p−1 is even while 2j−i is odd). We can conclude that the number of quadratic residues is exactly (p−1)/2.
b.
If a is a quadratic residue modulo p, then there is an x such that x² ≡ a (mod p). By Fermat's theorem, a^{(p−1)/2} ≡ x^{p−1} ≡ 1 (mod p).
Otherwise, g^i ≡ a (mod p) for some odd index i (see part (a)). Let x = g^{(p−1)/2}. Theorem 31.34 implies that x² ≡ 1 (mod p) has only two possible solutions, x = 1 ∨ x = −1. According to Fermat's theorem g^{p−1} = g^0 ≡ 1 (mod p), so it must be that g^{(p−1)/2} ≡ −1 (mod p), as the powers g^i are distinct for i = 0, 1, …, p−2. We have a^{(p−1)/2} ≡ g^{i(p−1)/2} = (g^{(p−1)/2})^i ≡ (−1)^i = −1 (mod p), since i is odd.
We can call modular_exponentiation(a, (p - 1) // 2, p) (see Exercise 31.6-4) and check whether the returned value is 1. If we let β = lg p, then it requires O(β³) bit operations.
c.
By part (b), a^{(p−1)/2} = a^{2k+1} ≡ 1 (mod p), so (a^{k+1})² = a^{2k+2} = a·a^{2k+1} ≡ a (mod p). Therefore, a^{k+1} is a square root of a modulo p. We can again use modular exponentiation to efficiently find it using O(β³) bit operations.
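A one-line Python 3 sketch: with p = 4k + 3 the exponent k + 1 equals (p + 1)/4; sqrt_mod is my name, and the inputs are assumed to be a quadratic residue a and a prime p ≡ 3 (mod 4).

```python
def sqrt_mod(a, p):
    """Square root of a quadratic residue a modulo a prime p ≡ 3 (mod 4).

    With p = 4k + 3 the exponent k + 1 equals (p + 1) // 4.
    """
    assert p % 4 == 3
    return pow(a, (p + 1) // 4, p)
```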
d.
Run the algorithm of part (b) by repeatedly trying random numbers a ∈ [2, p) until a^{(p−1)/2} ≡ −1 (mod p), i.e., until a is not a quadratic residue. Half the elements in Z_p^* are not quadratic residues, thus each experiment can be regarded as a Bernoulli trial with success probability 1/2. The expected number of trials follows the geometric distribution and equals 2. The average number of arithmetic operations is O(β).