
EECS 376: Foundations of Computer Science
University of Michigan, Winter 2018

Section Notes 11

1 Fast Modular Exponentiation

1.1 Modular Arithmetic Reminders

Last week we covered modular arithmetic and learned some tricks that are useful to remember. Notably,

(a · b) mod n = ((a mod n) · (b mod n)) mod n,

and similarly

a^b mod n = ((a mod n)^b) mod n.

In RSA, this is why m^e mod n = ((m mod n)^e) mod n and c^d mod n = ((c mod n)^d) mod n. So we can see how

c^d mod n = (m^e mod n)^d mod n = m^{ed} mod n = m mod n

obeys these rules (the last equality holds because in RSA the exponents are chosen so that ed ≡ 1 (mod φ(n))).
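As a quick sanity check (this snippet is ours, not part of the original notes), the identities can be verified in Python; the small values and the toy RSA parameters n = 33, e = 3, d = 7 are made up for illustration:

```python
# Sanity check of the identities above (values chosen arbitrarily for illustration).
a, b, n = 7, 5, 13
assert (a * b) % n == ((a % n) * (b % n)) % n    # (a·b) mod n = ((a mod n)·(b mod n)) mod n
assert (a ** b) % n == ((a % n) ** b) % n        # a^b mod n = ((a mod n)^b) mod n

# Toy RSA check: n = 33 = 3·11, phi(n) = 20, e = 3, d = 7 (since e·d = 21 ≡ 1 mod 20).
n_rsa, e, d, m = 33, 3, 7, 5
c = pow(m, e, n_rsa)                             # encryption: c = m^e mod n
assert pow(c, d, n_rsa) == m % n_rsa             # decryption: c^d mod n = m^{ed} mod n = m mod n
```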

1.2 Power of 2

Suppose we want to calculate a^b mod n, where b is a power of 2. Notice that

a^b = a^{b/2} · a^{b/2} = (a^{b/2})^2.

Since b is a power of 2, we can keep on pulling 2's out of the exponent:

a^b = a^{2^{log(b)}} = (((a^2)^2)···)^2

and thus a^b is computable in O(log(b)) multiplications. This algorithm is called repeated squaring, since we continually square the number.

Example 1.1. We can compute 7^256 mod 13 using only log_2(256) = 8 multiplications:

7^2 ≡ 7 · 7 ≡ 49 ≡ 10 (mod 13)
7^4 ≡ 7^2 · 7^2 ≡ 10 · 10 ≡ 100 ≡ 9 (mod 13)
7^8 ≡ 7^4 · 7^4 ≡ 9 · 9 ≡ 81 ≡ 3 (mod 13)
7^16 ≡ 7^8 · 7^8 ≡ 3 · 3 ≡ 9 (mod 13)
...
7^256 ≡ 7^128 · 7^128 ≡ 3 · 3 ≡ 9 (mod 13)

so 7^256 mod 13 = 9.
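As a sketch of how repeated squaring might look in code (the function name and structure are our own, not from the notes), the following Python reduces mod n after each squaring so intermediate values stay small; it reproduces Example 1.1:

```python
def pow_power_of_two(a, b, n):
    """Compute a^b mod n when b is a power of 2, using log2(b) squarings."""
    assert b >= 1 and b & (b - 1) == 0, "b must be a power of 2"
    result = a % n
    while b > 1:
        result = (result * result) % n   # square, reducing mod n to keep numbers small
        b //= 2
    return result

print(pow_power_of_two(7, 256, 13))      # 9, matching Example 1.1
```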


1.3 Calculate a^b mod n in O(log(b))-time

We have just shown above that if b is a power of 2, we can calculate a^b mod n in O(log(b)) time. We can now extend this algorithm to work for any non-negative integer b.

Claim 1.2. a^b mod n can be computed in O(log(b))-time by composing modular exponentiations of powers of 2.

Proof. Let b_r b_{r−1} · · · b_1 b_0 be the binary representation of b. Note that this means that

b = b_0 · 2^0 + b_1 · 2^1 + ... + b_r · 2^r,

so

a^b = a^{b_0·2^0 + b_1·2^1 + ... + b_r·2^r} = a^{b_0·2^0} · a^{b_1·2^1} · · · a^{b_r·2^r}.

Note that in the process of computing a^{2^r} by the fast exponentiation algorithm, we also compute a^{2^i} for i = 0, 1, ..., r − 1. Thus, the time it takes to compute a^{b_0·2^0} · a^{b_1·2^1} · · · a^{b_r·2^r} is the time it takes to compute a^{2^r} plus the time it takes to make the r multiplications. Observing that r = O(log(b)) and that the time to compute a^{2^r} is O(log(b)) by the fast exponentiation algorithm, we conclude that we can compute a^b in O(log(b))-time.

Example 1.3. Compute 7^117 mod 13. First we calculate the powers of 7 whose exponents are powers of 2, up to 64:

7^1 mod 13 = 7 mod 13 = 7
7^2 mod 13 = 49 mod 13 = 10
7^4 mod 13 = 100 mod 13 = 9
7^8 mod 13 = 81 mod 13 = 3
7^16 mod 13 = 9 mod 13 = 9
7^32 mod 13 = 81 mod 13 = 3
7^64 mod 13 = 9 mod 13 = 9

Since 117 = 1110101_2 = 64 + 32 + 16 + 4 + 1, we have:

7^117 mod 13 = (7^1 · 7^4 · 7^16 · 7^32 · 7^64) mod 13
            = (7 · 9 · 9 · 3 · 9) mod 13
            = (63 · 27 · 9) mod 13
            = (11 · 1 · 9) mod 13
            = 99 mod 13
            = 8
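The proof above is constructive, and one possible Python sketch of the resulting square-and-multiply loop (our own naming and structure, not taken from the notes) is shown below; it reproduces Example 1.3 and agrees with Python's built-in three-argument pow:

```python
def fast_mod_exp(a, b, n):
    """Compute a^b mod n in O(log b) multiplications using the binary expansion of b."""
    result = 1
    power = a % n                 # invariant: power = a^(2^i) mod n at the start of iteration i
    while b > 0:
        if b & 1:                 # bit b_i is 1, so a^(2^i) contributes a factor to the product
            result = (result * power) % n
        power = (power * power) % n
        b >>= 1
    return result

print(fast_mod_exp(7, 117, 13))   # 8, matching Example 1.3
print(pow(7, 117, 13))            # built-in modular exponentiation agrees
```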

2 Baby-step Giant-step Algorithm

First, recall from lecture that for any n ∈ N, we define Z_n = {0, 1, 2, . . . , n − 1}. The group Z_n^* is the subset of Z_n consisting of the elements that have an inverse modulo n. In mathematical terms, Z_n^* = {x ∈ Z_n | gcd(x, n) = 1}. When we consider an arbitrary prime p, we have Z_p^* = {1, 2, . . . , p − 1} = Z_p \ {0}, because every positive integer less than p is relatively prime to p.
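As a concrete illustration (not part of the original notes), this definition can be written directly in Python:

```python
from math import gcd

def units_mod(n):
    """Z_n^*: the elements of Z_n = {0, ..., n-1} that are invertible modulo n."""
    return [x for x in range(n) if gcd(x, n) == 1]

print(units_mod(12))   # [1, 5, 7, 11]
print(units_mod(13))   # [1, 2, ..., 12]; for a prime p, Z_p^* = {1, ..., p-1}
```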


Definition 2.1. Let p be prime. g ∈ Z_p^* is a generator of Z_p^* if for every h ∈ Z_p^*, there exists x ∈ {0, 1, . . . , p − 2} such that g^x ≡ h (mod p). In other words, {g^x mod p : x ∈ {0, 1, . . . , p − 2}} = Z_p^*.
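For intuition (again, a sketch of ours rather than part of the notes), a brute-force check of this definition is feasible for tiny primes such as the p = 31 used in Example 2.2 below:

```python
def is_generator(g, p):
    """Check by brute force whether g generates Z_p^* (p prime); fine only for tiny p."""
    return len({pow(g, x, p) for x in range(p - 1)}) == p - 1

print(is_generator(3, 31))   # True: 3 generates Z_31^*
print(is_generator(5, 31))   # False: 5 only has order 3 modulo 31
```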

The Baby-step Giant-step algorithm is a method for computing the discrete logarithm of an element h with respect to some generator g of Z_p^*. That is, given some h ∈ Z_p^*, we want to find some x where h ≡ g^x (mod p).

Suppose we let m = ⌈√p⌉. Note that we can write x = mq + r for some integers q and r with 0 ≤ r < m. So

h ≡ g^x ≡ g^{mq+r} ≡ g^{mq} · g^r (mod p).

By moving terms around we then get

h · g^{−mq} ≡ g^r (mod p),

which, if we can solve for q and r, lets us find x.

To do this, we compute the arrays

u = {g^0, g^1, ..., g^{m−1}}  and  v = {h · (g^{−m})^0, h · (g^{−m})^1, ..., h · (g^{−m})^m}

(all entries taken mod p) and then find a collision between them.

Question: What is the runtime of this algorithm?

Observe that we can compute each of these two arrays in O(√p), since it takes O(1) multiplications to compute each entry from the previous one. Finding the collision naively will take O((√p)^2) = O(p) comparisons, which is very inefficient. However, we can improve on this. First we sort one array, taking O(√p log(√p)) time. Then, for each element of the unsorted array, we binary search for it in the sorted array; performing √p binary searches gives an overall runtime of O(√p log(√p)). Better yet, if we make use of hash tables, we can find the collision in O(√p) time.

Question: Is this algorithm efficient?

No. Even if we use hash tables, we are left with an O(√p) algorithm, while the input is of size O(log(p)). However, this is a good thing, because if we did have an efficient algorithm then modern-day cryptography would be broken.

Example 2.2. Let p = 31 and g = 3. Find the discrete log of h = 6. We have m = ⌈√31⌉ = ⌈5.57⌉ = 6, so compute:

u:
g^0 = 1 ≡ 1 (mod 31)
g^1 = 3 ≡ 3 (mod 31)
g^2 = 9 ≡ 9 (mod 31)
g^3 = 27 ≡ 27 (mod 31)
g^4 = 81 ≡ 19 (mod 31)
g^5 = 19 · 3 ≡ 26 (mod 31)

We can use the extended Euclidean algorithm to find that 3^{−1} ≡ 21 (mod 31), and therefore 3^{−6} ≡ (3^{−1})^6 ≡ 21^6 ≡ 2 (mod 31). Then compute:

v:
h · g^{−m·0} = 6 · 2^0 ≡ 6 (mod 31)
h · g^{−m·1} = 6 · 2^1 ≡ 12 (mod 31)
h · g^{−m·2} = 6 · 2^2 ≡ 24 (mod 31)
h · g^{−m·3} = 6 · 2^3 ≡ 17 (mod 31)
h · g^{−m·4} = 6 · 2^4 ≡ 3 (mod 31)


At this point we have found a collision: u(1) = v(4) = 3, so q = 4 and r = 1, which means that h ≡ g^{mq+r} ≡ g^{6·4+1} ≡ g^{25} (mod p), and therefore 25 is the discrete logarithm of 6 with respect to the generator 3.
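A compact Python sketch of the whole algorithm (function and variable names are ours; it uses a dictionary as the hash table discussed above, and pow(g, -m, p) for the modular inverse, which requires Python 3.8+) recovers the discrete log from Example 2.2:

```python
from math import isqrt

def baby_step_giant_step(g, h, p):
    """Find x with g^x ≡ h (mod p), assuming it exists, in O(sqrt(p)) group operations."""
    m = isqrt(p) + 1                              # m = ceil(sqrt(p)), since a prime p is not a square
    baby = {pow(g, r, p): r for r in range(m)}    # baby steps: the array u, stored as a hash table
    g_inv_m = pow(g, -m, p)                       # g^{-m} mod p (negative exponent needs Python 3.8+)
    value = h % p
    for q in range(m + 1):                        # giant steps: the array v
        if value in baby:                         # collision: h · g^{-mq} ≡ g^r, so x = mq + r
            return m * q + baby[value]
        value = (value * g_inv_m) % p
    return None                                   # h is not a power of g

print(baby_step_giant_step(3, 6, 31))             # 25, matching Example 2.2
```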

3 Cryptography and NP-Completeness

RSA and Diffie-Hellman both rely on what are thought to be hard problems. To crack RSA, one needs an efficient algorithm for Integer Factorization; to crack Diffie-Hellman, one needs an efficient solution to the Discrete Logarithm problem. It is unknown whether these two problems are in P, and it is also unknown whether they are NP-Hard. However, both of these problems are in NP, so if P = NP then both would be solvable in polynomial time, which would be very problematic for modern cryptographic applications.
