Combinatorics, Probability, and Information Theory

  • Chapter in An Introduction to Mathematical Cryptography
  • Part of the book series: Undergraduate Texts in Mathematics (UTM)

Abstract

In considering the usefulness and practicality of a cryptographic system, it is necessary to measure its resistance to various forms of attack. Such attacks include simple brute-force searches through the key or message space, somewhat faster searches via collision or meet-in-the-middle algorithms, and more sophisticated methods that are used to compute discrete logarithms, factor integers, and find short vectors in lattices.

Notes

  1.

    Sometimes the length of the search can be significantly shortened by matching pieces of keys taken from two or more lists. Such an attack is called a collision or meet-in-the-middle attack; see Sect. 5.4.
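
    As a toy illustration (ours, not from the text), consider a double "cipher" that is just modular addition with two 16-bit keys. Storing one list in a table and scanning the other finds a matching middle value in roughly 2·2^16 steps instead of the 2^32 steps of a naive search over both keys:

    def E(k, m):                      # toy "encryption": modular addition
        return (m + k) % 2**16

    def D(k, c):                      # corresponding "decryption"
        return (c - k) % 2**16

    def meet_in_the_middle(m, c):
        table = {E(k1, m): k1 for k1 in range(2**16)}   # first list, stored
        for k2 in range(2**16):                          # second list, scanned
            middle = D(k2, c)
            if middle in table:                          # collision between the lists
                return table[middle], k2
        return None

    k1, k2 = 12345, 54321
    c = E(k2, E(k1, 42))
    # The toy cipher is degenerate (the two keys simply add), so many key
    # pairs are consistent with c; this prints one of them.
    print(meet_in_the_middle(42, c))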

  2.

    You may wonder why Alice and Bob, those intrepid exchangers of encrypted secret messages, are sitting down for a meal with their cryptographic adversary Eve. In the real world, this happens all the time, especially at cryptography conferences!

  3.

    The binomial theorem’s fame extends beyond mathematics. Moriarty, Sherlock Holmes’s arch enemy, “wrote a treatise upon the Binomial Theorem,” on the strength of which he won a mathematical professorship. And Major General Stanley, that very Model of a Modern Major General, proudly informs the Pirate King and his cutthroat band:

    About Binomial Theorem I’m teeming with a lot o’ news—

    With many cheerful facts about the square of the hypotenuse.

    (The Pirates of Penzance, W.S. Gilbert and A. Sullivan 1879)

  4.

    This cipher is named after Blaise de Vigenère (1523–1596), whose 1586 book Traicté des Chiffres describes the known ciphers of his time. These include polyalphabetic ciphers such as the “Vigenère cipher,” which according to [63] Vigenère did not invent, and an ingenious autokey system (see Exercise 5.19), which he did.

  5.

    More typically one uses a key phrase consisting of several words, but for simplicity we use the term “keyword” to cover both single keywords and longer key phrases.

  6.

    Cryptography and the Art of Decryption.

  7.

    We were a little lucky in that every relation in Table 5.6 is correct. Sometimes there are erroneous relations, but it is not hard to eliminate them with some trial and error.

  8.

    David Copperfield, 1850, Charles Dickens.

  9.

    General (continuous) probability theory also deals with infinite sample spaces Ω, in which case only certain subsets of Ω are allowed to be events and are assigned probabilities. There are also further restrictions on the probability function \(\mathrm{Pr}:\varOmega \rightarrow \mathbb{R}\). For our study of cryptography in this book, it suffices to use discrete (finite) sample spaces.

  10.

    The authors of [51, chapter 1] explain the ubiquity of urns in the field of probability theory as being connected with the French phrase aller aux urnes (to vote).

  11.

    More generally, the success rate in a Monte Carlo algorithm need not be 50 %, but may instead be any positive probability that is not too small. For the Miller–Rabin test described in Sect. 3.4, the corresponding probability is 75 %. See Exercise 5.28 for details.
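
    As a sketch (ours) of how the rounds fit together for the Miller–Rabin test mentioned above: a composite n passes a single round with probability at most 1/4, so the chance that it survives k independent rounds is at most (1/4)^k.

    import random

    def miller_rabin(n, k=20):
        # Probabilistic primality test; a composite n survives all k random
        # rounds with probability at most (1/4)**k.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:             # write n - 1 = 2**r * d with d odd
            d //= 2
            r += 1
        for _ in range(k):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False          # a is a witness: n is composite
        return True                   # n is prime with overwhelming probability

    print(miller_rabin(2**61 - 1))    # True: a (Mersenne) prime
    print(miller_rabin(10403))        # False: 10403 = 101 * 103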

  12.

    For an amusing commentary on long strings of heads, see Act I of Tom Stoppard’s Rosencrantz and Guildenstern Are Dead.

  13.

    A sequence \(a_1, a_2, a_3,\ldots\) is called a geometric progression if all of the ratios \(a_{n+1}/a_n\) are the same. Similarly, the sequence is an arithmetic progression if all of the differences \(a_{n+1} - a_n\) are the same.
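
    For concreteness (the example sequences here are ours), both definitions are easy to check directly:

    def is_geometric(seq):
        # all ratios a_{n+1}/a_n are equal; cross-multiply to avoid division
        return all(seq[i + 1] * seq[0] == seq[1] * seq[i] for i in range(len(seq) - 1))

    def is_arithmetic(seq):
        # all differences a_{n+1} - a_n are equal
        return all(seq[i + 1] - seq[i] == seq[1] - seq[0] for i in range(len(seq) - 1))

    print(is_geometric([3, 6, 12, 24]))    # True: common ratio 2
    print(is_arithmetic([5, 8, 11, 14]))   # True: common difference 3
    print(is_geometric([1, 2, 3, 4]))      # False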

  14.

    Note that the expression Pr(X = x and Y = y) is really shorthand for the probability of the event

    $$\displaystyle{\bigl\{\omega \in \varOmega : X(\omega) = x\ \mbox{ and }\ Y(\omega) = y\bigr\}.}$$

    If you find yourself becoming confused about probabilities expressed in terms of values of random variables, it often helps to write them out explicitly in terms of an event, i.e., as the probability of a certain subset of Ω.
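
    For example (the two-dice setup below is ours), writing the event out for a small sample space makes the shorthand concrete:

    from fractions import Fraction

    # Two fair dice; X is the first roll, Y is the sum of the two rolls.
    omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
    X = lambda w: w[0]
    Y = lambda w: w[0] + w[1]

    def pr(event):                    # uniform probability of a subset of omega
        return Fraction(len(event), len(omega))

    # Pr(X = 2 and Y = 7) is shorthand for the probability of this event:
    event = [w for w in omega if X(w) == 2 and Y(w) == 7]
    print(event, pr(event))           # [(2, 5)] 1/36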

  15.

    If you think that \(\frac{40}{365}\) is the right answer, think about the same situation with 366 people. The probability that someone shares your birthday cannot be \(\frac{366}{365}\), since that’s larger than 1.
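
    A quick computation (ours, assuming 365 equally likely and independent birthdays) shows what the correct probability looks like and why it stays below 1:

    def share_your_birthday(n):
        # probability that at least one of n other people has your birthday
        return 1 - (364 / 365) ** n

    print(share_your_birthday(40))    # ≈ 0.104, not 40/365 ≈ 0.110
    print(share_your_birthday(366))   # ≈ 0.633, comfortably below 1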

  16.

    If this value of x happens to be negative and we want a positive solution, we can always use the fact that \(g^{N} = 1\) to replace it with \(x = yz + N\).

  17.

    For example, it would suffice that F have a continuous derivative.

  18.

    For most cryptographic applications, the prime p is chosen such that p − 1 has precisely one large prime factor, since otherwise, the Pohlig–Hellman algorithm (Theorem 2.31) may be applicable. And it is unlikely that d will be divisible by the large prime factor of p − 1.

  19.

    As is typical, we have omitted reference to the underlying sample spaces. To be completely explicit, we have three probability spaces with sample spaces \(\varOmega_{M}\), \(\varOmega_{C}\), and \(\varOmega_{K}\) and probability functions \(\mathrm{Pr}_{M}\), \(\mathrm{Pr}_{C}\), and \(\mathrm{Pr}_{K}\). Then M, C, and K are random variables

    $$\displaystyle{M:\varOmega _{M} \rightarrow \mathcal{M},\qquad K:\varOmega _{K} \rightarrow \mathcal{K},\qquad C:\varOmega _{C} \rightarrow \mathcal{C}.}$$

    Then by definition, the density function f M is

    $$\displaystyle{f_{M}(m) = \mathrm{Pr}(M = m) = \mathrm{Pr}_{M}\bigl(\{\omega \in \varOmega_{M} : M(\omega) = m\}\bigr),}$$

    and similarly for K and C.

  20.

    Although this notation is useful, it is important to remember that the domain of H is the set of random variables, not the set of n-tuples for some fixed value of n. Thus the domain of H is itself a set of functions.

  21.

    This convention makes sense, since we want H to be continuous in the \(p_i\)’s, and it is true that \(\lim_{p\rightarrow 0} p\log_{2}p = 0\).
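
    In code the convention simply means dropping the terms with \(p_i = 0\); the small function below is our own illustration:

    from math import log2

    def entropy(probs):
        # H = -sum p_i * log2(p_i), with the convention 0 * log2(0) = 0
        return -sum(p * log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))        # 1.0 bit
    print(entropy([1.0, 0.0]))        # 0.0 -- the convention in action
    print(entropy([0.25] * 4))        # 2.0 bits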

  22.

    It should be noted that when implementing a modern public key cipher, one generally combines the plaintext with some random bits and then performs some sort of invertible transformation so that the resulting secondary plaintext looks more like a string of random bits. See Sect. 8.6.

  23.

    To be rigorous, one should really define upper and lower densities using liminf and limsup, since it is not clear that the limit defining H(L) exists. We will not worry about such niceties here.

  24.

    This does not mean that one can remove 70 % of the letters and still have an intelligible message. What it means is that in principle, it is possible to take a long message that requires 4.7 bits to specify each letter and to compress it into a form that takes only 30 % as many bits.
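
    The arithmetic behind the 30% figure, using a rough, commonly quoted estimate of about 1.5 bits of entropy per letter of English (the 1.5 is an illustrative assumption, not a value from the text):

    from math import log2

    raw_bits = log2(26)               # ≈ 4.70 bits per letter if letters were uniform
    entropy_estimate = 1.5            # assumed entropy of English, bits per letter
    print(raw_bits)                   # 4.700...
    print(entropy_estimate / raw_bits)  # ≈ 0.32, i.e. roughly 30% as many bits suffice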

  25.

    As mentioned in Sect. 2.1, the question of whether \(\mathcal{P} = \mathcal{N}\mathcal{P}\) is one of the $1,000,000 Millennium Prize problems.

  26.

    Spelt is an ancient type of wheat.

  27.

    A hekat is \(\frac{1}{30}\) of a cubic cubit, which is approximately 4.8 liters.

References

  1. M. Agrawal, N. Kayal, N. Saxena, PRIMES is in P. Ann. Math. (2) 160(2), 781–793 (2004)

  2. M. Ajtai, C. Dwork, A public-key cryptosystem with worst-case/average-case equivalence, in STOC ’97, El Paso (ACM, New York, 1999), pp. 284–293 (electronic)

  3. R.P. Brent, An improved Monte Carlo factorization algorithm. BIT 20(2), 176–184 (1980)

  4. H. Cohen, A Course in Computational Algebraic Number Theory. Volume 138 of Graduate Texts in Mathematics (Springer, Berlin, 1993)

  5. S.A. Cook, The complexity of theorem-proving procedures, in STOC ’71: Proceedings of the Third Annual ACM Symposium on Theory of Computing, Shaker Heights (ACM, New York, 1971), pp. 151–158

  6. M.R. Garey, D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. A Series of Books in the Mathematical Sciences (W. H. Freeman, San Francisco, 1979)

  7. G.R. Grimmett, D.R. Stirzaker, Probability and Random Processes, 3rd edn. (Oxford University Press, New York, 2001)

  8. E.T. Jaynes, Information theory and statistical mechanics. Phys. Rev. (2) 106, 620–630 (1957)

  9. D. Kahn, The Codebreakers: The Story of Secret Writing (Scribner Book, New York, 1996)

  10. P.L. Montgomery, Speeding the Pollard and elliptic curve methods of factorization. Math. Comput. 48(177), 243–264 (1987)

  11. NIST–DES, Data Encryption Standard (DES). FIPS Publication 46-3, National Institute of Standards and Technology, 1999. http://csrc.nist.gov/publications/fips/fips46-3/fips46-3.pdf

  12. J.M. Pollard, Monte Carlo methods for index computation (mod p). Math. Comput. 32(143), 918–924 (1978)

  13. E.L. Post, A variant of a recursively unsolvable problem. Bull. Am. Math. Soc. 52, 264–268 (1946)

  14. S. Ross, A First Course in Probability, 9th edn. (Pearson, England, 2001)

  15. C.E. Shannon, A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423, 623–656 (1948)

  16. C.E. Shannon, Communication theory of secrecy systems. Bell Syst. Tech. J. 28, 656–715 (1949)

  17. V. Shoup, Lower bounds for discrete logarithms and related problems, in Advances in Cryptology—EUROCRYPT ’97, Konstanz. Volume 1233 of Lecture Notes in Computer Science (Springer, Berlin, 1997), pp. 256–266

  18. J. Talbot, D. Welsh, Complexity and Cryptography: An Introduction (Cambridge University Press, Cambridge, 2006)

  19. E. Teske, Speeding up Pollard’s rho method for computing discrete logarithms, in Algorithmic Number Theory, Portland, 1998. Volume 1423 of Lecture Notes in Computer Science (Springer, Berlin, 1998), pp. 541–554

  20. E. Teske, Square-root algorithms for the discrete logarithm problem (a survey), in Public-Key Cryptography and Computational Number Theory, Warsaw, 2000 (de Gruyter, Berlin, 2001), pp. 283–301

Copyright information

© 2014 Springer Science+Business Media New York

Cite this chapter

Hoffstein, J., Pipher, J., Silverman, J.H. (2014). Combinatorics, Probability, and Information Theory. In: An Introduction to Mathematical Cryptography. Undergraduate Texts in Mathematics. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-1711-2_5
