Next: , Previous: , Up: Preliminaries   [Contents][Index]


1.3.2 Security

“Mental Poker” solutions cannot prevent malicious players from exchanging private information, for example, by telephone or Internet chat. Cryptographic protocols can only minimize the effect of such colluding parties and should try to protect the confidentiality of honest players. But even this limited protection often relies on number-theoretic assumptions which are only believed to be true, i.e., problems like factoring products of large primes or computing discrete logarithms are only believed to be hard. That means strict mathematical proofs (1) for the hardness of these problems are not known, and it is not very likely that such proofs will ever be found. However, almost all public-key cryptosystems rely on such assumptions, so you need not worry about this issue as long as reasonable security parameters are chosen and practical quantum computers are out of reach.

LibTMCG was originally designed to provide security in the “honest-but-curious” (aka “semi-honest” or passive) adversary model. That means all participants follow the protocol instructions properly, but they may gather information and share it within a coalition to obtain an advantage in the game. Thus we are basically not concerned with robustness and availability issues, which are hard to solve in mostly asynchronous environments like the Internet. However, most operations are verifiable, so that cheating can be detected. To obtain this verifiability, the protocols deploy so-called zero-knowledge proofs, which yield no knowledge beyond the validity of a statement. The soundness error of these proofs is bounded by a fixed security parameter \kappa. Depending on your application scenario, this parameter should be chosen such that there is a reasonable tradeoff between the cheating probability (which is at most 2^{-\kappa}) and the resulting computational and communication complexity. LibTMCG also uses so-called zero-knowledge proofs of knowledge due to Bellare and Goldreich (see On Defining Proofs of Knowledge, Advances in Cryptology – Proceedings of CRYPTO ’92, 1992); however, for convenience we will not further distinguish between these building blocks. Finally, some of the protocols (e.g., the efficient shuffle argument by Groth) are only zero-knowledge with respect to a so-called “honest verifier” who follows all protocol instructions faithfully. Since version 1.2.0 of LibTMCG we use a two-party version of a distributed coin-flipping protocol by Jarecki and Lysyanskaya [JL00] to protect against malicious verifiers in that case.

Unfortunately, in practice there is another substantial problem with the detection of cheaters. It requires that an authenticated broadcast channel has been set up to which all players have read/write access. There exist protocols (so-called “reliable broadcast” or even “atomic broadcast”) for creating such a channel, however, only under the additional condition that the number t of parties who act faulty or even maliciously (a so-called “Byzantine adversary”) is reasonably small. In a fully asynchronous environment like the Internet, resilience is achievable only for t < n/3, where n denotes the total number of parties in the protocol. LibTMCG provides a well-known protocol due to Bracha (see An asynchronous [(n - 1)/3]-resilient consensus protocol, Proceedings of 3rd ACM Symposium on Principles of Distributed Computing, 1984) in a slightly optimized variant by Cachin, Kursawe, Petzold, and Shoup [CKPS01]. Please note that in most cases the application programmer must decide where the use of a broadcast channel is necessary and appropriate. Thus, without reliable broadcast you should take into account that, if a verification procedure fails, the player acting as prover is not necessarily the one who cheated. This level of uncertainty is the main reason for our still limited adversary model.

Note that it is not known whether the protocols used retain their zero-knowledge property when they are composed and executed in a concurrent setting. Thus the application programmer should be careful and avoid parallel protocol sessions. It is an open research project to create a protocol suite whose security can be proven in the UC framework of Canetti (see Universally Composable Security: A New Paradigm for Cryptographic Protocols, Cryptology ePrint Archive: Report 2000/067) or in even more elaborate UC frameworks (see e.g. Dennis Hofheinz and Victor Shoup: GNUC: A New Universal Composability Framework, Cryptology ePrint Archive: Report 2011/303). Furthermore, the protocols should employ concurrent zero-knowledge proofs (see Cynthia Dwork, Moni Naor, and Amit Sahai: Concurrent Zero-Knowledge, Journal of the ACM 51(6):851–898, 2004).

Please also note that in some protocols the Fiat-Shamir heuristic [FS87] is used to turn interactive special honest-verifier zero-knowledge arguments and proofs into non-interactive versions in the random oracle model. However, there are theoretical (see e.g. Nir Bitansky, Dana Dachman-Soled, Sanjam Garg, Abhishek Jain, Yael Tauman Kalai, Adriana Lopez-Alt, and Daniel Wichs: Why ’Fiat-Shamir for Proofs’ Lacks a Proof, TCC 2013, LNCS 7785, 2013) and practical (see David Bernhard, Olivier Pereira, Bogdan Warinschi: How Not to Prove Yourself: Pitfalls of the Fiat-Shamir Heuristic and Applications to Helios, ASIACRYPT 2012, LNCS 7658, 2012) concerns that show the insecurity of the Fiat-Shamir heuristic with respect to the soundness of the argument or proof. That means, if deterministic hash functions are used as public coin [BR93], then the random oracle assumption obviously does not hold, and therefore a malicious prover can manipulate the challenges in order to cheat, thus violating the soundness property. On the other hand, the Fiat-Shamir heuristic, and in general the non-interactiveness of the transformed protocols, protects against a malicious verifier. Thus it is another important measure to deal with the limitation of honest-verifier zero-knowledge proofs and arguments of knowledge without losing their efficiency. However, non-interactive protocols are necessarily malleable (when used without unique identifiers), and a cheating verifier can generate a convincing proof of knowledge by copying one sent by the prover in a previous iteration of the protocol. This issue must be addressed by the application programmer, for example, by using fresh randomness in each card or stack operation that should be verifiable.

LibTMCG was carefully implemented with respect to timing attacks (see Paul C. Kocher: Cryptanalysis of Diffie-Hellman, RSA, DSS, and other cryptosystems using timing attacks, CRYPTO ’95, LNCS 963, 1995). Therefore we lose some efficiency, e.g., during modular exponentiations. However, it is strongly recommended to leave the timing-attack protection turned on, unless you know exactly where it is really not needed.

Security Advice: We have implemented all cryptographic primitives according to the cited research papers and to the best of our knowledge. However, we cannot rule out the possibility of remaining flaws or bugs, because the implementation of such complex protocols is always an error-prone process. Moreover, the scientific results are sometimes controversial or even wrong. Thus we encourage readers with an advanced cryptographic background to review the given references and the source code of LibTMCG. Please report any complaint or correction proposal!


Footnotes

(1)

For instance, a “tight reduction” to a known hard problem in the sense of complexity theory.
