1 Introduction

The intriguing notion of a zero-knowledge proof, introduced by Goldwasser, Micali and Rackoff [31], has for almost three decades been a source of fascinating open questions in Cryptography and Complexity Theory. Indeed, motivated by new real-world attacks, the notion has been studied in different flavors (e.g., non-interactive zero knowledge [8], non-malleable zero knowledge [21], concurrent zero knowledge [23], resettable zero knowledge [16]), and each of them required extensive research to figure out the proper definition and its (in)feasibility. Moreover, all such real-world attacks have also been considered for the natural generalization of the concept of zero knowledge: secure computation [30].

Leakage Attacks. Leakage resilience deals with modeling real-world attacks where the adversary manages, through some physical observations, to obtain side-channel information on the state (e.g., private input, memory content, randomness) of the honest player (see, for example, [42]). Starting with the works of [25, 34, 35, 41], leakage resilience has been a mainstream research topic in Cryptography, and recently the gap between theory and practice has been significantly reduced [22, 40, 43].

The notions of leakage-resilient zero knowledge [28] (LRZK) and leakage-resilient secure multi-party computation [10] (LRMPC) have also been considered. Despite the above intensive research on leakage resilience, LRZK and LRMPC are still rich in interesting open problems.

1.1 Previous Work and Open Problems

Leakage Resilience vs. Tolerance. The first definition for leakage-resilient zero knowledge (LRZK, in short) was given by Garg et al. in [28]. In their definition, the simulator is allowed to make leakage queries in the ideal world. This was justified by the observation that an adversary can, through leakage queries, easily obtain some of the bits of the witness used by the prover in the real world. Clearly, these bits of information cannot be simulated unless the simulator is allowed to make queries in the ideal model. Therefore, the best one can hope for is that a malicious verifier does not learn anything from the protocol beyond the validity of the statement being proved and the leakage obtained from the prover. This formalization of security has been extensively studied by Bitansky et al. in [6] for the case of universally composable secure computation [15]. Similar definitions have been used in [9, 11, 12, 36].

In [28], constructions for LRZK in the standard model and for non-interactive LRZK in the common reference string (CRS) model were given. The simulator of [28] for LRZK asks for a total of \((1+\epsilon )\cdot l\) bits in the ideal world, where l is the number of bits obtained by the adversarial verifier. Thus the simulator is allowed to obtain more bits than the verifier, and this seems to be necessary, as Garg et al. show that it is impossible to obtain a simulator that asks for fewer than l bits in the ideal world. Very recently, Pandey [39] gave a constant-round construction for LRZK under the definition of [28].

Nowadays, leakage tolerance is the commonly accepted term for the security notion used in [6, 28, 39] as it does not prevent a leakage attack but only guarantees that a protocol does not leak more than what can be obtained through leakage queries. Bitansky et al. [7] obtained UC-secure continual leakage tolerance using an input-independent leak-free preprocessing phase.

Open Problems: Leakage Resilience with Leak-Free Encoding. The motivation to study leakage-tolerant Cryptography is based on the observation that a private input cannot be fully protected from a leakage query. However, this notion is quite extreme and does not necessarily fit all real-world scenarios. Indeed, it is commonly expected that an adversary attacks the honest player during the execution of the protocol, while they are connected through some communication channel. It is thus reasonable to assume that an honest player receives his input in a preliminary phase, before ever having had any interaction with the adversary. Once this input is received, the honest player can encode it in order to make it somewhat unintelligible to leakage queries but still valid for the execution of a protocol. This encoding phase can be considered leak-free since, as stressed before, the honest player has never been in touch with the adversary. Later on, when the interaction with the adversary starts, leakage queries will be possible, but they will affect the current state of the honest player, which contains an encoding of the input. The need for a leak-free phase to protect a secret from leakage queries was considered also in [26, 32, 33].

The above realistic scenario circumvents the argument that leakage tolerance is the best one can hope for, and opens the following challenging open questions:

Open Question 1: “Assuming players can encode their inputs during a leak-free phase, is it possible to construct LRZK argument/proof systems?”

Open Question 2: “Assuming players can encode their inputs during a leak-free phase, is it possible to construct protocols for leakage-resilient Multi-Party Computation (LRMPC)?”

Leakage Resilience Assuming the Existence of a CRS. Very recently, Ananth et al. [1] showed that in the CRS (common reference string) model it is possible to have an interactive argument system that remains non-transferable even in the presence of continual leakage attacks. More precisely, in their model a prover encodes the witness in a leak-free environment and, later on, runs the protocol with a verifier using the encoded witness. During the execution of the protocol, the adversarial verifier is allowed to launch leakage queries. Once the protocol has been completed, the prover can refresh (again, in a leak-free environment) its encoded witness and then play again with the verifier (under leakage attacks). Non-transferability means that an adversarial verifier that mounts the above attack against an honest prover does not get enough information to later prove the same statement to an honest verifier. The main contribution of [1] is the construction of an encoding/refreshing mechanism and a protocol for non-transferable arguments against such continual leakage attacks. They explicitly left open the following problem (see page 167 of [1]): is it possible to obtain non-transferable arguments/proofs that remain secure against continual leakage attacks without relying on a CRS? This problem has similarities with Open Question 1. Indeed, zero knowledge (without a CRS) implies non-transferability, and therefore solving Open Question 1 in the positive and with continual leakage would solve the problem opened by [1] in a strong sense, since non-transferability would be achieved through zero knowledge, which goes even beyond the security definition of [1]. However, as we show later, we give a negative answer to Open Question 1 for the case of black-box simulation.
Even in light of our negative result, the problem of [1] remains open, as one might be able to construct leakage-resilient non-black-box zero knowledge (which is clearly non-transferable) or leakage-resilient witness hiding/indistinguishable proofs (which can still be non-transferable, since non-malleable proofs can be achieved with non-malleable forms of WI as shown in [37]).

Leakage Resilience Assuming Leak-Free Preprocessing. In [10], Boyle et al. proposed a model for leakage-resilient secure computation based on the following three phases:

  1. a leak-free interactive preprocessing to be run only once, obliviously w.r.t. inputs and functions;

  2. a leak-free stand-alone input-encoding phase to be run when a new input arrives (and of course after the interactive preprocessing), obliviously w.r.t. functions to be computed later;

  3. an on-line phase where parties, on input the states generated during the last executions of the input-encoding phases, and on input a function f, run a protocol that aims at securely computing the output of f.

In the model of [10], leakage attacks are not possible during the first two phases but are possible at any other moment, including the third phase and in between phases.

Reference [10] showed (a) the impossibility of leakage-resilient 2-party computation and, more generally, of n-party LRMPC when \(n-1\) players are corrupted; (b) the feasibility of leakage-resilient MPC when the number of players is polynomial and a constant fraction of them is honest.

The positive result works for an even stronger notion of leakage resilience, referred to as “continual leakage”, that has been recently investigated in several papers [13, 14, 19, 20, 24]. Continual leakage means that the same input can be re-used through unbounded multiple executions of the protocol, each allowing for a bounded leakage, as long as the state can be refreshed after each execution. Leakage queries are allowed also during the refreshing.

Boyle et al. explicitly leave open (see paragraph “LR-MPC with Non-Interactive Preprocessing” on page 1240 of [10]) the problem of achieving their results without the preprocessing (i.e., Open Question 2), and implicitly leave open the case of zero-knowledge arguments/proofs (i.e., Open Question 1): when restricting to the ZK functionality only, the function is known in advance, and therefore their impossibility for the two-party case does not directly hold.

We notice that the result of [1] does not yield a continual leakage-resilient non-transferable proof system for the model of [10]. Indeed, while the preprocessing of [10] can be used to establish the CRS needed by [1], the refresh of the state of [1] requires a leak-free phase that is not available in the model of [10]. We finally stress that the construction of [1] is not proved to be LRZK.

However, the interesting open question in the model of [10] is to achieve continual LRZK without an interactive preprocessing. Indeed, if an interactive preprocessing is allowed, continual LRZK can be trivially achieved as follows. The preprocessing can be used to run a secure 2-party computation for generating a shared random string. The input-encoding phase can replace the witness with a non-interactive zero-knowledge proof of knowledge (NIZKPK). The on-line phase can be implemented by simply sending the previously computed NIZKPK. This trivial solution would allow the leakage of the entire state, therefore guaranteeing continual leakage (i.e., no refresh is needed).

Impossibility Through Obfuscation. In the model studied by Garg et al. [28], the simulator is allowed to see the leakage queries issued by the adversarial verifier (and not the replies) and, based on these, it decides its own leakage queries in the ideal model. Nonetheless, the actual simulator constructed by [28] does not use this possibility; such a simulator is called leakage-oblivious. In our setting (in which the simulator is not allowed to ask queries), leakage-oblivious simulators are very weak: an adversarial verifier that asks the query for function \(R(x,\cdot )\) applied to the witness w (here R is the relation associated with \(\mathbb {NP}\) language L and x is the common input) cannot be simulated. Notice though that in the model we are interested in, the leak-free encoding phase might invalidate this approach, since the encoded witness could have a completely different structure and therefore could make R evaluate to 0. Despite this issue (which is potentially fixable), the main problem is that in our setting the simulator can read the query of the adversarial verifier and could easily answer 1 (the honest prover always has a valid witness). Given the recent construction of circuit obfuscators [27], one could then think of forcing simulators to be leakage-oblivious by considering an adversary that obfuscates its leakage queries. While this approach has potential, we point out that our goal is to show the impossibility under standard assumptions (e.g., the existence of a family of CRHFs).

The Technique of Nielsen et al. [36]. We finally discuss the very relevant work of Nielsen et al. [36], which showed a lower bound on the size of a secret key for leakage-tolerant adaptively secure message transmission. Nielsen et al. introduced in their work a very interesting attack consisting of asking for a collision-resistant hash of the state of an honest player through a leakage query. Then a succinct argument of knowledge is run through leakage queries in order to ask the honest player to prove knowledge of a state that is consistent with the previously sent hash value. As we will discuss later, we extend this technique to achieve our main result. The use of CRHFs and succinct arguments of knowledge for the impossibility of leakage resilience was also used in [18], but in a very different context. Indeed, in [18] the above tools are used to check consistency with the transcript of played messages, with the goal of proving that full adaptive security is needed in multi-party protocols as soon as some small amount of leakage must be tolerated.

1.2 Our Results

In this paper we study the above open questions and show the following results.

Black-Box LRZK Without CRS/Preprocessing. As a main result, we show that, if a family of collision-resistant hash functions exists, then black-box LRZK is impossible for non-trivial languages if we only rely on a leak-free input-encoding phase (i.e., without CRS/preprocessing). In more detail, with respect to the works of [1, 10], our result shows that, by removing the CRS/preprocessing, not only is non-transferable continual black-box LRZK impossible, but even ignoring non-transferability and continual leakage, the simple notion of 1-time black-box LRZK is impossible. Extending the techniques of [36], we design an adversarial verifier \({\mathsf {V}}^\star \) that uses leakage queries to obtain a very small amount of data compared to the state of the prover and whose view cannot be simulated in a black-box manner. The impossibility holds even when it is already known at the input-encoding phase which protocol will be played later.

Overview of Our Techniques. We prove the above impossibility result by extending the previously discussed technique of [36]: the adversary will attack the honest player without running the actual protocol at all! Indeed, the adversary will only run an execution of another (insecure) protocol in its head, using leakage queries to get messages from the other player for the “virtual” execution of the (insecure) protocol.

In more detail, assuming by contradiction the existence of a protocol \(({\mathsf {P}},{\mathsf {V}})\) for a language \(L\not \in {\mathsf {BPP}}\), we show an adversary \({\mathsf {V}}^{\star }\) that first runs a leakage query to obtain a collision-resistant (CR) hash \(\tilde{w}\) of the state \(\hat{w}\) of the prover. Then it takes a communication-efficient (insecure) protocol \(\varPi =(\varPi .{\mathsf {P}},\varPi .{\mathsf {V}})\) and, through leakage queries, \({\mathsf {V}}^{\star }\) runs in its head an execution of \(\varPi \), playing as an honest verifier \(\varPi .{\mathsf {V}}\), while the prover \({\mathsf {P}}\) will have to play as \(\varPi .{\mathsf {P}}\), proving that the hash is a good one: namely, it corresponds to a state that would convince an honest verifier \({\mathsf {V}}\) of the membership of the instance in L. We stress that this technique was introduced in [36].

Notice that in the real-world execution \({\mathsf {P}}\) would convince \({\mathsf {V}}^{\star }\) during the “virtual” execution of \(\varPi \), since \({\mathsf {P}}\) runs on input an encoded witness that, by the completeness of \(({\mathsf {P}},{\mathsf {V}})\), convinces \({\mathsf {V}}\).

Therefore, a black-box simulator will have to do the same without having the encoding of a witness, relying only on its rewinding capabilities. To show our impossibility we extend the technique of [36] by rendering the capabilities of the simulator useless. This is done by connecting leakage resilience with resettability. Indeed, we choose \(\varPi \) not only to be communication efficient on \(\varPi .{\mathsf {P}}\)’s side (this helps so that the sizes of the outputs of leakage queries correspond to a small portion of the state of \({\mathsf {P}}\)), but also to be a resettable argument of knowledge (and therefore resettably sound). Such arguments of knowledge admit an extractor \(\varPi .\mathsf{Ext}\) that works even against a resetting prover \(\varPi .{\mathsf {P}}^{\star }\) (i.e., such an adversary in our impossibility will be the simulator \(\mathsf{Sim}\) of \(({\mathsf {P}},{\mathsf {V}})\)).
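To illustrate the flow of the attack, the following minimal Python sketch models the verifier's only interface to the prover as a leakage oracle, which nonetheless suffices to obtain a hash of the state and then drive the virtual execution of \(\varPi \). Here SHA-256 stands in for the CRHF h, and Pi_next_msg is a hypothetical placeholder for \(\varPi .{\mathsf {P}}\)'s next-message function; both are illustrative assumptions, not the actual construction.

```python
import hashlib

# Toy stand-in for the prover's encoded witness (the state hat_w).
state = b"encoded-witness-bytes-of-the-prover"

def leak(f):
    """Model a leakage query: the verifier submits a function (circuit) f
    and learns f(state), never the state itself."""
    return f(state)

# Query 1: a collision-resistant hash w_tilde of the whole state.
w_tilde = leak(lambda s: hashlib.sha256(s).digest())

# Queries 2...: run "in the verifier's head" the argument Pi that the
# hashed state would convince an honest verifier.  Pi_next_msg is a
# hypothetical placeholder: each leakage query returns one short prover
# message computed from the state and the verifier messages so far.
def Pi_next_msg(s, verifier_msgs):
    return hashlib.sha256(s + b"".join(verifier_msgs)).digest()[:8]

verifier_msgs, prover_msgs = [], []
for rnd in range(3):                    # ell = 3 rounds of the virtual Pi
    verifier_msgs.append(bytes([rnd]))  # verifier message, chosen freely
    msgs = tuple(verifier_msgs)
    prover_msgs.append(leak(lambda s, m=msgs: Pi_next_msg(s, list(m))))

# Total leakage: one 256-bit hash plus a few 64-bit messages -- far less
# than the size of a (long) prover state.
leaked_bits = 8 * (len(w_tilde) + sum(len(m) for m in prover_msgs))
```

The point of the sketch is only the accounting: the whole attack fits in a small, fixed number of short leakage queries, independent of the size of \(\hat{w}\).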

The existence of a family of CR hash functions gives not only the CR hash function required by the first leakage query, but also the communication-efficient resettable argument of knowledge for \(\mathbb {NP}\). Indeed, we can use Barak’s public-coin universal argument [3], which enjoys a weak argument-of-knowledge property when used for languages in NEXP. Instead, when used for \(\mathbb {NP}\) languages, Barak’s construction is a regular argument of knowledge with a black-box extractor. We can finally make it extractable also in the presence of a resetting prover by using the transformation of Barak et al. [4], which only requires the existence of one-way functions.

Summing up, we will show that the existence of a black-box simulator for \(({\mathsf {P}},{\mathsf {V}})\) implies that either the language is in \({\mathsf {BPP}}\), or \(({\mathsf {P}},{\mathsf {V}})\) is not sound, or the family of hash functions is not collision resistant.

The Non-Black-Box Case. Lower bounds in the case of non-black-box simulation are rare in Cryptography, and indeed we cannot rule out the existence of an LRZK argument whose security is based on the existence of a non-black-box simulator. We will, however, discuss some evidence that achieving a positive result under standard assumptions requires a breakthrough on non-black-box simulation that goes beyond Barak’s non-black-box techniques.

Impossibility of Leakage-Resilient MPC for Several Functionalities. Additionally, we address Open Question 2 by showing that for many functionalities LRMPC with a leak-free input-encoding phase (and without an interactive preprocessing phase) is impossible. This impossibility holds regardless of the number of players involved in the computation and only assumes that one player is corrupted. It applies to functionalities that, when executed multiple times keeping unchanged the input \(x_{i}\) of an honest player \(P_{i}\), produce outputs delivered to the dishonest players that reveal more information on \(x_{i}\) than a single output would reveal. Similar functionalities were studied in [17]. We also require outputs to be short.

Our impossibility is actually even stronger since it holds also in case the functionality and the corresponding protocol to be run later are already known during the input-encoding phase.

For simplicity, we will discuss a direct example of such a class of functionalities: a variation of Yao’s Millionaires’ Problem, where n players send their inputs to the functionality that will then send as output a bit b specifying whether player \(P_{1}\) is the richest one.

High-Level Overview. The adversary will focus on attacking player \(P_{1}\), who has an input to protect. The adversary can play in its head, by means of a single leakage query, the entire protocol, selecting inputs and randomness for all other players, and obtaining as output of the leakage query the output of the function (i.e., the bit b). This “virtual” execution can be repeated multiple times, thereby extracting more information on the input of the player. Indeed, by playing multiple times and changing the inputs of the other players while the input of \(P_{1}\) remains the same, it is possible to restrict the possible input of \(P_{1}\) to a much smaller range of values than what can be inferred from a single execution.
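For intuition, this repeated-virtual-execution attack is essentially a binary search on \(P_{1}\)'s input. The toy Python sketch below is a simplification under the assumption that inputs are integers in a publicly known range; each virtual execution leaks only the single output bit b, yet logarithmically many of them recover \(P_{1}\)'s input exactly.

```python
def millionaires(inputs):
    """Toy functionality: output 1 iff player P1 is strictly the richest."""
    x1, others = inputs[0], inputs[1:]
    return 1 if all(x1 > x for x in others) else 0

def leak_virtual_run(x1, other_inputs):
    """One leakage query: the adversary runs the whole protocol in its
    head, picking the other players' inputs, and learns only the output
    bit of the functionality."""
    return millionaires([x1] + other_inputs)

secret_x1 = 37        # P1's private input, unknown to the adversary
lo, hi = 0, 100       # publicly known input range
queries = 0
while lo < hi:        # binary search via repeated virtual executions
    mid = (lo + hi) // 2
    queries += 1
    if leak_virtual_run(secret_x1, [mid]):  # bit b: is x1 > mid ?
        lo = mid + 1
    else:
        hi = mid
recovered = lo        # after about log2(range) queries, equals secret_x1
```

Since the simulator is allowed a single ideal-world query (a single bit b), the view produced by this adversary cannot be simulated.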

The above attack is clearly impossible to simulate, since it would require the execution of multiple queries in the ideal world, but the simulator by definition can make only one query.

When running the protocol through leakage queries, we are of course assuming that authenticated channels do not need to be simulated by the adversary, since their management is transparent to the state of the players running the leakage-resilient protocol. This is already assumed in previous work such as [10], since otherwise leakage-resilient authenticated channels would have been required, while [10] only requires an authenticated broadcast channel (see Sect. 3 of [10]).

We will give only a sketch of this additional simpler result.

2 Definitions

We will denote by “\(\alpha \circ \beta \)” the string resulting from appending \(\beta \) to \(\alpha \), and by [k] the set \(\{1,\ldots ,k\}\). A polynomial-time relation R is a relation for which it is possible to verify in time polynomial in |x| whether \(R(x,w)=1\). We will consider \(\mathbb {NP}\)-languages L and denote by \(R_L\) the corresponding polynomial-time relation such that \(x\in L\) if and only if there exists w such that \(R_L(x,w)=1\). We will call such a w a valid witness for \(x\in L\) and denote by \(W_L(x)\) the set of valid witnesses for \(x\in L\). We will slightly abuse notation and, whenever L is clear from the context, we will simply write W(x) instead of \(W_L(x)\). A negligible function \(\nu (k)\) is a function such that for any constant \(c<0\) and for all sufficiently large k, \(\nu (k)< k^c\).

We will now give all definitions required for the main result of our work, the impossibility of black-box LRZK. Since we will only sketch the additional result on LRMPC, we refer the reader to [10] for the additional definitions.

2.1 Interactive Proof Systems

An interactive proof system [31] for a language L is a pair of interactive Turing machines \(({\mathsf {P}},{\mathsf {V}})\), satisfying the requirements of completeness and soundness. Informally, completeness requires that for any \(x\in L\), at the end of the interaction between \({\mathsf {P}}\) and \({\mathsf {V}}\), where \({\mathsf {P}}\) has on input a valid witness for \(x \in L\), \({\mathsf {V}}\) rejects with negligible probability. Soundness requires that for any \(x\not \in L\), for any computationally unbounded \({\mathsf {P}}^{\star }\), at the end of the interaction between \({\mathsf {P}}^{\star }\) and \({\mathsf {V}}\), \({\mathsf {V}}\) accepts with negligible probability. When \({\mathsf {P}}^{\star }\) is only probabilistic polynomial-time, then we have an argument system. We denote by \(\langle {\mathsf {P}},{\mathsf {V}}\rangle (x)\) the output of the verifier \({\mathsf {V}}\) when interacting on common input x with prover \({\mathsf {P}}\). Also, sometimes we will use the notation \(\langle {\mathsf {P}}(w),{\mathsf {V}}\rangle (x)\) to stress that prover \({\mathsf {P}}\) receives as additional input witness w for \(x\in L\). We will write \(\langle {\mathsf {P}}(w;r_{P}),{\mathsf {V}}(r_{V}) \rangle (x)\) to make explicit the randomness used by \({\mathsf {P}}\) and \({\mathsf {V}}\). We will also write \({\mathsf {V}}^{\star }(z)\) to denote an adversarial verifier \({\mathsf {V}}^{\star }\) that runs on input an auxiliary string z.

Definition 1

[31]. A pair of interactive Turing machines \(({\mathsf {P}},{\mathsf {V}})\) is an interactive proof system for the language L, if \({\mathsf {V}}\) is probabilistic polynomial-time and

  1. Completeness: There exists a negligible function \(\nu (\cdot )\) such that for every \(x\in L\) and for every \(w\in W(x)\), \(\text{ Prob }\left[ \;\langle {\mathsf {P}}(w),{\mathsf {V}}\rangle (x)=1\;\right] \ge 1-\nu (|x|).\)

  2. Soundness: For every \(x\not \in L\) and for every interactive Turing machine \({\mathsf {P}}^{\star }\) there exists a negligible function \(\nu (\cdot )\) such that \(\text{ Prob }\left[ \;\langle {\mathsf {P}}^{\star },{\mathsf {V}}\rangle (x)=1\;\right] \le \nu (|x|).\)

If the soundness condition holds only with respect to probabilistic polynomial-time interactive Turing machines \({\mathsf {P}}^{\star }\) then \(({\mathsf {P}},{\mathsf {V}})\) is called an argument.

We now define the notions of reset attack and of resetting prover.

Definition 2

[4]. A reset attack of a prover \({\mathsf {P}}^{\star }\) on \({\mathsf {V}}\) is defined by the following two-step random process, indexed by a security parameter k.

  1. Uniformly select and fix \(t = \mathsf{poly}(k)\) random tapes, denoted by \(r_1,\ldots ,r_t\), for \({\mathsf {V}}\), resulting in deterministic strategies \({\mathsf {V}}^{(i)}(x) = {\mathsf {V}}_{x,r_{i}}\) defined by \({\mathsf {V}}_{x,r_{i}}(\alpha ) = {\mathsf {V}}(x,r_i, \alpha )\), where \(x \in \{0,1\}^{k}\) and \(i \in \{1,\ldots ,t\}\). Each \({\mathsf {V}}^{(i)}(x)\) is called an incarnation of \({\mathsf {V}}\).

  2. On input \(1^k\), machine \({\mathsf {P}}^{\star }\) is allowed to initiate \(\mathsf{poly}(k)\)-many interactions with \({\mathsf {V}}\). The activity of \({\mathsf {P}}^{\star }\) proceeds in rounds. In each round \({\mathsf {P}}^{\star }\) chooses \(x \in \{0,1\}^k\) and \(i \in \{1,\ldots ,t\}\), thus defining \({\mathsf {V}}^{(i)}(x)\), and conducts a complete session with it (a session is complete if it is either terminated or aborted).

We call resetting prover a prover that launches a reset attack.

We now define proofs/arguments of knowledge, in particular considering the case of a prover launching a reset attack.

Definition 3

[5]. Let R be a binary relation and \(\epsilon :\{0,1\}^{\star }\rightarrow [0, 1]\). We say that a probabilistic polynomial-time interactive machine \({\mathsf {V}}\) is a knowledge verifier for the relation R with knowledge error \(\epsilon \) if the following two conditions hold:

  • Non-triviality: There exists a probabilistic polynomial-time interactive machine \({\mathsf {P}}\) such that for every \((x,w) \in R\), with overwhelming probability an interaction of \({\mathsf {V}}\) with \({\mathsf {P}}\) on common input x, where \({\mathsf {P}}\) has auxiliary input w, is accepting.

  • Validity (or Knowledge Soundness) with Negligible Error \(\epsilon \): for every probabilistic polynomial-time machine \({\mathsf {P}}^{\star }\), there exists an expected polynomial-time machine \(\mathsf{Ext}\) such that, for every \(x, aux, r \in \{0,1\}^{\star }\), \(\mathsf{Ext}\) satisfies the following condition: Denote by p(x, aux, r) the probability (over the random tape of \({\mathsf {V}}\)) that \({\mathsf {V}}\) accepts upon input x, when interacting with the prover \({\mathsf {P}}^{\star }\) who has input x, auxiliary input aux and random tape r. Then, machine \(\mathsf{Ext}\), upon input (x, aux, r), outputs a solution \(w \in W(x)\) with probability at least \(p(x, aux, r)-\epsilon (|x|)\).

A pair \(({\mathsf {P}}, {\mathsf {V}})\) such that \({\mathsf {V}}\) is a knowledge verifier with negligible knowledge error for a relation R and \({\mathsf {P}}\) is a machine satisfying the non-triviality condition (with respect to \({\mathsf {V}}\) and R) is called an argument of knowledge for the relation R. If the validity condition holds with respect to any (not necessarily polynomial-time) machine \({\mathsf {P}}^{\star }\), then \(({\mathsf {P}},{\mathsf {V}})\) is called a proof of knowledge for R. If the validity condition holds with respect to a polynomial-time machine \({\mathsf {P}}^{\star }\) launching a reset attack, then \(({\mathsf {P}},{\mathsf {V}})\) is called a resettable argument of knowledge for R.

If in the above definition the extractor \(\mathsf{Ext}\) does not depend on the code of the prover (i.e., the same extractor works with all possible provers), then the interactive argument/proof system is a black-box (resettable) argument/proof of knowledge.

The Input-Encoding Phase. Following previous work, we will assume that the prover receives the input and encodes it running in a leak-free environment. This is unavoidable since otherwise a leakage query could ask for some bits of the witness, and therefore zero knowledge would be trivially impossible to achieve unless the simulator is allowed to ask leakage queries in the ideal world (i.e., leakage tolerance). After this leak-free phase, which we call the input-encoding phase, the prover has a state consisting only of the encoded witness and is ready to start the actual leakage-resilient protocol.

Leakage-Resilient Protocol [39]. As in previous work, we assume that random coins are available only in the specific step in which they are needed. In more detail, the prover \({\mathsf {P}}\) at each round of the protocol obtains fresh randomness r for the computations related to that round. However, unlike in previous work, we do not require the prover to update its state by appending r to it. We allow the prover to erase randomness and change its state during the protocol execution. This makes our impossibility results even stronger.

The adversarial verifier performs a leakage query by specifying a polynomial-sized circuit C that takes as input the current state of the prover. The verifier immediately gets the output of C and can adaptively decide how to continue. An attack of the verifier that includes leakage queries is called a leakage attack.
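This query interface can be modeled concretely. The Python sketch below is a toy, hypothetical model (the class name and the byte-string state are our own illustrative choices) of a prover that answers leakage queries on its current state while enforcing the p(|x|)-bit budget of Definition 4:

```python
class LeakyProver:
    """Toy model of a prover under leakage attacks: it holds the encoded
    witness as its state and answers leakage queries (arbitrary functions
    of the state) up to a budget of p(|x|) bits."""

    def __init__(self, encoded_witness, budget_bits):
        self.state = bytes(encoded_witness)  # current state (may be updated)
        self.budget = budget_bits            # remaining leakage allowance

    def leak(self, circuit):
        out = circuit(self.state)            # circuit C applied to the state
        if 8 * len(out) > self.budget:
            raise RuntimeError("leakage budget exceeded")
        self.budget -= 8 * len(out)
        return out

prover = LeakyProver(b"encoded-witness", budget_bits=64)
first = prover.leak(lambda s: s[:4])         # 32 bits leaked
second = prover.leak(lambda s: s[-4:])       # another 32 bits
exceeded = False
try:
    prover.leak(lambda s: s)                 # would exceed the budget
except RuntimeError:
    exceeded = True
```

The budget accounting mirrors the definition: the adaptive choice of each circuit is unrestricted; only the total output length is bounded.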

Definition 4

Given a polynomial p, we say that an interactive argument/proof system \(({\mathsf {P}},{\mathsf {V}})\) for a language \(L \in \mathbb {NP}\) with a witness relation R is p(|x|)-leakage-resilient zero knowledge if for every probabilistic polynomial-time machine \({\mathsf {V}}^{*}\) launching a leakage attack on \({\mathsf {P}}\) after the input-encoding phase, obtaining at most p(|x|) bits, there exists a probabilistic polynomial-time machine \(\mathsf{Sim}\) such that for every \(x \in L\), every w such that \(R(x, w)=1\), and every \(z \in \{0, 1\}^{*}\), the distributions \(\langle {\mathsf {P}}(w),{\mathsf {V}}^{\star }(z)\rangle (x)\) and \(\mathsf{Sim}(x, z)\) are computationally indistinguishable.

The definition of standard zero-knowledge is obtained by enforcing that no leakage query is allowed to any machine and removing the input-encoding phase.

If in the above definition the simulator \(\mathsf{Sim}\) does not depend on the code of the verifier (i.e., the same simulator works with all possible verifiers), then the interactive argument/proof system is leakage-resilient black-box zero knowledge. We will denote by \(\mathsf{Sim}^{{\mathsf {V}}^\star }\) an execution of \(\mathsf{Sim}\) having oracle access to \({\mathsf {V}}^\star \).

3 Impossibility of Leakage-Resilient Zero Knowledge

Here we prove that LRZK argument systems exist only for \({\mathsf {BPP}}\) languages.

Tools. In our proof we assume the existence of a communication-efficient argument system \(\varPi =(\varPi .{\mathsf {P}},\varPi .{\mathsf {V}})\) for a specific auxiliary \(\mathbb {NP}\) language (to be defined later). Moreover we require such an argument system to be a resettable argument of knowledge. Specifically, we require that on common input x, \(\varPi .{\mathsf {P}}\) sends \(O(|x|^\epsilon )\) bits to \(\varPi .{\mathsf {V}}\) for an arbitrarily chosen constant \(\epsilon >0\). We denote, with a slight abuse of notation, by \(\varPi .{\mathsf {P}}\) the prover’s next message function; that is, \(\varPi .{\mathsf {P}}\) on input x, randomness \(r_1,\ldots ,r_{i-1}\) used in the previous \(i-1\) rounds, fresh randomness \(r_i\) and verifier messages \(v_1,\ldots ,v_{i}\) received so far, outputs \(\mathtt {msg}_i\), the prover’s i-th message. Similarly, we denote the verifier’s next message function by \(\varPi .{\mathsf {V}}\). Finally, we denote by \(\varPi .\mathsf{Ext}\) the extractor that in expected polynomial time outputs a witness for \(x \in L\) whenever a polynomial-time prover can make \(\varPi .{\mathsf {V}}\) accept \(x \in L\) with non-negligible probability.

Such a resettable argument of knowledge \(\varPi \) exists based on the existence of a family of collision-resistant hash functions. It can be obtained by starting with the public-coin universal argument of [3], which for \(\mathbb {NP}\) languages is also an argument of knowledge. By then applying the transformation of [4], which requires one-way functions, we obtain a protocol that is still communication efficient and is moreover a resettable argument of knowledge.

Theorem 1

Assume the existence of a family of collision-resistant hash functions. If an \(\mathbb {NP}\)-language L admits an \((|x|^\epsilon )\)-leakage-resilient black-box zero-knowledge argument system \(\varPi _{LRZK}=({\mathsf {P}},{\mathsf {V}})\) for some constant \(\epsilon >0\) then \(L\in {\mathsf {BPP}}\).

Proof. For the sake of contradiction, we assume that a language \(L\not \in {\mathsf {BPP}}\) admits a \((|x|^\epsilon )\)-leakage-resilient zero-knowledge argument system \(({\mathsf {P}},{\mathsf {V}})\) with black-box simulator \(\mathsf{Sim}\) for some constant \(\epsilon >0\). We now describe an adversarial verifier \({\mathsf {V}}^\star ={\mathsf {V}}^\star _{x,s,h,t}\), parameterized by input x, strings s and t, and function h from a family of collision-resistant hash functions. In the description of \({\mathsf {V}}^\star \), we let \(\{F_s\}\) be a pseudorandom family of functions.

Our proof makes use of the auxiliary language \(\varLambda \) consisting of the tuples \(\tau =(h,\tilde{w},\mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}})\) for which there exists \(\hat{w}\) such that \(h(\hat{w})=\tilde{w}\) and \(\langle {\mathsf {P}}(\hat{w};\mathtt {rand}^{{\mathsf {P}}}),{\mathsf {V}}(\mathtt {rand}^{{\mathsf {V}}})\rangle (x)=1\). Clearly, \(\varLambda \in \mathbb {NP}\). Let \(\varPi =(\varPi .{\mathsf {P}},\varPi .{\mathsf {V}})\) be a communication-efficient argument system for \(\varLambda \). We assume wlog that the number of rounds of \(\varPi \) is \(2\ell \) (i.e., \(\ell \) messages played by the verifier and \(\ell \) messages played by the prover) where \(\ell >1\) and that the verifier speaks first.

1.

    At the start of the interaction between \({\mathsf {P}}\) and \({\mathsf {V}}^\star \) on an n-bit input x with \(n=\mathsf{poly}(k)\), the state of \({\mathsf {P}}\) consists solely of the encoding \(\hat{w}\) of the witness w for \(x\in L\), where \(|\hat{w}|=\mathsf{poly}(n)\).

2.

    \({\mathsf {V}}^\star \) issues leakage query \(Q_0\) by specifying function h; as a reply, \({\mathsf {V}}^\star \) receives \(\tilde{w}=h(\hat{w})\), a hash of the encoding of the witness used by \({\mathsf {P}}\).

3.

    \({\mathsf {V}}^\star \) then selects randomness

    $$\mathtt {rand}=( \mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}}, \mathtt {rand}^{\varPi .{\mathsf {P}}}_1, \ldots ,\mathtt {rand}^{\varPi .{\mathsf {P}}}_{\ell }, \mathtt {rand}^{\varPi .{\mathsf {V}}}_1,\ldots ,\mathtt {rand}^{\varPi .{\mathsf {V}}}_{\ell },\mathtt {rand}^{\varPi .{\mathsf {V}}}_{\ell +1} )$$

    by setting \(\mathtt {rand}=F_s(\tilde{w}\circ x)\).

4.

    \({\mathsf {V}}^\star \) performs, by means of leakage queries, an execution of the protocol \(\varPi \) on common input \((h,\tilde{w},\mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}})\).

    Specifically, for round \(i=1,\ldots ,\ell \), \({\mathsf {V}}^\star \) computes

    $$v_{i}=\varPi .{\mathsf {V}}\left( (h,\tilde{w},\mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}}), \{\mathtt {msg}_{j}\}_{0<j<i}, \{\mathtt {rand}^{\varPi .{\mathsf {V}}}_{j}\}_{0<j\le i} \right) $$

    and issues leakage query \(Q_i\) for the prover’s next message function

    $$\varPi .{\mathsf {P}}\left( (h,\tilde{w},\mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}}),\ \cdot \ , \{v_j\}_{0<j\le i}, \{\mathtt {rand}^{\varPi .{\mathsf {P}}}_{j}\}_{0<j\le i} \right) $$

    that is to be applied to the state \(\hat{w}\) of prover \({\mathsf {P}}\). In other words, the query computes the prover’s i-th message \(\mathtt {msg}_i\) of an interaction of protocol \(\varPi \) in which prover \(\varPi .{\mathsf {P}}\) (running on randomness \(\mathtt {rand}^{\varPi .{\mathsf {P}}}_{1},\ldots ,\mathtt {rand}^{\varPi .{\mathsf {P}}}_\ell \)) tries to convince verifier \(\varPi .{\mathsf {V}}\) (running on randomness \(\mathtt {rand}^{\varPi .{\mathsf {V}}}_{1},\ldots ,\mathtt {rand}^{\varPi .{\mathsf {V}}}_{\ell },\mathtt {rand}^{\varPi .{\mathsf {V}}}_{\ell +1}\)) that \((h,\tilde{w},\mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}})\in \varLambda \).

    After receiving prover \(\varPi .{\mathsf {P}}\)’s last message, \({\mathsf {V}}^\star \) computes \(\varPi .{\mathsf {V}}\)’s output in this interaction:

    $$b=\varPi .{\mathsf {V}}((h,\tilde{w},\mathtt {rand}^{\mathsf {P}},\mathtt {rand}^{\mathsf {V}}), \mathtt {msg}_{1},\ldots ,\mathtt {msg}_{\ell }, \mathtt {rand}^{\varPi .{\mathsf {V}}}_{1},\ldots ,\mathtt {rand}^{\varPi .{\mathsf {V}}}_{\ell +1}).$$
5.

    If \(b=1\) then \({\mathsf {V}}^\star \) outputs t; otherwise, \({\mathsf {V}}^\star \) outputs \(\bot \).

This concludes the description of \({\mathsf {V}}^\star \).
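Step 3 above is the crucial one: all of \({\mathsf {V}}^\star \)'s randomness is a deterministic function of \((\tilde{w},x)\). A minimal sketch of this derivation, with HMAC-SHA256 as a hypothetical instantiation of \(F_s\):

```python
import hmac, hashlib

def derive_rand(s: bytes, w_tilde: bytes, x: bytes, n_bytes: int) -> bytes:
    """Expand F_s(w~ || x) into n_bytes of pseudorandom tape (counter mode).
    HMAC-SHA256 is only an illustrative choice of PRF."""
    out = b""
    ctr = 0
    while len(out) < n_bytes:
        out += hmac.new(s, w_tilde + x + ctr.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        ctr += 1
    return out[:n_bytes]

# Rewinding V* and replaying the same first leakage answer w~ yields the
# identical randomness, so the run of Pi inside the leakage queries cannot
# be changed by rewinds.
r1 = derive_rand(b"seed", b"hash-of-state", b"statement-x", 48)
r2 = derive_rand(b"seed", b"hash-of-state", b"statement-x", 48)
assert r1 == r2 and len(r1) == 48
```

Because the derivation is deterministic, \({\mathsf {V}}^\star \) behaves identically across rewinds as long as the simulator replies with the same \(\tilde{w}\), which is what makes black-box rewinding ineffective here.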

Counting the Number of Bits Leaked. The total number of bits leaked equals the length of the output of the first leakage query (i.e., the length in bits of a range element of the collision-resistant hash function), \(|\tilde{w}|=k\), plus the number of bits sent by the prover in \(\varPi \) which, for inputs of length n, is \(O(n^{\epsilon '})\) for an arbitrarily small constant \(\epsilon '>0\). Since \(n=\mathsf{poly}(k)\), the amount of leakage can be made smaller than \(n^\epsilon \) for any \(\epsilon >0\).
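Making the count explicit, for concreteness under the assumption that \(n=k^{c}\) for a constant \(c>1/\epsilon \):

$$\underbrace{k}_{\text{reply to } Q_0}+\underbrace{O(n^{\epsilon '})}_{\text{prover messages of } \varPi } \;=\; n^{1/c}+O(n^{\epsilon '}) \;<\; n^{\epsilon }$$

for every \(\epsilon '<\epsilon \) and sufficiently large n.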

\(\mathsf{Sim}\) Can Get t Only by Succeeding in \(\varPi \), Therefore Properly Answering the Leakage Queries. We continue by observing that the output of the real game (i.e., when \({\mathsf {P}}\) and \({\mathsf {V}}^\star _{x,s,h,t}\) interact) is t. Therefore, \(\mathsf{Sim}\) must output t when interacting with \({\mathsf {V}}^\star _{x,s,h,t}\) with overwhelming probability. Since \(\mathsf{Sim}\) is a black-box simulator, and since all messages of \({\mathsf {V}}^{\star }_{x,s,h,t}\), except for the last one, are independent of t, the only way \(\mathsf{Sim}\) can obtain t from \({\mathsf {V}}^\star _{x,s,h,t}\) is by replying with a value \(\tilde{w}\) to the first leakage query and by replying to queries \(Q_1,\ldots ,Q_\ell \) so as to define a transcript \(\mathsf {Conv}=(v_{1},\mathtt {msg}_{1},\ldots ,v_\ell ,\mathtt {msg}_{\ell })\) that for common input \((h,\tilde{w},\mathtt {rand}^{\mathsf {P}},\mathtt {rand}^{\mathsf {V}})\) produces \(1=\varPi .{\mathsf {V}}((h,\tilde{w},\mathtt {rand}^{\mathsf {P}},\mathtt {rand}^{\mathsf {V}}), \mathtt {msg}_{1},\ldots ,\mathtt {msg}_{\ell }, \mathtt {rand}^{\varPi .{\mathsf {V}}}_{1},\ldots ,\mathtt {rand}^{\varPi .{\mathsf {V}}}_{\ell +1})\).

By the security of the pseudorandom function, we can consider the same experiment except that \(\mathtt {rand}=\mathcal{R}(\tilde{w}\circ x)\) (computed by \({\mathsf {V}}^\star \) in step 3 of its description) where \(\mathcal R\) is a truly random function (i.e., each time \(\tilde{w}\circ x\) is new, \(\mathtt {rand}\) is computed by sampling fresh randomness).

We denote by \(\mathsf{Sim}_{R}^{{\mathsf {V}}^\star }\) the simulation in such a modified game. We can show (the proofs of the following lemmas are omitted for lack of space) that \(\mathsf{Sim}_{R}^{{\mathsf {V}}^\star }\) still outputs t with overwhelming probability.

Lemma 1

The output of \(\mathsf{Sim}_{R}^{{\mathsf {V}}^\star }\) is computationally indistinguishable from the output of \(\mathsf{Sim}^{{\mathsf {V}}^\star }\).

We can then show that \(\mathsf{Sim}_{R}^{{\mathsf {V}}^\star }(x,z)\) outputs t also for some \(x\not \in L\).

Lemma 2

If \(L \not \in {\mathsf {BPP}}\) then there exists some \(x\not \in L\) such that \(\mathsf{Sim}_{R}^{{\mathsf {V}}^\star }(x,z)\) outputs t with probability greater than 2/3.

Let \(x\not \in L\) be a special statement such that \(\mathsf{Sim}_{R}^{{\mathsf {V}}^\star }(x,z)\) outputs t with probability at least 2/3 (such an x exists since we are assuming that \(L\not \in {\mathsf {BPP}}\)). This means that \(\mathsf{Sim}_{R}\) feeds \({\mathsf {V}}^\star \) with a transcript of messages that with non-negligible probability produces t as output.

Let \(\mathsf{time}_{\mathsf{Sim}_{R}}\) be the expected running time of \(\mathsf{Sim}_{R}\). Consider the strict polynomial-time machine \(\mathsf{Sim}_{pR}\) that runs \(\mathsf{Sim}_{R}\) for at most \(3\mathsf{time}_{\mathsf{Sim}_{R}}\) steps.
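The truncation loses only a constant factor in the success probability; sketching the standard argument via Markov's inequality:

$$\text{Prob}\left[ \;\mathsf{Sim}_{R} \text{ runs for more than } 3\,\mathsf{time}_{\mathsf{Sim}_{R}} \text{ steps}\;\right] \le \frac{1}{3},$$

so whenever \(\mathsf{Sim}_{R}^{{\mathsf {V}}^\star }(x,z)\) outputs t with probability greater than 2/3, a union bound gives that \(\mathsf{Sim}_{pR}^{{\mathsf {V}}^\star }(x,z)\) outputs t with probability greater than \(2/3-1/3=1/3\).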

We can prove the following lemma.

Lemma 3

If \(L \not \in {\mathsf {BPP}}\) then there exists some \(x\not \in L\) such that \(\mathsf{Sim}_{pR}^{{\mathsf {V}}^\star }(x,z)\) outputs t with probability greater than 1/3.

For notation purposes, we say that a query of \(\mathsf{Sim}_{pR}\) to \({\mathsf {V}}^{\star }\) belongs to the i-th session if it is a tuple \((h,\tilde{w},\ldots )\) where \(\tilde{w}\) is the i-th distinct value played by \(\mathsf{Sim}_{pR}\) as the first message of \(\varPi .{\mathsf {P}}\) when answering a leakage query of \({\mathsf {V}}^{\star }\). Let \(\mathsf{time}_{\mathsf{Sim}_{pR}}\) be the strict polynomial corresponding to the running time of \(\mathsf{Sim}_{pR}\).

We can then prove the existence of a critical session i.

Lemma 4

There exist \(x\not \in L\) and \(i \in [\mathsf{time}_{\mathsf{Sim}_{pR}}]\) such that \(\mathsf{Sim}_{pR}^{{\mathsf {V}}^{\star }}\) obtains t after answering a query of the i-th session with non-negligible probability.

Consider now the augmented simulator \(\mathsf{Sim}_{pR}^{i^{{\mathsf {V}}^{\star }}}\) that works as \(\mathsf{Sim}_{pR}^{{\mathsf {V}}^{\star }}\) except that \({\mathsf {V}}^{\star }\) in the i-th session only sends h, while all other messages of \({\mathsf {V}}^{\star }\) are asked of an external oracle that plays the honest verifier of \(\varPi \). Let \(\mathsf{time}_{\varPi .\mathsf{Ext}}\) be the expected running time of \(\varPi .\mathsf{Ext}\).

We can prove the following lemma.

Lemma 5

There exist \(x\not \in L\) and \(i \in [\mathsf{time}_{\mathsf{Sim}_{pR}}]\) such that the extractor \(\varPi .\mathsf{Ext}\) of \(\varPi \), running in expected polynomial time, outputs a witness \(\hat{w}\) for \(\tau =(h,\tilde{w},\mathtt {rand}^{{\mathsf {P}}},\mathtt {rand}^{{\mathsf {V}}}) \in \varLambda \) with non-negligible probability. Moreover \(\text{ Prob }\left[ \;\langle {\mathsf {P}}(\hat{w}),{\mathsf {V}}\rangle (x)=1\;\right] \) is non-negligible.

We now show an adversarial prover \({\mathsf {P}}^{\star }\) that violates the soundness of \(\varPi _{LRZK}\). Let \(\varPi .\mathsf{Ext}_{p}\) be the strict polynomial-time extractor that behaves precisely as \(\varPi .\mathsf{Ext}\) (up to a given polynomial number of steps) as specified in the last part of the proof of Lemma 5.

\({\mathsf {P}}^{\star }\) works as follows:

1.

\({\mathsf {P}}^{\star }\) picks at random \(i \in [\mathsf{time}_{\mathsf{Sim}_{pR}}]\) and then runs \(\varPi .\mathsf{Ext}_{p}\) with respect to \(\mathsf{Sim}_{pR}^{i^{{\mathsf {V}}^{\star }}}\). If \(\varPi .\mathsf{Ext}_{p}\) does not output a state \(\hat{w}\) as part of a witness proving that \(\tau \in \varLambda \), then \({\mathsf {P}}^{\star }\) aborts.

2.

\({\mathsf {P}}^{\star }\) then runs the honest prover \({\mathsf {P}}\) of \(\varPi _{LRZK}\) on input \(\hat{w}\) to prove to an honest verifier \({\mathsf {V}}\) that \(x \in L\), where x is the above special statement (i.e., \(x \not \in L\)).

First of all, the running time of \({\mathsf {P}}^{\star }\) is clearly polynomial since both of the above steps take polynomial time. Then, we notice that by Lemma 5, both Step 1 and Step 2 complete without aborting with non-negligible probability. This is due to the fact that the extractor \(\varPi .\mathsf{Ext}_{p}\) fails only with negligible probability and that the extracted state \(\hat{w}\) gives an honest prover of \(({\mathsf {P}},{\mathsf {V}})\) a non-negligible probability of convincing the verifier. Therefore \({\mathsf {P}}^{\star }\) succeeds in proving a false statement to the honest \({\mathsf {V}}\) with non-negligible probability.

We have proved that if \(L \not \in {\mathsf {BPP}}\) then \(\varPi _{LRZK}\) cannot be both LRZK and sound.

3.1 Discussion on Non-Black-Box LRZK

Since we have shown that LRZK is impossible when security is proved through black-box simulation, a natural question is whether non-black-box simulation can be useful to overcome this impossibility result.

The technique that we have shown for the black-box case is based on an adversarial verifier \({\mathsf {V}}^\star \) that uses leakage queries to perform an execution of a resettably sound communication-efficient argument of knowledge \(\varPi \) against an honest prover. This makes the rewinding capabilities of the simulator ineffective, thereby showing the impossibility of black-box simulation.

However, the technique proposed by Barak in [2] allows for non-black-box straight-line simulation, thus bypassing the difficulty of simulating a protocol where rewinds are useless. The construction and simulator proposed by Barak in [2] yield public-coin constant-round zero knowledge with a straight-line simulator, going therefore beyond the limits of black-box simulation [29]. It is also known that non-black-box simulation allows for resettably sound zero knowledge [4], where a prover can reset a verifier while the protocol still remains sound and zero knowledge. This is similar to the setting in which our black-box impossibility result holds. Indeed our adversarial verifier \({\mathsf {V}}^\star \) is resilient to rewinds of the black-box simulator.

Having in mind the goal of overcoming the above impossibility result through non-black-box simulation, recall that in order to properly answer the leakage queries of our adversarial verifier, a simulator either must simulate the execution of the universal argument or must use a special trapdoor. Such a trapdoor must allow an honest prover of \(\varPi _{LRZK}\) to succeed in convincing an honest verifier that runs on input a randomness r. Such randomness is revealed by \({\mathsf {V}}^\star \) only after seeing the short representation \(\tilde{w}\) of the state. Barak's construction does not allow running the prover with an input different from a witness for \(x \in L\); however, we next present a simple variant of it that does.

A Variation of Barak's Construction. Consider the following variant of Barak's protocol: (1) the verifier sends the description of a CRHF h; (2) the prover sends \(h_w=h(\mathsf{{Com}}(w,u))\) to the verifier, where w is its private input, \(\mathsf{{Com}}\) is the commitment function of a non-interactive commitment scheme and u is a random string; (3) the verifier sends a random string z; (4) the prover runs a witness-indistinguishable universal argument proving that either \(x \in L\) or \(h_w\) corresponds to the hash of a commitment to a machine M that in at most \(n^{\log \log n}\) steps outputs z; the prover uses its private input w and u as witness in the universal argument.
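A minimal sketch of the modified step (2), with SHA-256 standing in for both h and the commitment (a real protocol needs a binding and hiding commitment; `commit` and `step2_prover` are hypothetical helper names):

```python
import hashlib, os

def commit(value: bytes, u: bytes) -> bytes:
    """Stand-in for Com(value; u): here just H(value || u). Illustrative
    only; it is binding but not hiding in any formal sense."""
    return hashlib.sha256(value + u).digest()

def step2_prover(w: bytes):
    """Compute h_w = h(Com(w, u)) from the prover's private input w."""
    u = os.urandom(32)                   # commitment randomness
    c = commit(w, u)
    h_w = hashlib.sha256(c).digest()     # h applied to the commitment
    return h_w, (w, u)                   # keep (w, u) as the UA witness

# The same slot can hold either a witness for x in L or the code of the
# verifier; nothing in step (2) distinguishes the two cases.
h_w, opening = step2_prover(b"witness-or-verifier-code")
assert len(h_w) == 32
```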

Notice that the variation is really minimal: it just consists of asking the prover to use its private input when computing \(h_w\). The impact of this variation is that the prover can now successfully run the protocol both when receiving as input a witness for \(x\in L\) and when receiving as input the code of the verifier.

The above small variation does not affect the zero-knowledge property (the proof is the same as Barak's), but it allows the simulator to answer the leakage queries of \({\mathsf {V}}^\star \), since the description of \({\mathsf {V}}^\star \) can be used as a legitimate encoded state with which a prover can convince a verifier using a randomness r (again, such r is revealed by \({\mathsf {V}}^\star \) upon receiving through a leakage query the short representation of the state of the prover).

We stress that the discussion so far does not propose a LRZK protocol; rather, it shows that the impossibility argument given for the black-box case fails when Barak's non-black-box techniques are considered.

Defeating Barak's Non-Black-Box Simulation Technique. While the above discussion seems to say that Barak's techniques could be used to design a LRZK protocol, we argue here that a breakthrough on non-black-box simulation is required in order to obtain a LRZK protocol. Notice that the above variation of Barak's construction allowed the prover to use a special trapdoor (the code of the verifier) instead of a witness to successfully run the protocol. Moreover, notice that the size of such a trapdoor is not bounded by a fixed polynomial in the length of the common input, since it depends on the size of the adversarial verifier. Instead, there exists a constant \(c>0\) such that the length of a legitimate encoded witness of a LRZK protocol for a common input of length n is at most \(n^c\). Therefore, let us consider an adversarial verifier that, just as in the impossibility proof for black-box LRZK, uses the leakage queries to execute a special protocol with a prover. In such a protocol, in addition to proving that the encoded state (consistent with the commitment already sent) makes the verifier accept, the prover also proves that the committed value is the hash of an encoded state of length at most \(n^c\). Then the code of the adversarial verifier cannot be used anymore, as the simulation fails for adversarial verifiers whose code is longer than \(n^c\). In other words, Barak's technique turns out to be insufficient. Additionally, the adversarial verifier might send a long vector of random strings \(r_1,\ldots ,r_\ell \), asking the prover to prove in the universal argument that the verifier would have accepted the proof when running with any of those \(\ell \) randomnesses. Since \(\ell \) can be greater than the upper bound on the encoded witness, there is no way to commit to a small machine that can predict all such strings.

In other words, we would need a non-black-box simulation technique that relies on standard assumptions and allows one to construct a protocol where the trapdoor used by the simulator is of an a-priori fixed bounded size and can thus be given as input to the prover. Notice that it is exactly because of this limitation (or, rather, because of the lack of it) on the size of the trapdoor that the construction from [2] requires the use of a witness-indistinguishable universal argument instead of a witness-indistinguishable argument of knowledge. In turn, this implies that the straight-line simulation of [2] can only be extended to bounded concurrency, leaving still unsolved the question of achieving constant-round concurrent zero knowledge under standard assumptions.

In conclusion, as for many other lower bounds in zero knowledge, when taking into account non-black-box simulation we cannot rule out the existence of a non-black-box LRZK argument system, but at the same time we have given evidence that, to obtain such a result, new breakthroughs on non-black-box simulation are required.

4 Impossibility of LRMPC

We now use again the technique of running a protocol in the head of the adversary through leakage queries to show that LRMPC is impossible, thereby solving a problem left open in [10]. For this simpler result we give only a sketch of the proof and we defer the additional definitions to [10]. We stress that the only variation here is that the interactive preprocessing does not take place (as required in the formulation of the open problem in [10]).

We can show that for many functionalities LRMPC with a leak-free input-encoding phase is impossible. The functionalities involved are those for which, when run multiple times keeping unchanged the input \(x_{i}\) of an honest player \(P_{i}\), the (short) outputs delivered to the dishonest players reveal more information about \(x_{i}\) than what a single output would reveal. Our impossibility requires just one dishonest player.

For simplicity we will now consider one such functionality: a variation of Yao’s Millionaires’ Problem, where n players \(P_{1},\ldots ,P_{n}\) send their inputs to the functionality \(\mathcal {F}\) and then \(\mathcal {F}\) outputs to all players a bit b specifying whether \(P_{1}\) is the richest one.

Theorem 2

Consider the n-party functionality \(\mathcal {F}\) that on input n k-bit strings \(x_{1},\ldots ,x_{n}\) outputs to all players the bit \(b=1\) when \(x_{1}\ge x_{j}\) for all \(j \in \{2,\ldots ,n\}\), and \(b=0\) otherwise. If at least one among \(P_{2},\ldots ,P_{n}\) is corrupted and can obtain two bits as total output of leakage queries, then there exists no LRMPC for \(\mathcal {F}\).

Proof

We will sketch the proof since the main ideas were already used in the proof of the impossibility of LRZK.

Assume by contradiction that there exists such a secure multi-party protocol \(\varPi \) for \(\mathcal {F}\). Assume wlog that all players are honest except \(P_{n}\). The adversary \(\mathsf{Adv}\) controls \(P_{n}\) and works as follows.

1.

It sends a leakage query that includes different encodings of the same value \(x_{2}=\cdots =x_{n}=2^{k-1}\) for players \(P_{2},\ldots ,P_{n}\); the leakage query asks for a “virtual” execution of the protocol in which \(P_{1}\) uses its state \(\hat{x}_{1}\), and asks that the output of \(P_{n}\) be returned.

2.

It repeats Step 1 changing the value used for the \(n-1\) encodings of \(P_{2},\ldots ,P_{n}\) (still a single value for all of them) according to binary search (i.e., \(2^{k-1}+2^{k-2}\) if the previous output was 1, or \(2^{k-2}\) otherwise).

3.

\(\mathsf{Adv}\) ends the protocol by outputting the first two bits of the original (i.e., pre-encoding) input of \(P_{1}\).

The communication complexity (from honest player to adversary) of this execution through leakage queries is the constant 2. Notice that the above leakage attack can be mounted with two queries, each obtaining one bit as output, or with one single query obtaining two bits as output. As a result of the above leakage attack, \(\mathsf{Adv}\) in the real world obtains the first two bits of \(x_{1}\), the original input of \(P_{1}\). \(\mathsf{Sim}\) in the ideal world does not have this information since it can perform only one query to \(\mathcal {F}\), therefore obtaining at most one bit.
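The binary-search attack above can be sketched concretely (a toy model, not the paper's formal framework): each virtual execution answers one comparison \(x_{1}\ge t\), and the sequence of answers spells out the most significant bits of \(x_{1}\):

```python
def millionaires_output(x1: int, t: int) -> int:
    """Output of F when P2,...,Pn all use input t: 1 iff x1 >= t."""
    return 1 if x1 >= t else 0

def leak_top_bits(x1: int, k: int, queries: int) -> str:
    """Recover the `queries` most significant bits of a k-bit x1 by
    binary search over the comparison threshold."""
    lo, hi = 0, 1 << k
    bits = []
    for _ in range(queries):
        mid = (lo + hi) // 2     # first threshold is 2^(k-1)
        b = millionaires_output(x1, mid)
        bits.append(str(b))
        if b:                    # x1 >= mid: next threshold adds 2^(k-2), ...
            lo = mid
        else:                    # x1 < mid: next threshold is 2^(k-2), ...
            hi = mid
    return "".join(bits)

# Example: k = 8, x1 = 0b10110101; two virtual executions recover the two
# most significant bits of x1.
print(leak_top_bits(0b10110101, 8, 2))  # -> "10"
```

With two one-bit answers the adversary learns two bits of \(x_{1}\), while the ideal-world simulator gets a single output of \(\mathcal {F}\), hence at most one bit.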