
1 Introduction

1.1 Background

In the setting of secure computation, a set of parties with private inputs wish to compute a joint function of their inputs, without revealing anything but the output. Protocols for secure computation guarantee privacy (meaning that the protocol reveals nothing but the output), correctness (meaning that the correct function is computed), and more. These security guarantees are provided in the presence of adversarial behavior. There are two classic adversary models that are typically considered: semi-honest (where the adversary follows the protocol specification but may try to learn more than allowed from the protocol transcript) and malicious (where the adversary can run any arbitrary polynomial-time attack strategy). In the information-theoretic model, security is obtained unconditionally and even in the presence of computationally unbounded adversaries. In contrast, in the computational model, security is obtained in the presence of polynomial-time adversaries and relies on cryptographic hardness assumptions.

Despite its stringent requirements, it has been shown that any polynomial-time functionality can be securely computed with computational security [3, 12, 23] and with information-theoretic security [4, 8, 20]. These results hold both for semi-honest and malicious adversaries, but a two-thirds honest majority must be assumed in order to obtain information-theoretic security (or an honest majority when assuming broadcast).

There are two main approaches for constructing secure computation protocols: the secret-sharing approach (followed by [4, 8, 12]) works by having the parties interact for every gate of the circuit, whereas the garbled-circuit approach (followed by [3, 23]) works by having the parties construct an encrypted version of the circuit which can be computed at once. Both approaches have importance and have settings where they perform better than the other. On the one hand, the garbled-circuit approach yields protocols with a constant number of rounds. Thus, in high-latency networks, they far outperform secret-sharing based protocols which have a number of rounds that is linear in the depth of the circuit being computed. On the other hand, protocols based on secret-sharing typically have low bandwidth, in contrast to garbled circuits that are large and costly in bandwidth. Given that the bandwidth is often the bottleneck, it follows that protocols with low communication have the potential to achieve much higher throughput.

1.2 Our Results

In this paper, we focus on the question of achieving secure computation in the presence of malicious adversaries with very high throughput on a fast network (without utilizing special-purpose hardware). We start with the recent three-party protocol of [1] that achieves security in the presence of semi-honest adversaries. The protocol requires transmitting only a single bit per AND gate, and the computation per gate is very simple. On a cluster of three 20-core servers with a 10 Gbps connection, the protocol of [1] achieves a rate of computation of 7 billion AND gates per second. This can be used, for example, to securely compute 1.3 million AES block operations per second.

Our approach to achieving malicious security follows the Beaver multiplication triple approach [2] used in [5, 9, 19] (and many follow-up works). According to this approach, the parties securely generate shares of triples \((a,b,c)\) where \(a,b\) are random and \(c=ab\) (for the case of Boolean circuits, this is equivalent to \(c=a \wedge b\)). Such triples can then be used to verify that AND gates are computed correctly. In the (difficult) case of no honest majority considered in [5, 9, 19], there are two major challenges: (a) how to generate such triples without malicious parties causing either the shares to be invalid or causing \(c\ne ab\), and (b) how to force the parties to send their “correct” values in the multiplication triple in the verification stage. The first problem is solved in [5, 9, 19] by using cut-and-choose: many triples are generated, some are opened, and the others are put in buckets and used to verify each other. The second problem is solved in [5, 9, 19] by using homomorphic MACs on all of the values. The generation of the triples to start with and the use of MACs adds additional overhead that is very expensive.

In this paper, we heavily utilize the fact that at most one party (out of 3) is corrupted in order to generate triples at very little expense, and to force the parties to send the correct values. In fact, the secret-sharing method used in [1] is such that it is possible to generate shares of random values without any interaction (using correlated randomness which is generated by the parties at almost no cost). Furthermore, we show how it is possible to detect if a malicious party sends incorrect values (in the prepared multiplication triples) when there is an honest majority, without requiring MACs of any kind. As a result, generating multiplication triples is very cheap. In turn, this enables us to generate a large number of triples at once, which further improves the parameters of the cut-and-choose step as well.

Overall, our protocol requires very simple computation, and achieves malicious security at very low communication cost. Specifically, with a statistical error of \(2^{-40}\) each party needs to send only 10 bits per AND gate to one other party; for \(2^{-80}\) this rises to only 16 bits per AND gate.

Based on the implementation results in [1], our estimates are that our new protocol should achieve a rate of over 500 million AND gates per second on the same setup as [1]. This is orders of magnitude faster than any other protocol achieving malicious security (see related work below).

1.3 Outline of Our Solution and Organization

In this section, we describe the different subprotocols and constructions that make up our protocol, and provide the high-level ideas behind our constructions.

In Sect. 2.1, we present the 2-out-of-3 secret-sharing scheme used in [1] and some important properties of it. Then, in Sect. 2.2 we describe the semi-honest protocol of [1] for multiplication (AND) gates. In addition, we prove a crucial property of this protocol that we heavily rely on in our construction: for any malicious adversary, the honest parties always hold a valid sharing after the multiplication protocol; the shared value may either equal the AND of the input (if the adversary follows the protocol) or its complement (if the adversary cheats).

In Sect. 2.3 we show how to generate correlated randomness (functionality \(\mathcal{F}_\mathrm{cr}\)); after an initial exchange of keys for a pseudorandom function, the protocol is non-interactive. This makes it highly efficient, and also secure for malicious adversaries (since there is no interaction at all and so no way to cheat). In Sect. 2.4, we use \(\mathcal{F}_\mathrm{cr}\) to securely compute functionality \({\mathcal{F}_\mathrm{rand}}\) that provides random shares to all parties. A very important feature of our protocol is based on the fact that \({\mathcal{F}_\mathrm{rand}}\) can be securely computed non-interactively using correlated randomness. This means that the first step in generating multiplication triples – generating shares of random ab via calls to \({\mathcal{F}_\mathrm{rand}}\) – can be carried out non-interactively and thus at a very fast rate.

In Sect. 2.5, we use \({\mathcal{F}_\mathrm{rand}}\) to carry out secure coin tossing (by generating shares of random values and then simply opening them); each coin is generated by sending just a single bit. We explain how this is achieved, since it introduces a key technique that we use throughout. As mentioned, shares of a random value can be generated non-interactively, and thus this step is secure for malicious adversaries by default. However, when opening the shares to obtain the coin, a malicious adversary can cheat by sending an incorrect share value. Here we critically utilize the fact that we have an honest majority. In particular, we can simply have all pairs of parties send their shares to each other. Since the sharing is valid and any two parties can reconstruct the secret, each party can reconstruct separately based on the shares received from each other party, and compare. If the adversary cheats, then the parties will reconstruct different secrets, which will result in an abort. Each share in our secret-sharing scheme consists of two bits, so this would cost each party sending 4 bits. However, we observe that in order to open it suffices to send only 1 bit of each share. Furthermore, we observe that if each party sends its bit to only one other party (\(P_1\) to \(P_2\), \(P_2\) to \(P_3\), and \(P_3\) to \(P_1\)), then the bit sent by one honest party to another will result in the correct coin (there is always one such pair, since only one party is corrupted). Thus, it actually suffices for each party to send its bit to only one other party and to record the result of the coin on a “public view” string. Then, at the end of the entire execution, before any output is revealed, the parties can compare their views by sending a collision-resistant hash of their local public view. If the two honest parties received a different coin at any point, then they will have different local public views and so will abort before anything is revealed.
As a result, coin-tossing can be achieved by each party sending just a single bit to one other party.
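The “public view” comparison described above can be illustrated with a short sketch (Python; SHA-256 stands in for the collision-resistant hash, and the class and method names are illustrative, not from the paper):

```python
import hashlib

class PublicView:
    """Running record of every coin a party observed; only a hash is exchanged."""
    def __init__(self):
        self.view = bytearray()

    def record_coin(self, bit):
        # Append the reconstructed coin to the local public view.
        self.view.append(bit)

    def digest(self):
        # Collision-resistant hash of the entire local public view.
        return hashlib.sha256(bytes(self.view)).hexdigest()

def views_agree(party_a, party_b):
    """End-of-execution check: parties compare hashes and abort on mismatch."""
    return party_a.digest() == party_b.digest()
```

If a cheating party caused the two honest parties to record even one differing coin, their digests differ and they abort before any output is revealed.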

In Sects. 2.6, 2.7 and 2.8, we introduce additional functionalities needed for our protocol. First, in order to carry out the cut-and-choose, a random permutation must be applied to the tuples generated. This is carried out using \({\mathcal{F}_\mathrm{perm}}\) (Sect. 2.6) which computes a random permutation of array indices. This functionality is easily realized by the parties just coin tossing the amount of randomness needed to define the permutation. In addition, in order for the parties to share inputs and obtain output, we need a way to deal shares and open shares that is secure for malicious adversaries. These are constructed in Sect. 2.7 (\(\mathcal{F}_\mathrm{reconst}\) for robustly reconstructing a secret to one party) and Sect. 2.8 (\({\mathcal{F}_\mathrm{share}}\) for robustly sharing a value).

We now explain how the above subprotocols can be used to generate correct multiplication triples. The parties first call \({\mathcal{F}_\mathrm{rand}}\) to generate shares of random values \(a,b\) and then run the semi-honest multiplication protocol of [1] to generate shares of c. As we have mentioned above, the semi-honest multiplication protocol has the property that even if the adversary is malicious, the shares of c are valid. However, if the adversary cheats then it may be the case that \(c=ab\oplus 1\) instead of \(c=ab\). In order to prevent this from happening, the parties generate many triples and use some to check the others. Namely, the parties first randomly choose a subset of the triples which are opened to verify that indeed \(c=ab\). This uses the subprotocol in Sect. 2.9 which carries out this exact check. Next, the remaining triples are partitioned randomly into buckets of size B (the random division is carried out using \({\mathcal{F}_\mathrm{perm}}\)). Then, in each bucket, \(B-1\) of the triples are used to verify that one of the triples is correct, except with negligible probability (without revealing the triple being checked). This uses the subprotocol of Sect. 2.10 which shows how to use one triple to verify that another is correct. This protocol is described in Sect. 3, and it securely computes functionality \({\mathcal{F}_\mathrm{triples}}\) that generates an array of random multiplication triples for the parties.

Finally, we show how to securely compute any functionality f using random multiplication triples. Intuitively, this works by the parties running the semi-honest multiplication protocol for each AND gate and verifying each multiplication using a triple. The verification method, as used in [9, 19], has the property that if a multiplication triple is good and the adversary cheats in the gate multiplication, then this is detected by the honest parties. As with all of our protocols, we take care to minimize the communication, and verify each gate by sending only 2 bits (beyond the single bit needed for the multiplication itself).
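Verifying a multiplication against a triple follows the standard Beaver-style check used in [9, 19]. The sketch below shows only the underlying Boolean arithmetic over plain bits; in the actual protocol these operations are performed on shares and the values d and e are opened (the function name is illustrative):

```python
def beaver_check(x, y, z, a, b, c):
    """Given a multiplication (x, y, z) with claimed z = x AND y, and a
    correct triple (a, b, c) with c = a AND b: open d = x^a and e = y^b,
    then compute the check bit, which is 0 iff z = x AND y."""
    d = x ^ a  # opened value
    e = y ^ b  # opened value
    return z ^ c ^ (d & b) ^ (e & a) ^ (d & e)
```

Expanding the XORs shows that every cross term cancels, leaving \(z \oplus xy\); so a cheating multiplication is caught whenever the triple itself is good.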

The efficiency of our construction relies heavily on the cut-and-choose parameters, both with respect to how many triples need to be opened and checked and the bucket size. In Sect. 5 we provide a tight analysis of this cut-and-choose game which yields a significant improvement over previous analyses for similar games in [5, 19]. For concrete parameters that are suitable for our protocol, our analysis is approximately 25% better than [5, 19].

Caveats. We stress that our protocol is specifically defined for the case of 3 parties only. This case is of interest for outsourced computations, as in the Sharemind business model [22], for the two-party setting where a third auxiliary server can be used, and in other settings of interest as described in [1]. The generalization of our protocol to more parties is not straightforward since we rely on replicated secret sharing, and the size of such shares increases exponentially in the number of parties. In addition, our protocol is only secure with abort; this is unlike other protocols for the honest majority case that achieve fairness. Nevertheless, this is sufficient for many applications. For this setting, we are able to achieve security for malicious adversaries with efficiency far beyond any other known protocol.

1.4 Related Work

Most of the work on concretely-efficient secure computation has focused on the dishonest majority case. These protocols are orders of magnitude less efficient than ours, but deal with a much more difficult setting. For example, the best protocols based on garbled circuits for batch executions [17, 21] require only sending 4 garbled circuits per execution. Even ignoring all of the additional work and communication (which is very significant), 4 garbled circuits per execution means sending 1000 bits per gate, which is 100 times the cost of our protocol. Likewise, the SPDZ/MASCOT protocol [14] communicates approximately 360 bits per gate for three parties, which is 36 times the cost of our protocol. The same is true for all other dishonest-majority protocols; cf. [5, 9, 19].

In the setting of an honest majority, the only highly-efficient protocol with security for malicious adversaries that has been implemented, to the best of our knowledge, is that of [18]. We compare our protocol to [18] in detail in Sect. 6. Our protocol is more than an order of magnitude cheaper both in communication and computation; however, their protocol is constant-round and therefore better suited to slow networks.

1.5 Definition of Security

Our protocols are proven secure under the standard ideal/real simulation paradigm, for the case of malicious adversaries and with abort.

2 Building Blocks and Subprotocols

2.1 The Secret Sharing Scheme

We denote the three parties by \(P_1,P_2\) and \(P_3\). Throughout the paper, in order to simplify notation, when we use an index (say i) to denote the ith party, we will write \(i-1\) and \(i+1\) to mean the “previous” and “subsequent” party, respectively. That is, when \(i=1\) then \(P_{i-1}\) is \(P_3\) and when \(i=3\) then \(P_{i+1}\) is \(P_1\).

We use the 2-out-of-3 secret sharing scheme of [1], defined as follows. In order to share a bit v, the dealer chooses three random bits \(s_1,s_2,s_3\in \{0,1\}\) under the constraint that \(s_1 \oplus s_2 \oplus s_3 = v\). Then:

  • \(P_1\)’s share is the pair \((t_1,s_1)\) where \(t_1=s_3\oplus s_1\).

  • \(P_2\)’s share is the pair \((t_2,s_2)\) where \(t_2=s_1\oplus s_2\).

  • \(P_3\)’s share is the pair \((t_3,s_3)\) and \(t_3=s_2\oplus s_3\).

It is clear that no single party’s share reveals anything about v. In addition, any two parties can obtain v; e.g., given \((t_1,s_1),(t_2,s_2)\) one can compute \(v = t_1 \oplus s_2\). We denote by \([v]\) a 2-out-of-3 sharing of the value v according to the above scheme.
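The scheme can be sketched in a few lines (Python; function names are illustrative):

```python
import secrets

def share(v):
    """2-out-of-3 share a bit v as above: pick random s1, s2, s3 with
    s1 ^ s2 ^ s3 = v; P_i's share is (t_i, s_i) where t_i = s_{i-1} ^ s_i
    (indices cyclic, 0-based here)."""
    s = [secrets.randbelow(2) for _ in range(2)]
    s.append(v ^ s[0] ^ s[1])
    return [(s[(i - 1) % 3] ^ s[i], s[i]) for i in range(3)]

def reconstruct(share_i, share_next):
    """Two cyclically adjacent parties recover v, e.g. v = t_1 ^ s_2."""
    t_i, _ = share_i
    _, s_next = share_next
    return t_i ^ s_next
```

Reconstruction works because \(t_i \oplus s_{i+1} = s_{i-1} \oplus s_i \oplus s_{i+1} = v\).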

Claim 2.1

The secret v together with the share of one party fully determine the shares of the other parties.

Proof:

By the definition of the secret sharing scheme, it holds that \(t_{i} = s_{i-1} \oplus s_i\). Since \((t_i,s_i)\) for some \(i \in \{1,2,3\}\) and v are determined, this determines both \(s_{i-1}\) and \(s_{i+1}\) as well. This follows since \(s_{i-1}=t_i \oplus s_i\) and \(s_{i+1}=v \oplus s_i \oplus s_{i-1}\). Thus, the shares of the other two parties are determined.   \(\blacksquare \)

Opening shares. We define a subprocedure, denoted \(\mathsf{open}([v])\), for our secret sharing scheme, as follows. Denote the shares of v by \(\Big \{(t_i,s_i)\Big \}_{i=1}^{i=3}\). Then, each party \(P_i\) sends \(t_i\) to \(P_{i+1}\), and each \(P_i\) outputs \(v =s_i \oplus t_{i-1}\).

Local operators for shares. We define the following local operators on shares:

  • Addition \([v_1] \oplus [v_2]\): Given a share \((t_i^1,s_i^1)\) of \(v_1\) and a share \((t_i^2,s_i^2)\) of \(v_2\), each party \(P_i\) computes: \((t_i^1 \oplus t_i^2, s_i^1 \oplus s_i^2)\).

  • Multiplication by a scalar \(\sigma \cdot [v]\): Given a share \((t_i,s_i)\) of v and a value \(\sigma \in \{0,1\}\), each party \(P_i\) computes \((\sigma \cdot t_i,\sigma \cdot s_i)\).

  • Addition of a scalar \([v] \oplus \sigma \): Given a share \((t_i,s_i)\) of v and a value \(\sigma \in \{0,1\}\), each party \(P_i\) computes \((t_i,s_i \oplus \sigma )\).

  • Complement \(\overline{[v]}\): Given a share \((t_i,s_i)\) of v, each party \(P_i\) computes \((t_i,\overline{s_i})\) (where \(\overline{b}\) is b’s complement).

We stress that when writing \([v_1]\oplus [v_2]\) the symbol “\(\oplus \)” is an operator on shares and not bitwise XOR, whereas when we write \(v_1\oplus v_2\) the symbol “\(\oplus \)” is bitwise XOR; likewise for the product and complement notation. We now prove that these local operators achieve the expected results.

Claim 2.2

Let \([v_1],[v_2]\) be shares and let \(\sigma \in \{0,1\}\) be a scalar. Then, the following properties hold:

  1. \([v_1] \oplus [v_2] = [v_1 \oplus v_2]\)

  2. \(\sigma \cdot [v_1] = [\sigma \cdot v_1]\)

  3. \([v_1] \oplus \sigma = [v_1 \oplus \sigma ]\)

  4. \(\overline{[v_1]} = [\overline{v_1}]\)

Proof:

Denote the shares of \(v_1\) and \(v_2\) by \(\Big \{(t_i^1,s_i^1)\Big \}_{i=1}^{i=3}\) and \(\Big \{(t_i^2,s_i^2)\Big \}_{i=1}^{i=3}\), respectively.

  1. We prove that \([v_1] \oplus [v_2] = [v_1 \oplus v_2]\) by showing that \(\{(t_i^1 \oplus t_i^2, s_i^1 \oplus s_i^2)\}_{i=1}^{i=3}\) is a valid sharing of \(v_1 \oplus v_2\). First, observe that the s-parts are valid since \((s_1^1 \oplus s_1^2) \oplus (s_2^1 \oplus s_2^2) \oplus (s_3^1 \oplus s_3^2) = (s_1^1 \oplus s_2^1 \oplus s_3^1) \oplus (s_1^2 \oplus s_2^2 \oplus s_3^2) = v_1 \oplus v_2\). Furthermore, for every i, \((t_i^1 \oplus t_i^2) = (s_{i-1}^1 \oplus s_i^1) \oplus (s_{i-1}^2 \oplus s_i^2) = (s_{i-1}^1 \oplus s_{i-1}^2) \oplus (s_i^1 \oplus s_i^2)\) as required.

  2. We prove that \(\sigma \cdot [v_1] = [\sigma \cdot v_1]\) by showing that \(\{(\sigma \cdot t_i^1,\sigma \cdot s_i^1)\}_{i=1}^{i=3}\) is a valid sharing of \(\sigma \cdot v_1\). This is true since \(\sigma \cdot s_1^1 \oplus \sigma \cdot s_2^1 \oplus \sigma \cdot s_3^1 = \sigma \cdot (s_1^1 \oplus s_2^1 \oplus s_3^1) = \sigma \cdot v_1\) and \(\sigma \cdot t_i^1 = \sigma \cdot (s_{i-1}^1 \oplus s_i^1) = \sigma \cdot s_{i-1}^1 \oplus \sigma \cdot s_i^1\) as required.

  3. We prove that \([v_1] \oplus \sigma = [v_1 \oplus \sigma ]\) by showing that \(\{(t_i^1,\sigma \oplus s_i^1)\}_{i=1}^{i=3}\) is a valid sharing of \(\sigma \oplus v_1\). This is true since \((\sigma \oplus s_1^1) \oplus (\sigma \oplus s_2^1) \oplus (\sigma \oplus s_3^1) = \sigma \oplus (s_1^1 \oplus s_2^1 \oplus s_3^1) = \sigma \oplus v_1\) and \(t_i^1 = s_{i-1}^1 \oplus s_i^1 = (\sigma \oplus s_{i-1}^1) \oplus (\sigma \oplus s_{i}^1)\).

  4. We prove that \(\overline{[v_1]} = [\overline{v_1}]\) by showing that \(\{(t_i^1,\overline{s}_i^1)\}_{i=1}^{i=3}\) is a valid sharing of \(\overline{v}_1\). This holds since \(\overline{s}_1^1 \oplus \overline{s}_2^1 \oplus \overline{s}_3^1 =\overline{s_1^1 \oplus s_2^1 \oplus s_3^1} = \overline{v}_1\) and \(t_i^1 = s_{i-1}^1 \oplus s_i^1 = \overline{s}_{i-1}^1 \oplus \overline{s}_i^1\).   \(\blacksquare \)
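Claim 2.2 is also easy to check exhaustively; a minimal sketch of the four operators (Python, with an inline copy of the sharing scheme; names are illustrative):

```python
import secrets

def share(v):
    """2-out-of-3 sharing: random s1, s2, s3 with s1^s2^s3 = v,
    P_i's share is (t_i, s_i) with t_i = s_{i-1} ^ s_i (0-based, cyclic)."""
    s = [secrets.randbelow(2) for _ in range(2)]
    s.append(v ^ s[0] ^ s[1])
    return [(s[(i - 1) % 3] ^ s[i], s[i]) for i in range(3)]

def open_shares(sh):
    return sh[0][0] ^ sh[1][1]          # v = t_1 ^ s_2

def add(sh1, sh2):                      # [v1] (+) [v2], componentwise XOR
    return [(t1 ^ t2, s1 ^ s2) for (t1, s1), (t2, s2) in zip(sh1, sh2)]

def scalar_mul(sigma, sh):              # sigma * [v]
    return [(sigma & t, sigma & s) for (t, s) in sh]

def scalar_add(sh, sigma):              # [v] (+) sigma, only the s-part changes
    return [(t, s ^ sigma) for (t, s) in sh]

def complement(sh):                     # complement of [v]
    return [(t, 1 - s) for (t, s) in sh]
```

Opening the result of each operator recovers exactly the value promised by items 1–4 of the claim.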

Consistency. In the setting that we consider here, one of the parties may be maliciously corrupted and thus can behave in an arbitrary manner. Thus, if parties define their shares based on values received, it may be possible that the honest parties hold values that are not a valid sharing of any value. We therefore define the notion of consistency of shares. We stress that this definition relates only to the shares held by the honest parties, since the corrupted party can always change its local values. As we will show after the definition, shares are consistent if they define a unique secret v.

Definition 2.3

Let \((t_1, s_1),(t_2,s_2)\) and \((t_3,s_3)\) be the shares held by parties \(P_1,P_2\) and \(P_3\) respectively, and let \(P_i\) be the corrupted party. We say that the shares are consistent if it holds that \(s_{i+1} = s_{i+2} \oplus t_{i+2}\).

In order to understand the definition, recall that in a valid sharing of v it holds that \(t_{i+2} = s_{i+1} \oplus s_{i+2}\). Thus, we obtain that \(s_{i+1}= s_{i+1} \oplus s_{i+2} \oplus s_{i+2} = s_{i+2} \oplus t_{i+2}\) as the definition requires. The intuition behind this is that, in order to reconstruct the secret, the honest parties \(P_{i+1}\) and \(P_{i+2}\) need to learn \(t_i\) and \(t_{i+1}\) respectively. However, since \(t_i\) is held by the corrupted party, we use the fact that \(t_i = t_{i+1} \oplus t_{i+2}\) to obtain that \(P_{i+1}\) can reconstruct the secret using \(t_{i+1}\) which it knows and \(t_{i+2}\) which is held by the other honest party. The definition says that computing the secret using \(P_{i+1}\)’s share and \(t_{i+2}\); i.e., computing \(s_{i+1} \oplus t_{i+1} \oplus t_{i+2}\), yields the same value as computing the secret using \(P_{i+2}\)’s share and \(t_{i+1}\); i.e., computing \(s_{i+2} \oplus t_{i+1}\). We stress that shares may be inconsistent. For example, if \(P_1\) is the corrupted party and the shares of the honest parties \(P_2,P_3\) are (1, 1) and (1, 1) respectively, then the shares are inconsistent since \(s_2 = 1\) whereas \(s_3 \oplus t_3 = 1 \oplus 1 = 0\). Thus, these shares cannot be the result of any sharing of any value.
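The consistency condition of Definition 2.3 can be sketched directly (Python, 0-indexed parties; illustrative):

```python
def consistent(shares, i):
    """shares[j] = (t_j, s_j); party i is corrupted (0-based, indices cyclic).
    Definition 2.3: the honest shares are consistent iff
    s_{i+1} == s_{i+2} ^ t_{i+2}."""
    t_far, s_far = shares[(i + 2) % 3]
    _, s_near = shares[(i + 1) % 3]
    return s_near == (s_far ^ t_far)
```

With \(P_1\) corrupted (index 0) and honest shares (1, 1) and (1, 1), the check fails, matching the inconsistency example above; any sharing produced honestly passes it for every choice of corrupted party.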

2.2 Computing AND Gates – One Semi-honest Corrupted Party

We review the protocol for securely computing AND (equivalently, multiplication) gates for semi-honest adversaries from [1], as it is used as a subprotocol in our protocol for malicious adversaries. This subprotocol requires each party to send only a single bit. The protocol works in two phases: in the first phase the parties compute a simple 3-out-of-3 XOR-sharing of the AND of the input bits, and in the second phase they convert this 3-out-of-3 sharing into the above-defined 2-out-of-3 sharing.

Let \((t_1,s_1),(t_2,s_2),(t_3,s_3)\) be a secret sharing of \(v_1\), and let \((u_1,w_1),(u_2,w_2)\), \((u_3,w_3)\) be a secret sharing of \(v_2\). We assume that the parties \(P_1,P_2,P_3\) hold correlated randomness \(\alpha _1,\alpha _2,\alpha _3\), respectively, where \(\alpha _1\oplus \alpha _2\oplus \alpha _3=0\). The parties compute 2-out-of-3 shares of \(v_1 v_2 = v_1 \wedge v_2\) as follows:

  1. Step 1 – compute 3-out-of-3 sharing:

     (a) \(P_1\) computes \(r_1=t_1 u_1 \oplus s_1 w_1 \oplus \alpha _1\), and sends \(r_1\) to \(P_2\).

     (b) \(P_2\) computes \(r_2=t_2 u_2 \oplus s_2 w_2 \oplus \alpha _2\), and sends \(r_2\) to \(P_3\).

     (c) \(P_3\) computes \(r_3=t_3 u_3 \oplus s_3 w_3 \oplus \alpha _3\), and sends \(r_3\) to \(P_1\).

     These messages are computed and sent in parallel.

  2. Step 2 – compute 2-out-of-3 sharing: In this step, the parties construct a 2-out-of-3 sharing from their given 3-out-of-3 sharing and the messages sent in the previous step. This requires local computation only.

     (a) \(P_1\) stores \((e_1,f_1)\) where \(e_1 = r_1\oplus r_3\) and \(f_1=r_1\).

     (b) \(P_2\) stores \((e_2,f_2)\) where \(e_2 = r_2\oplus r_1\) and \(f_2=r_2\).

     (c) \(P_3\) stores \((e_3,f_3)\) where \(e_3 = r_3\oplus r_2\) and \(f_3=r_3\).

It was shown in [1] that \(f_1 \oplus f_2 \oplus f_3 = r_1 \oplus r_2 \oplus r_3 = v_1 v_2\). Thus, the obtained sharing is a consistent sharing of \(v_1 v_2\). We now show something far stronger; specifically, we show that the above multiplication protocol (for semi-honest adversaries) always yields consistent shares, even when run in the presence of a malicious adversary. Depending on the adversary, the result is either a consistent sharing of the product or its complement, but it is always consistent.

Lemma 2.4

If \([v_1]\) and \([v_2]\) are consistent and \([v_3]\) was generated by executing the (semi-honest) multiplication protocol on \([v_1]\) and \([v_2]\) in the presence of one malicious party, then \([v_3]\) is a consistent sharing of either \(v_1 v_2\) or \(v_1 v_2 \oplus 1\).

Proof:

If the corrupted party follows the protocol specification then \([v_3]\) is a consistent sharing of \(v_1 v_2\). Else, since the multiplication protocol is symmetric, assume without loss of generality that \(P_1\) is the corrupted party. Then, the only way that \(P_1\) can deviate from the protocol specification is by sending \(r_1 \oplus 1\) to the honest \(P_2\) instead of \(r_1\), and in this case \(P_2\) will define its share to be \((e_2,f_2)=(r_2 \oplus r_1 \oplus 1, r_2)\). Meanwhile, \(P_3\) defines its share to be \((e_3,f_3)=(r_3 \oplus r_2, r_3)\), as it receives \(r_2\) from the honest \(P_2\). Thus, \(f_3 \oplus e_3 = r_3 \oplus (r_3 \oplus r_2) = r_2 = f_2\) meaning that \([v_3]\) is consistent by Definition 2.3. Furthermore, it is a sharing of \(v_1 v_2 \oplus 1\) since \(f_3 \oplus e_2 =r_3 \oplus (r_1 \oplus 1 \oplus r_2) = v_1 v_2 \oplus 1\) (utilizing the fact that \(r_1 \oplus r_2 \oplus r_3 =v_1 v_2\)).    \(\blacksquare \)
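The protocol and the effect of a cheating \(P_1\) can be sketched end-to-end (Python; the `cheat` flag is illustrative, and the sharing scheme is inlined for self-containment):

```python
import secrets

def share(v):
    """2-out-of-3 sharing of bit v (0-based, cyclic indices)."""
    s = [secrets.randbelow(2) for _ in range(2)]
    s.append(v ^ s[0] ^ s[1])
    return [(s[(i - 1) % 3] ^ s[i], s[i]) for i in range(3)]

def and_gate(sh1, sh2, cheat=False):
    """Semi-honest AND of [1]: P_i computes r_i = t_i*u_i ^ s_i*w_i ^ alpha_i
    with alpha_1 ^ alpha_2 ^ alpha_3 = 0 and sends r_i to P_{i+1}; the new
    share is (e_i, f_i) = (r_i ^ r_{i-1}, r_i). A cheating P_1 flips the
    message it sends to P_2 but keeps its own r_1."""
    alpha = [secrets.randbelow(2) for _ in range(2)]
    alpha.append(alpha[0] ^ alpha[1])
    r = [(t & u) ^ (s & w) ^ a
         for (t, s), (u, w), a in zip(sh1, sh2, alpha)]
    sent = r[:]                     # sent[i] is what P_i sends to P_{i+1}
    if cheat:
        sent[0] ^= 1                # malicious P_1 sends r_1 ^ 1 to P_2
    return [(r[i] ^ sent[(i - 1) % 3], r[i]) for i in range(3)]
```

Reconstructing from the two honest parties' shares (\(v = e_2 \oplus f_3\)) yields \(v_1 v_2\) in the honest run and \(v_1 v_2 \oplus 1\) when \(P_1\) cheats, and the honest shares satisfy Definition 2.3 in both cases, illustrating Lemma 2.4.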

2.3 Generating Correlated Randomness – \(\mathcal{F}_\mathrm{cr}^1/\mathcal{F}_\mathrm{cr}^2\)

Our protocol relies strongly on the use of random bits which are correlated. We define two types of correlated randomness:

  • Type 1: Consider an ideal functionality \(\mathcal{F}_\mathrm{cr}^1\) that chooses \(\alpha _1,\alpha _2,\alpha _3\in \{0,1\}\) at random under the constraint that \(\alpha _1\oplus \alpha _2 \oplus \alpha _3 = 0\), and sends \(\alpha _i\) to \(P_i\) for every i.

  • Type 2: Consider an ideal functionality \(\mathcal{F}_\mathrm{cr}^2\) that chooses \(\alpha _1,\alpha _2,\alpha _3\in \{0,1\}\) at random, and sends \((\alpha _1,\alpha _2)\) to \(P_1\), \((\alpha _2,\alpha _3)\) to \(P_2\), and \((\alpha _3,\alpha _1)\) to \(P_3\).

Generating correlated randomness efficiently. It is possible to generate type-1 correlated randomness with perfect security by having each party \(P_j\) simply choose a random \(\rho _j\in \{0,1\}\) and send it to \(P_{j+1}\). Then, each \(P_j\) defines \(\alpha _j = \rho _j \oplus \rho _{j-1}\) (observe that \(\alpha _1\oplus \alpha _2\oplus \alpha _3=0\) since each \(\rho \)-value appears twice). In order to compute type-2 correlated randomness, each party \(P_j\) sends a random \(\rho _j\) as before, but now each \(P_j\) outputs the pair \((\rho _{j-1},\rho _j)\). (Formally, the ideal functionalities \(\mathcal{F}_\mathrm{cr}^1/\mathcal{F}_\mathrm{cr}^2\) must be defined so that the corrupted party \(P_i\) has some influence, but this suffices.) Despite the elegance and simplicity of this solution, we use a different approach that does not require any communication. Although the above involves sending just a single bit, it would actually double the communication per AND gate, which is the efficiency bottleneck.
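The single-round method just described can be sketched as follows (Python; illustrative):

```python
import secrets

def correlated_randomness():
    """Each P_j picks a random rho_j and sends it to P_{j+1}.
    Type 1: P_j outputs alpha_j = rho_j ^ rho_{j-1}, so the alphas XOR to 0.
    Type 2: P_j outputs the pair (rho_{j-1}, rho_j)."""
    rho = [secrets.randbelow(2) for _ in range(3)]
    type1 = [rho[j] ^ rho[(j - 1) % 3] for j in range(3)]
    type2 = [(rho[(j - 1) % 3], rho[j]) for j in range(3)]
    return type1, type2
```

Each \(\rho\)-value appears in exactly two parties' outputs, which is what makes the type-1 values cancel and gives the type-2 pairs their cyclic overlap.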

Protocol 2.5 describes a method for securely computing correlated randomness, with computational security, without any interaction beyond a short initial setup. Observe that in the output of \(\mathcal{F}_\mathrm{cr}^1\), it holds that \(\alpha _1\oplus \alpha _2\oplus \alpha _3=0\). Furthermore, for every j, \(P_j\) does not know \(k_{j+1}\) which is used to generate \(\alpha _{j+1}\) and \(\alpha _{j+2}\). Thus, \(\alpha _{j+1}\) and \(\alpha _{j+2}\) are pseudorandom to \(P_j\), under the constraint that \(\alpha _2\oplus \alpha _3=\alpha _1\). This was proven formally in [1] and the same proof holds for the malicious setting.

[Protocol 2.5 – non-interactive computation of \(\mathcal{F}_\mathrm{cr}^1\)/\(\mathcal{F}_\mathrm{cr}^2\) from a pseudorandom function – appears here as a figure in the original.]
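Since Protocol 2.5 itself appears in a figure, the following is only a sketch of the idea inferred from the surrounding text and the simulator in Sect. 2.4 (Python, with HMAC-SHA256 standing in for the pseudorandom function F; the key-exchange pattern is an assumption):

```python
import hashlib
import hmac
import secrets

def prf(key, ident):
    """One-bit PRF sketch: HMAC-SHA256 truncated to a single bit (assumption)."""
    return hmac.new(key, ident, hashlib.sha256).digest()[0] & 1

def setup():
    """One-time setup: each P_j picks a key k_j and sends it to P_{j+1},
    so P_j ends up holding (k_{j-1}, k_j) and never sees k_{j+1}."""
    k = [secrets.token_bytes(16) for _ in range(3)]
    return [(k[(j - 1) % 3], k[j]) for j in range(3)]

def cr1(keys_j, ident):
    """Type-1 value of P_j for a given id: alpha_j = F_{k_{j-1}}(id) ^ F_{k_j}(id).
    After setup this requires no interaction, and the three alphas XOR to 0."""
    k_prev, k_own = keys_j
    return prf(k_prev, ident) ^ prf(k_own, ident)
```

A type-2 output would simply be the pair \((F_{k_{j-1}}(id), F_{k_j}(id))\), matching the pair \((r_{i-1},r_i)\) used in the simulation of Protocol 2.9.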

Formally defining the \(\mathcal{F}_\mathrm{cr}^1\)/\(\mathcal{F}_\mathrm{cr}^2\) ideal functionalities. A naive definition would be to have the ideal functionality choose \(\alpha _1,\alpha _2,\alpha _3\) and send \(\alpha _j\) to \(P_j\) for every \(j\in \{1,2,3\}\) (or send \((\alpha _{j-1},\alpha _j)\) to \(P_j\) in the \(\mathcal{F}_\mathrm{cr}^2\) functionality). However, securely realizing such a functionality would require a full-blown coin tossing protocol. In order to model our non-interactive method, which suffices for our protocol, we need to take into account that the corrupted party \(P_i\) can choose its \(k_i\) and this influences the output, as \(P_i\)’s value is generated in a very specific way using a pseudorandom function. In order for the view of the corrupted party to be like in the real protocol, we define the functionalities \(\mathcal{F}_\mathrm{cr}^1\)/\(\mathcal{F}_\mathrm{cr}^2\) so that they generate the corrupted party’s value in this exact same way.

The functionalities are described formally in Functionality 2.6. The corrupted party chooses two keys \(k,k'\) for the pseudorandom function F and sends them to the functionality. These keys are used to generate the values that are influenced by the corrupted party, whereas the other values are chosen uniformly. We denote by \(\kappa \) the computational security parameter, and thus the length of the keys \(k,k'\).

[Functionality 2.6 – the \(\mathcal{F}_\mathrm{cr}^1\)/\(\mathcal{F}_\mathrm{cr}^2\) ideal functionalities – appears here as a figure in the original.]

The following is proved in the full version of our paper.

Proposition 2.7

If F is a pseudorandom function, then Protocol 2.5 securely computes functionalities \(\mathcal{F}_\mathrm{cr}^1\) and \(\mathcal{F}_\mathrm{cr}^2\), respectively, with abort in the presence of one malicious party.

2.4 Generating Shares of a Random Value – \({\mathcal{F}_\mathrm{rand}}\)

In this section, we show how the parties can generate a sharing of a random secret value v known to none of them. Formally, we define the functionality \({\mathcal{F}_\mathrm{rand}}\) that chooses a random \(v\in \{0,1\}\), computes a sharing \([v]\), and sends each party its share of \([v]\). However, \({\mathcal{F}_\mathrm{rand}}\) allows the corrupted party to determine its own share, and thus computes the honest parties’ shares from the corrupted party’s share and the randomly chosen v. \({\mathcal{F}_\mathrm{rand}}\) is formally specified in Functionality 2.8.

[Functionality 2.8 – the \({\mathcal{F}_\mathrm{rand}}\) ideal functionality – appears here as a figure in the original.]

Protocol 2.9 describes how to securely compute \({\mathcal{F}_\mathrm{rand}}\) in the \(\mathcal{F}_\mathrm{cr}^2\)-hybrid model, without any interaction.

[Protocol 2.9 – computing \({\mathcal{F}_\mathrm{rand}}\) in the \(\mathcal{F}_\mathrm{cr}^2\)-hybrid model – appears here as a figure in the original.]

Observe that \(t_1\oplus t_2 \oplus t_3=0\). Furthermore, define \(v = s_1\oplus t_3 = r_1\oplus r_2 \oplus r_3\). Observe that \(s_2\oplus t_1\) and \(s_3\oplus t_2\) also both equal the same v. Thus, this non-interactive protocol defines a valid sharing \([v]\) for a random \(v\in \{0,1\}\). The fact that v is random follows from the fact that it equals \(r_1\oplus r_2\oplus r_3\). Now, by the definition of \(\mathcal{F}_\mathrm{cr}^2\), a corrupted \(P_i\) knows nothing of \(r_{i+1}=\alpha _{i+1}\) which is chosen uniformly at random, and thus the defined sharing is of a random value.
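The non-interactive derivation above can be sketched as follows (Python; the \(\mathcal{F}_\mathrm{cr}^2\) outputs are simulated by plain random bits, and names are illustrative):

```python
import secrets

def f_rand_shares():
    """Each P_i holds the pair (r_{i-1}, r_i) from F_cr^2 and locally sets
    its share to (t_i, s_i) = (r_{i-1} ^ r_i, r_i), with no interaction.
    The shared secret is v = r_1 ^ r_2 ^ r_3."""
    r = [secrets.randbelow(2) for _ in range(3)]        # F_cr^2 outputs
    shares = [(r[(i - 1) % 3] ^ r[i], r[i]) for i in range(3)]
    return shares, r[0] ^ r[1] ^ r[2]
```

The t-parts XOR to zero and every party's opening rule \(s_i \oplus t_{i-1}\) yields the same v, so the result is a valid sharing of a random bit.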

Proposition 2.10

Protocol 2.9 securely computes functionality \({\mathcal{F}_\mathrm{rand}}\) with abort in the \(\mathcal{F}_\mathrm{cr}^2\)-hybrid model, in the presence of one malicious party.

Proof:

Let \(\mathcal{A}\) be a real adversary; we define \(\mathcal{S}\) as follows:

  • \(\mathcal{S}\) receives \(\mathcal{A}\)’s input \(k,k'\) to \(\mathcal{F}_\mathrm{cr}^2\).

  • Upon receiving id from \(\mathcal{A}\) as intended for \(\mathcal{F}_\mathrm{cr}^2\), simulator \(\mathcal{S}\) simulates \(\mathcal{A}\) receiving back \((r_{i-1},r_i)=(F_{k'}(id),F_k(id))\) from \(\mathcal{F}_\mathrm{cr}^2\).

  • \(\mathcal{S}\) defines \(t_i=r_{i-1}\oplus r_{i}\) and \(s_i = r_i\), and externally sends \((t_i,s_i)\) to \({\mathcal{F}_\mathrm{rand}}\).

  • \(\mathcal{S}\) outputs whatever \(\mathcal{A}\) outputs.

We show that the joint distribution of the outputs of \(\mathcal{S}\) and the honest parties in an ideal execution is identical to that of \(\mathcal{A}\) and the honest parties in a real execution. In order to see this, observe that in a real execution, given fixed \(r_{i-1},r_i\) (as viewed by the adversary), the value v is fully determined by \(r_{i+1}\). In particular, by the definition of the secret-sharing scheme, \(v=s_i \oplus t_{i-1} = r_i \oplus r_{i-2} \oplus r_{i-1}= r_1 \oplus r_2 \oplus r_3\). Since \(r_{i+1}\) is randomly generated by \(\mathcal{F}_\mathrm{cr}^2\), this has the same distribution as \({\mathcal{F}_\mathrm{rand}}\) choosing v randomly (choosing v uniformly at random, and choosing some r uniformly at random and setting \(v=t_i\oplus r\), yield identical distributions). Thus, the joint distributions are identical.    \(\blacksquare \)

2.5 Coin Tossing – \({\mathcal{F}_\mathrm{coin}}\)

We now present a highly-efficient three-party coin tossing protocol that is secure in the presence of one malicious adversary. We define the functionality \({\mathcal{F}_\mathrm{coin}}\) that chooses s random bits \(v_1,\ldots ,v_s\in \{0,1\}\) and sends them to each of the parties. The idea behind our protocol is simply for the parties to invoke s calls to \({\mathcal{F}_\mathrm{rand}}\) and to then open the result (by each \(P_i\) sending \(t_i\) to \(P_{i+1}\); see Sect. 2.1). Observe that this in itself is not sufficient since a malicious party may send an incorrect opening, resulting in the honest parties receiving different output. This can be solved by using a subprocedure called \(\mathsf{compareview}()\) in which each party \(P_j\) sends its output to party \(P_{j+1}\). If any party receives a different output, then it aborts. The reason why this is secure is that the protocol guarantees that if \(P_i\) is corrupted then \(P_{i+2}\) receives the correct outputs \(v_1,\ldots ,v_s\); this holds because when opening the shares, the only values received by \(P_{i+2}\) are sent by the honest \(P_{i+1}\) and are not influenced by \(P_i\). Thus, \(P_{i+2}\)’s output is guaranteed to be correct, and if \(P_{i+1}\) and \(P_{i+2}\) have the same output then \(P_{i+1}\)’s output is also correct. This is formally described in Protocol 2.11.

[Figure: Protocol 2.11 – coin tossing]
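The open-and-compare idea can be sketched as follows. This is an illustrative simulation, again representing \(P_i\)'s share as \((t_i,s_i)\) with \(v=s_i\oplus t_{i-1}\); the cheater argument marks a corrupted party that flips the bit it sends.

```python
def open_round(shares, cheater=None):
    # Each P_i sends t_i to P_{i+1}; P_j then outputs v = s_j XOR t_{j-1}.
    # If cheater is set, that party flips the bit it sends.
    outs = []
    for j in range(3):
        t_recv = shares[(j - 1) % 3][0]
        if cheater == (j - 1) % 3:
            t_recv ^= 1                   # the corrupted sender lies
        outs.append(shares[j][1] ^ t_recv)
    return outs

def compareview(outs):
    # Each P_j sends its output to P_{j+1}; any mismatch causes an abort.
    return all(outs[j] == outs[(j + 1) % 3] for j in range(3))

# A consistent sharing of v = r0 ^ r1 ^ r2 for some fixed bits.
r = [1, 0, 1]
shares = [(r[i - 1] ^ r[i], r[i]) for i in range(3)]
assert compareview(open_round(shares))                 # honest run: all agree
assert not compareview(open_round(shares, cheater=0))  # cheating is caught
```

Note that a cheating \(P_i\) corrupts only the output of \(P_{i+1}\); \(P_{i+2}\)'s output stays correct, which is exactly the property used in the argument above.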

Proposition 2.12

Protocol 2.11 securely computes functionality \({\mathcal{F}_\mathrm{coin}}\) with abort in the \({\mathcal{F}_\mathrm{rand}}\)-hybrid model, in the presence of one malicious party.

Proof:

Let \(\mathcal{A}\) be the real adversary controlling \(P_i\); we construct the simulator \(\mathcal{S}\):

  1. \(\mathcal{S}\) receives \(v_1,\ldots ,v_s\) from the trusted party computing \({\mathcal{F}_\mathrm{coin}}\).

  2. \(\mathcal{S}\) invokes \(\mathcal{A}\) and simulates s calls to \({\mathcal{F}_\mathrm{rand}}\), as follows:

     (a) \(\mathcal{S}\) receives \(P_i\)’s share in every call to \({\mathcal{F}_\mathrm{rand}}\).

     (b) Given \(v_1,\ldots ,v_s\) and \(P_i\)’s shares, \(\mathcal{S}\) computes the value \(t_{i-1}\) that \(\mathcal{A}\) should receive from \(P_{i-1}\). (Specifically, for the \(\ell \)th value, let \((t^\ell _i,s^\ell _i)\) be \(P_i\)’s share and let \(v_\ell \) be the bit received from \({\mathcal{F}_\mathrm{coin}}\). Then, \(\mathcal{S}\) sets \(t_{i-1} = v_\ell \oplus s^\ell _i\). This implies that the “opening” is to \(v_\ell \).)

     (c) If \(\mathcal{A}\) sends an incorrect \(t_i\) value in any \(\mathsf{open}\) procedure, then \(\mathcal{S}\) sends \(\mathsf{abort}_{i+1}\) to \({\mathcal{F}_\mathrm{coin}}\), causing \(P_{i+1}\) to abort in the ideal model. Otherwise, it sends \(\mathsf{continue}_{i+1}\) to \({\mathcal{F}_\mathrm{coin}}\). (In all cases it sends \(\mathsf{continue}_{i-1}\) to \({\mathcal{F}_\mathrm{coin}}\), since \(P_{i-1}\) never aborts.)

By the way \({\mathcal{F}_\mathrm{rand}}\) is defined, the output distribution of \(\mathcal{A}\) and the honest parties in a real execution is identical to the output distribution of \(\mathcal{S}\) and the honest parties in an ideal execution. This is because each \(v_j\) is uniformly distributed, and \(\mathcal{S}\) can fully determine the messages that \(\mathcal{A}\) receives for any fixed \(v_1,\ldots ,v_s\).    \(\blacksquare \)

Deferring \(\mathsf{compareview}\) . Our protocol has the property that nothing is revealed to any party until the end, even if a party behaves maliciously. As such, it will suffice for us to run \(\mathsf{compareview}\) only once at the very end of the protocol before outputs are revealed. This enables us to have the parties compare their views by only sending a collision-resistant hash of their outputs, thereby reducing communication. As we will see below, this method will be used in a number of places, and all \(\mathsf{compareview}\) operations will be done together.

2.6 Random Shuffle – \({\mathcal{F}_\mathrm{perm}}\)

In our protocol, we will need to compute a random permutation of an array of elements (where each element is a “multiplication triple”). Let \({\mathcal{F}_\mathrm{perm}}\) be an ideal functionality that receives a vector \(\varvec{d}\) of length M from all parties, chooses a random permutation \(\pi \) over \(\{1,...,M\}\) and returns the vector \(\varvec{d}'\) defined by \(\varvec{d}'[i] = \varvec{d}[\pi [i]]\) for every \(i\in \{1,...,M\}\). Functionality \({\mathcal{F}_\mathrm{perm}}\) can be securely computed by the parties running the Fisher-Yates shuffle algorithm [10], and obtaining randomness via \({\mathcal{F}_\mathrm{coin}}\). This is formally described in Protocol 2.13.

[Figure: Protocol 2.13 – random shuffle]
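A sketch of the idea: each party expands the same shared coins (here a SHA-256 counter stream stands in for the output of \({\mathcal{F}_\mathrm{coin}}\), converted to uniform indices by rejection sampling) and runs Fisher-Yates locally, so all parties obtain the identical permutation. The helper names are illustrative.

```python
import hashlib

def shared_index(seed: bytes, ctr: int, bound: int):
    # Derive a uniform index in [0, bound) from the shared coin stream,
    # using rejection sampling; returns (index, updated counter).
    k = (bound - 1).bit_length() or 1
    while True:
        h = hashlib.sha256(seed + ctr.to_bytes(8, 'big')).digest()
        ctr += 1
        val = int.from_bytes(h, 'big') >> (256 - k)
        if val < bound:
            return val, ctr

def fisher_yates(d, seed: bytes):
    # Each party runs this on identical coins => identical random shuffle.
    d, ctr = list(d), 0
    for i in range(len(d) - 1, 0, -1):
        j, ctr = shared_index(seed, ctr, i + 1)
        d[i], d[j] = d[j], d[i]
    return d

out1 = fisher_yates(range(10), b'shared-seed')
out2 = fisher_yates(range(10), b'shared-seed')
assert out1 == out2                     # all parties agree on the permutation
assert sorted(out1) == list(range(10))  # and it is indeed a permutation
```

Rejection sampling keeps each swap index uniform, which is what makes the resulting permutation uniform.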

The following proposition follows trivially from the security of \({\mathcal{F}_\mathrm{coin}}\):

Proposition 2.14

Protocol 2.13 securely computes functionality \({\mathcal{F}_\mathrm{perm}}\) with abort in the \({\mathcal{F}_\mathrm{coin}}\)-hybrid model, in the presence of one malicious party.

2.7 Reconstruct a Secret to One of the Parties – \(\mathcal{F}_\mathrm{reconst}\)

In this section we show how the parties can open a consistent sharing \([v]\) of a secret v to one of the parties in a secure way. We will use this subprotocol for reconstructing the outputs in our protocol. We remark that we consider security with abort only, and thus the party who should receive the output may abort. We stress that this procedure is fundamentally different from the \(\mathsf{open}\) procedure of Sect. 2.1 in two ways: first, only one party receives output; second, the \(\mathsf{open}\) procedure does not guarantee correctness. In contrast, here we ensure that the party either receives the correct value or aborts. We stress, however, that the protocol is only secure if the sharing \([v]\) is consistent; otherwise, nothing is guaranteed. We formally define \(\mathcal{F}_\mathrm{reconst}\) in Functionality 2.15.

[Figure: Functionality 2.15 – \(\mathcal{F}_\mathrm{reconst}\)]

Note that \(\mathcal{F}_\mathrm{reconst}\) also sends \(P_i\)’s share to \(\mathcal{S}\). This is needed technically in the proof to enable simulation; it reveals nothing since this is the corrupted party’s share anyway. Also, observe that the output is determined solely by the honest parties’ shares; this guarantees that the corrupted party cannot influence the output beyond causing an abort.

We show how to securely compute \(\mathcal{F}_\mathrm{reconst}\) in Protocol 2.16. Intuitively, the protocol works by the parties sending their shares to \(P_j\) who checks that they are consistent, and reconstructs if yes. In order to reduce communication, we actually show that it suffices for the parties to send only the “t” parts of their shares.

[Figure: Protocol 2.16 – reconstructing a secret to one party]
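The check performed by \(P_j\) can be sketched as follows (illustrative only, with shares represented as \((t_i,s_i)\) where \(t_1\oplus t_2\oplus t_3=0\) and \(v=s_j\oplus t_{j-1}\); flip_from marks a corrupted sender):

```python
def reconstruct_to(shares, j, flip_from=None):
    # P_{j+1} and P_{j-1} send their t parts to P_j, who checks
    # t_1 ^ t_2 ^ t_3 == 0 and, if consistent, outputs v = s_j ^ t_{j-1}.
    t_next = shares[(j + 1) % 3][0] ^ (flip_from == (j + 1) % 3)
    t_prev = shares[(j - 1) % 3][0] ^ (flip_from == (j - 1) % 3)
    t_j, s_j = shares[j]
    if t_j ^ t_next ^ t_prev != 0:
        return None                     # inconsistent bits: P_j aborts
    return s_j ^ t_prev

r = [0, 1, 1]                           # v = r0 ^ r1 ^ r2 = 0
shares = [(r[i - 1] ^ r[i], r[i]) for i in range(3)]
assert reconstruct_to(shares, 0) == 0               # honest opening succeeds
assert reconstruct_to(shares, 0, flip_from=1) is None  # cheating => abort
```

Since any flipped bit violates the parity \(t_1\oplus t_2\oplus t_3=0\), the corrupted party can cause an abort but can never change the reconstructed value.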

Proposition 2.17

If the honest parties’ input shares are consistent as in Definition 2.3, then Protocol 2.16 securely computes \(\mathcal{F}_\mathrm{reconst}\) with abort, in the presence of one malicious party.

Proof:

Let \(\mathcal{A}\) be the real adversary controlling \(P_i\), and assume that the honest parties’ shares are consistent. We first consider the case that \(P_j\) is corrupt (i.e., \(i=j\)). In this case, the simulator \(\mathcal{S}\) receives v and \((t_i,s_i)=(t_j,s_j)\) from \(\mathcal{F}_\mathrm{reconst}\). These values fully define all other shares, and in particular the values \(t_{j+1}\) and \(t_{j-1}\). Thus, \(\mathcal{S}\) can simulate \(P_{j+1}\) and \(P_{j-1}\) sending \(t_{j+1}\) and \(t_{j-1}\) to \(P_j\).

We next consider the case that \(P_j\) is honest (i.e., \(j\ne i\)). In this case, \(\mathcal{S}\) receives \(P_i\)’s share \((t_i,s_i)\) from \(\mathcal{F}_\mathrm{reconst}\). Then, \(\mathcal{S}\) invokes \(\mathcal{A}\) and receives the bit \(t'_i\) that \(P_i\) would send to \(P_j\). If \(t'_i=t_i\) (where \(t_i\) is the correct share value as received from \(\mathcal{F}_\mathrm{reconst}\)), then \(\mathcal{S}\) sends \(\mathsf{continue}_j\) to \(\mathcal{F}_\mathrm{reconst}\) so that the honest \(P_j\) receives v. Otherwise, \(\mathcal{S}\) sends \(\mathsf{abort}_j\) to \(\mathcal{F}_\mathrm{reconst}\) so that the honest \(P_j\) outputs \(\bot \). Observe that in a real protocol \(P_j\) aborts unless \(t_1\oplus t_2 \oplus t_3=0\) (which is equivalent to \(t_j=t_{j+1}\oplus t_{j+2}\)). Thus, if the corrupted party sends an incorrect \(t_i\) value, then \(P_j\) will certainly abort. In contrast, if the adversary controlling \(P_i\) sends the correct \(t_i\), then the output will clearly be the correct v, again as in the ideal execution with \(\mathcal{S}\).    \(\blacksquare \)

2.8 Robust Sharing of a Secret – \({\mathcal{F}_\mathrm{share}}\)

In this section, we show how to share a secret that is held by one of the parties, who may be corrupt. This sub-protocol will be used to share the parties’ inputs in the protocol. We define \({\mathcal{F}_\mathrm{share}}\) in Functionality 2.18. We note that the corrupted party always provides its share as input, as in \({\mathcal{F}_\mathrm{rand}}\). In addition, the dealer provides v and the (honest) parties receive their correct shares as defined by these values.

[Figure: Functionality 2.18 – \({\mathcal{F}_\mathrm{share}}\)]

We show how to securely compute \({\mathcal{F}_\mathrm{share}}\) in Protocol 2.19. The idea behind the protocol is to first generate a random sharing via \({\mathcal{F}_\mathrm{rand}}\) which guarantees a consistent sharing of a random value (recall that this requires no communication). Next, the parties reconstruct the shared secret to the dealer, who can then send a single bit to “correct” the random share to its actual input. This ensures that the honest parties hold consistent shares, as long as a corrupt dealer sent the same bit to both; this is enforced by the parties comparing to ensure that they received the same bit from the dealer.

[Figure: Protocol 2.19 – computing \({\mathcal{F}_\mathrm{share}}\)]
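The correction step can be sketched as follows. This is an illustrative sketch under the \((t_i,s_i)\) share representation used above: since only the s parts are XORed with the public correction bit, the t parts still XOR to zero and consistency is preserved.

```python
def correct_sharing(rand_shares, v_rand, v):
    # The dealer, who reconstructed the F_rand value v_rand, broadcasts the
    # correction bit b = v ^ v_rand; each party XORs b into the s part of
    # its share.  The t parts are unchanged, so t_1^t_2^t_3 stays 0 and
    # every opening s_j ^ t_{j-1} now yields v.
    b = v ^ v_rand
    return [(t, s ^ b) for (t, s) in rand_shares]

r = [1, 1, 0]                                  # a random sharing of r0^r1^r2
rand_shares = [(r[i - 1] ^ r[i], r[i]) for i in range(3)]
v_rand = r[0] ^ r[1] ^ r[2]
shares = correct_sharing(rand_shares, v_rand, v=1)  # dealer's input is 1
opened = [shares[j][1] ^ shares[(j - 1) % 3][0] for j in range(3)]
assert opened == [1, 1, 1]                     # every opening yields v = 1
```

A corrupt dealer could send different correction bits to different parties, which is exactly why the protocol has the parties compare the bit they received.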

The following is proven in the full version of this paper.

Proposition 2.20

Protocol 2.19 securely computes \({\mathcal{F}_\mathrm{share}}\) with abort in the \(({\mathcal{F}_\mathrm{rand}},\mathcal{F}_\mathrm{reconst})\)-hybrid model, in the presence of one malicious party.

Deferring \(\mathsf{compareview}\) . As in Sect. 2.5, the \(\mathsf{compareview}\) step can be deferred to the end of the execution (before any output is revealed). When using this mechanism, the bits to be compared are simply added to the parties’ local views and stored, and they are compared at the end.

2.9 Triple Verification with Opening

A multiplication triple is a triple of shares \(([a],[b],[c])\) with the property that \(c=a\cdot b\). Our protocol works by constructing triples and verifying that indeed \(c=a\cdot b\). We begin by defining what it means for such a triple to be correct.

Definition 2.21

\(([a],[b],[c])\) is a correct multiplication triple if \([a],[b]\) and \([c]\) are consistent sharings, and \(c=a\cdot b\).

In our main protocol for secure computation, the parties will generate multiplication triples in two steps:

  1. The parties generate random sharings \([a]\) and \([b]\) by calling \({\mathcal{F}_\mathrm{rand}}\) twice.

  2. The parties run the semi-honest multiplication protocol described in Sect. 2.2 to obtain \([c]\).

Recall that by Lemma 2.4, the sharing \([c]\) is always consistent. However, if one of the parties is malicious, then it may be that \(c=ab\oplus 1\). Protocol 2.22 describes a method of verifying that a triple is correct. The protocol is very simple and is based on the fact that if the shares are consistent and \(c\ne ab\), then one of the honest parties will detect this in the standard \(\mathsf{open}\) procedure defined in Sect. 2.1. This protocol is called verification “with opening” since the values a, b, c are revealed.

[Figure: Protocol 2.22 – triple verification with opening]

Lemma 2.23

If \([a],[b],[c]\) are consistent shares, but \(([a],[b],[c])\) is not a correct multiplication triple, then both honest parties output \(\bot \) in Protocol 2.22.

Proof:

Let \(P_i\) be the corrupted party, and assume that \([a],[b],[c]\) are consistent shares, but \(([a],[b],[c])\) is not a correct multiplication triple. This implies that \(c=a \cdot b \oplus 1\). Therefore, in the \(\mathsf{open}\) procedures, party \(P_{i+2}\) will receive values \(a_{i+2},b_{i+2},c_{i+2}\) such that \(c_{i+2} \ne a_{i+2} \cdot b_{i+2}\), and will send \(\bot \) to both parties. (This holds since \(P_{i+2}\) receives messages only from \(P_{i+1}\) that are independent of what \(P_i\) sends.) Thus, both honest parties will output \(\bot \).    \(\blacksquare \)

2.10 Triple Verification Using Another (Without Opening)

We have seen how to check a triple by opening it and revealing its values a, b, c. Such a triple can no longer be used in the protocol. In this section, we show how to verify that a multiplication triple is correct without opening it, by using (and wasting) an additional random multiplication triple that is assumed to be correct. The method is described in Protocol 2.24. The idea behind the protocol is as follows. Given shares of x, y, z and of a, b, c, the parties compute and open shares of \(\rho =x\oplus a\) and \(\sigma =y\oplus b\); these values reveal nothing about x and y since a, b are both random. As we will show in the proof below, if one of (x, y, z) or (a, b, c) is correct and the other is incorrect (e.g., \(z\ne x\cdot y\) but \(c= a\cdot b\)) then \(z\oplus c\oplus \sigma \cdot a \oplus \rho \cdot b \oplus \rho \cdot \sigma =1\). Thus, this value can be computed and opened by the parties. If (x, y, z) is incorrect and (a, b, c) is correct, then the honest parties will detect this and abort. In order to save on communication, we observe that if the value to be opened must equal 0 then it must hold that \(s_j=t_{j-1}\). Thus, it suffices for the parties to compare a single bit.

[Figure: Protocol 2.24 – triple verification without opening]
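The identity underlying the check can be verified exhaustively over bits: the opened value equals the XOR of the two triples' error bits, so it is 1 exactly when one triple is correct and the other is not. The following is a standalone sanity check, not the protocol itself:

```python
from itertools import product

def check_bit(x, y, z, a, b, c):
    # The value opened in the verification step:
    #   z ^ c ^ (sigma AND a) ^ (rho AND b) ^ (rho AND sigma)
    rho, sigma = x ^ a, y ^ b
    return z ^ c ^ (sigma & a) ^ (rho & b) ^ (rho & sigma)

for x, y, z, a, b, c in product((0, 1), repeat=6):
    e_xyz = z ^ (x & y)   # 1 iff (x, y, z) is an incorrect triple
    e_abc = c ^ (a & b)   # 1 iff (a, b, c) is an incorrect triple
    assert check_bit(x, y, z, a, b, c) == e_xyz ^ e_abc
```

In particular, if both triples are correct the check opens to 0, and if exactly one is incorrect it opens to 1, which is what Lemma 2.25 exploits. (If both are incorrect the check passes, which is why the cut-and-bucket analysis must rule out fully bad buckets.)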

Lemma 2.25

If \(([a],[b],[c])\) is a correct multiplication triple and \([x],[y],[z]\) are consistent shares, but \(([x],[y],[z])\) is not a correct multiplication triple, then all honest parties output \(\bot \) in Protocol 2.24.

Proof:

Let \(P_i\) be the corrupted party. Assume that \(([a],[b],[c])\) is a correct multiplication triple, that \([x],[y],[z]\) are consistent sharings, but \(([x],[y],[z])\) is not a correct multiplication triple. This implies that all values a, b, c, x, y, z are well defined (from the honest parties’ shares) and that \(c=a b\) and \(z \ne x y\).

Let \(\rho =x\oplus a\) and \(\sigma = y \oplus b\). If \(P_{j+1}\) receives an incorrect bit from \(P_j\) in the openings of \(\rho \) and \(\sigma \) (i.e., if \(\rho _j\ne \rho \) or \(\sigma _j\ne \sigma \)) then it detects this in \(\mathsf{compareview}\) of Step 3 with \(P_{j+2}\) and thus both honest parties output \(\bot \). (Observe that \(P_{j+2}\) receives the openings from \(P_{j+1}\) who is honest and thus it is guaranteed that \(\rho _{j+1}=\rho \) and \(\sigma _{j+1}=\sigma \).)

We now show that if \(P_{i+1}\) and \(P_{i+2}\) did not output \(\bot \) in Step 3 (and thus \(\sigma _{i+1}=\sigma _{i+2}=\sigma \) and \(\rho _{i+1}=\rho _{i+2}=\rho \)), then \(P_{i+1}\) and \(P_{i+2}\) output \(\bot \) with probability 1 in Step 5. In order to show this, we first show that in this case, \([z] \oplus [c] \oplus \sigma _j \cdot [a] \oplus \rho _j \cdot [b] \oplus \rho _j \cdot \sigma _j=[1]\). Observe that \(z\ne xy\) and thus \(z=xy\oplus 1\), and that \(\sigma =y\oplus b\) and \(\rho =x\oplus a\). Thus, we have:

\(z \oplus c \oplus \sigma \cdot a \oplus \rho \cdot b \oplus \rho \cdot \sigma = (xy\oplus 1) \oplus ab \oplus (y\oplus b)\cdot a \oplus (x\oplus a)\cdot b \oplus (x\oplus a)\cdot (y\oplus b) = 1,\)

where the last equality follows from simple cancellations and the fact that by the assumption \(c=ab\). We therefore have that the honest parties hold a consistent sharing of 1. Denoting the respective shares of \(P_{i+1}\) and \(P_{i+2}\) by \((t_{i+1}, s_{i+1})\) and \((t_{i+2},s_{i+2})\), by the definition of the secret-sharing scheme we have that \(s_{i+2}=t_{i+1}\oplus 1\) and so \(s_{i+2}\ne t_{i+1}\). This implies that \(P_{i+2}\) sends \(\bot \) to all other parties in Step 5 of the protocol, and all output \(\bot \).    \(\blacksquare \)

Deferring \(\mathsf{compareview}\) :  In the first \(\mathsf{compareview}\), all parties include \(\rho _j,\sigma _j\) in their view. In the second \(\mathsf{compareview}\), \(P_j\) includes \(t_j\) in its joint view with \(P_{j+1}\) and includes \(s_j\) in its joint view with \(P_{j-1}\). Observe that the requirement that \(s_j=t_{j-1}\) is automatically fulfilled by requiring that the pairwise views be the same. This holds since \(P_j\) includes \(s_j\) in its view with \(P_{j-1}\), whereas \(P_{j-1}\) includes \(t_{j-1}\) in its view with \(P_j\). As a consequence, in the protocol when \(\mathsf{compareview}\) is deferred, each party stores two strings – one for its joint view with each of the other parties – and hashes these two strings separately at the end of the protocol. We remark that it is possible to use only a universal hash function by choosing the function after the views have been fixed, if this is desired. Recall that in \(\mathsf{compareview}\), it suffices for each party to send its view to one other party. Thus, all communication in our protocol follows the pattern that \(P_i\) sends messages to \(P_{i+1}\) only, for every \(i\in \{1,2,3\}\).

3 Secure Generation of Multiplication Triples – \({\mathcal{F}_\mathrm{triples}}\)

In this section, we present a three-party protocol for generating an array of correct multiplication triples, as defined in Definition 2.21. Formally, we securely compute the functionality \({\mathcal{F}_\mathrm{triples}}\) defined in Functionality 3.1.

[Figure: Functionality 3.1 – \({\mathcal{F}_\mathrm{triples}}\)]

We show how to securely compute \({\mathcal{F}_\mathrm{triples}}\) in Protocol 3.2. The idea behind the protocol is as follows. The parties first use \({\mathcal{F}_\mathrm{rand}}\) to generate many sharings of pairs of random values \([a_i],[b_i]\). Next, they run the semi-honest multiplication protocol of Sect. 2.2 to compute shares of \(c_i=a_i\cdot b_i\). However, since the multiplication protocol is only secure for semi-honest parties, a malicious adversary can cheat in this protocol. We therefore utilize the fact that even if the malicious adversary cheats, the resulting sharing \([c_i]\) is consistent, but it may be the case that \(c_i=a_ib_i\oplus 1\) instead of \(c_i=a_ib_i\) (see Lemma 2.4). We therefore use cut-and-choose to check that the triples are indeed correct. We do this by opening C triples using Protocol 2.22; this protocol provides a full guarantee that the parties detect any incorrect triple that is opened. Next, the parties randomly divide the remaining triples into “buckets” of size B and use Protocol 2.24 to verify that the first triple in the bucket is correct. Recall that in Protocol 2.24, one triple is used to check another without revealing its values. Furthermore, by Lemma 2.25, if the first triple is not correct and the second is, then this is detected by the honest parties. Thus, the only way an adversary can successfully cheat is if (a) no incorrect triples are opened, and (b) there is no bucket with both correct and incorrect triples. Stated differently, the adversary can only successfully cheat if there exists a bucket where all triples are incorrect. By appropriately choosing the bucket-size B and number of triples C to be opened, the cheating probability can be made negligibly small.

Proposition 3.3

Let N, B, C be such that \(N=CB^2\) and \((B-1)\log _2C\ge \sigma \). Then, Protocol 3.2 securely computes \({\mathcal{F}_\mathrm{triples}}\) with abort in the \(({\mathcal{F}_\mathrm{rand}},{\mathcal{F}_\mathrm{perm}})\)-hybrid model, with statistical error \(2^{-\sigma }\) and in the presence of one malicious party.

Proof Sketch:

Intuitively, the generated triples are of random values since \({\mathcal{F}_\mathrm{rand}}\) is used to generate \([a_i],[b_i]\) for all i (note that in \({\mathcal{F}_\mathrm{triples}}\) the adversary chooses its shares in \([a_i],[b_i]\); this is inherited from its capability in \({\mathcal{F}_\mathrm{rand}}\)). Then, Protocol 2.22 is used to ensure that the first C triples are all correct (recall that by Lemma 2.4, \([a_i],[b_i],[c_i]\) are all consistent sharings, and thus by Lemma 2.23 the honest parties output \(\bot \) if \(c_i\ne a_ib_i\)). Finally, Protocol 2.24 is used to verify that all of the triples in \(\varvec{d}\) are correct multiplication triples. By Lemma 2.25, if \(([a_1],[b_1], [c_1])\) in any of the buckets is not a correct multiplication triple, and there exists a \(j\in \{2,\ldots ,B\}\) for which \(([a_j],[b_j], [c_j])\) is a correct multiplication triple, then the honest parties output \(\bot \) (note that once again by Lemma 2.4, all of the shares are guaranteed to be consistent). Thus, the only way that \(\varvec{d}_i\) for some \(i\in [N]\) contains an incorrect multiplication triple is if all of the C opened triples were correct and the entire bucket \(\varvec{D}_i\) contains incorrect multiplication triples. Denote the event that this happens for some i by \(\mathsf{bad}\). By choosing B and C so that \(\mathrm{Pr}[\mathsf{bad}]\) is negligible, the protocol is secure. Observe that the triples are all generated and fixed before \({\mathcal{F}_\mathrm{perm}}\) is called, and thus the probability that \(\mathsf{bad}\) occurs is equal to the balls-and-buckets game of [5, 19], where the adversary wins only if there exists no “mixed bucket” (containing both good and bad balls). In [5, Proof of Lemma 12], it is shown that \(\mathrm{Pr}[\mathsf{bad}]<2^{-\sigma }\) when \((B-1)\log _2C\ge \sigma \) (these parameters are derived from their notation by setting \(C=\ell \), \(B=b\) and \(N=\ell b^2\)).
For any \(\sigma \), we therefore choose B and C such that \(N=CB^2\) and \((B-1)\log _2C\ge \sigma \), and the appropriate error probability is obtained.

Observe that simulation is easy; \(\mathcal{S}\) receives N triples from the trusted party and simulates the \(2(NB+C)\) calls to \({\mathcal{F}_\mathrm{rand}}\). \(\mathcal{S}\) places the appropriate values from the N triples in random places, and ensures that they will all be the first triple in each bucket (by setting the output of \({\mathcal{F}_\mathrm{perm}}\) appropriately). Then, \(\mathcal{S}\) sends \(\mathsf{continue}\) to the trusted party if and only if \(\mathcal{A}\) did not cheat in any multiplication. The only difference between the protocol execution with \(\mathcal{A}\) and the simulation with \(\mathcal{S}\) is in the case that \(\mathsf{bad}\) occurs, which happens with negligible probability.    \(\blacksquare \)

Concrete parameters. In our protocol, generation of triples is highly efficient. Thus, we can generate a very large number of triples at once (unlike [19]), which yields better parameters. In [19, Proof of Theorem 8] it was shown that when the probability of a triple being incorrect is 1/2, the adversary can cheat with probability at most \(2^{-\sigma }\) when \((1+\log _2N)(B-1)\ge \sigma \). This implies that for \(N=2^{20}\) and \(\sigma =40\), we can take \(B=3\) because \((1+\log _2N)(B-1)=21\cdot 2 = 42 \ge 40\). In order to make the probability of a triple being incorrect be (close to) 1/2, we can set \(C=3\cdot 2^{20}\). This implies that the overall number of triples required is \(6\cdot 2^{20}\).

An improved combinatorial analysis is provided in [5]. They show that when setting \(N=C B^2\), the adversary can cheat with probability at most \(2^{-\sigma }\) when \((B-1)\log _2C\ge \sigma \) (in [5], they write \(\ell \) instead of C and b instead of B). In order to minimize the number of triples overall, B must be kept to a minimum. For \(N=2^{20}\) and \(\sigma =40\), one can choose \(B=4\) and \(C=2^{16}\). It then follows that \((B-1)\log _2C=3 \cdot 16 = 48 > 40\). Thus, the overall number of triples required is \(NB+C=C B^3 + C = 2^{22} + 2^{16}\). Observe that these parameters derived from the analysis of [5] yield approximately 2/3 the cost of \(6\cdot 2^{20}\) as required by the analysis of [19]. (This follows because \(6\cdot 2^{20} = \frac{3}{2}\cdot 2^{22}\).)
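The parameter choice can be reproduced by a short search over B under the constraints \(N=CB^2\) and \((B-1)\log _2C\ge \sigma \) (an illustrative helper, not code from the paper):

```python
import math

def triple_params(N, sigma):
    # Smallest bucket size B with C = N / B^2 (from N = C*B^2) such that
    # (B-1)*log2(C) >= sigma; returns (B, C, total triples NB + C).
    B = 2
    while True:
        C = N // (B * B)
        if C >= 2 and (B - 1) * math.log2(C) >= sigma:
            return B, C, N * B + C
        B += 1

B, C, total = triple_params(2**20, 40)
assert (B, C, total) == (4, 2**16, 2**22 + 2**16)
```

For \(N=2^{20}\) and \(\sigma =40\) the search indeed returns \(B=4\) and \(C=2^{16}\), matching the numbers above.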

In Sect. 5 we provide a new analysis showing that it suffices to generate \(3\cdot 2^{20}+3\) triples. This is approximately 25% less than the analysis of [5]. Concretely, to generate 1 million validated triples, the analysis of [5] requires generating 4,065,536 triples initially, whereas we require only 3,000,003.

Deferring \(\mathsf{compareview}\) . In the calls to functionalities \({\mathcal{F}_\mathrm{rand}}\) and \({\mathcal{F}_\mathrm{perm}}\) and in the execution of Protocol 2.24, the parties use the \(\mathsf{compareview}\) subprocedure. As explained before, the parties actually compare only at the end of the entire triple-generation protocol, and compare a hash of the view instead of the entire view, which reduces communication significantly.

Using pseudorandomness in \({\mathcal{F}_\mathrm{perm}}\) . In practice, in order to reduce the communication, the calls to \({\mathcal{F}_\mathrm{coin}}\) inside \({\mathcal{F}_\mathrm{perm}}\) are only used in order to generate a short seed. Each party then applies a pseudorandom generator to the seed in order to obtain all of the randomness needed for computing the permutation. This protocol actually no longer securely computes \({\mathcal{F}_\mathrm{perm}}\) since the permutation is not random, and the corrupted party actually knows that it is not random since it has the seed. Nevertheless, by the proof of Proposition 3.3, we have that the only requirement from \({\mathcal{F}_\mathrm{perm}}\) is that the probability of \(\mathsf{bad}\) happening is negligible. Now, since the triples are fixed before \({\mathcal{F}_\mathrm{perm}}\) is called, the probability of \(\mathsf{bad}\) happening is simply the probability that a specific subset of permutations occurs (namely, those that map all of the incorrect triples into the same bucket(s)). If this occurs with probability that is non-negligibly higher than when a truly random permutation is used, then this can be used to distinguish the result of the pseudorandom generator from random.

4 Secure Computation of Any Functionality

In this section, we show how to securely compute any three-party functionality f. The idea behind the protocol is simple. The parties first use \({\mathcal{F}_\mathrm{triples}}\) to generate a vector of valid multiplication triples. Next, the parties compute the circuit using the semi-honest multiplication protocol of Sect. 2.2 for each AND gate. Recall that by Lemma 2.4, the result of this protocol is a triple of consistent shares \(([a],[b],[c])\) where \(c=ab\) or \(c=ab\oplus 1\), even when one of the parties is malicious. Thus, it remains to verify that for each gate it holds that \(c=ab\). Now, utilizing the valid triples generated within \({\mathcal{F}_\mathrm{triples}}\), the parties can use Protocol 2.24 to verify that \(([a],[b],[c])\) is a correct multiplication triple (i.e., that \(c=ab\)) without revealing anything about a, b or c. See Protocol 4.2 for a full specification.

We prove the following theorem:

Theorem 4.1

Let f be a three-party functionality. Then, Protocol 4.2 securely computes f with abort in the \(({\mathcal{F}_\mathrm{triples}},{\mathcal{F}_\mathrm{share}},\mathcal{F}_\mathrm{reconst})\)-hybrid model, in the presence of one malicious party.

Proof Sketch:

Intuitively, the security of this protocol follows from the following. If the adversary cheats in any semi-honest multiplication, then by Lemma 2.25 the honest parties output \(\bot \). This holds because by Lemma 2.4 all shares \([x],[y],[z]\) are consistent (but \(z\ne xy\)), while \(([a_k],[b_k],[c_k])\) is guaranteed to be a correct multiplication triple since it was generated by \({\mathcal{F}_\mathrm{triples}}\). Thus, the adversary must behave honestly throughout, and the security is reduced to the proof of security for the semi-honest case, as proven in [1].

The simulator for \(\mathcal{A}\) works by playing the role of the trusted party for \({\mathcal{F}_\mathrm{triples}}\) and \({\mathcal{F}_\mathrm{share}}\), and then by simulating the semi-honest multiplication protocol in the circuit emulation phase for every AND gate. The verification stage involving executions of Protocol 2.24 is then simulated by \(\mathcal{S}\) internally handing random \(\rho ,\sigma \) values to \(\mathcal{A}\) as if sent by \(P_{i-1}\). Since \(\mathcal{S}\) plays the trusted party in \({\mathcal{F}_\mathrm{triples}}\) and \({\mathcal{F}_\mathrm{share}}\), it knows all of the values held and therefore can detect if \(\mathcal{A}\) tries to cheat. If yes, then it simulates the honest parties aborting, and sends \(\bot \) to the trusted party as \(P_i\)’s input. Otherwise, it sends the input of \(P_i\) sent by \(\mathcal{A}\) in \({\mathcal{F}_\mathrm{share}}\). Finally, after receiving \(P_i\)’s output from the trusted party computing f, \(\mathcal{S}\) plays the ideal functionality computing \(\mathcal{F}_\mathrm{reconst}\). If \(\mathcal{A}\) sends \(\mathsf{abort}\) to \(\mathcal{F}_\mathrm{reconst}\) then \(\mathcal{S}\) sends \(\mathsf{abort}\) to the trusted party computing f; otherwise, it sends \(\mathsf{continue}\). We stress that the above simulation works since the semi-honest multiplication protocol is private in the presence of a malicious adversary, meaning that its view can be simulated before any output is revealed. This was shown in [1, Sect. 4]. Thus, the view of a malicious \(\mathcal{A}\) in the circuit-emulation phase is simulated as in [1], and then the verification phase is simulated as described above. This completes the proof sketch.    \(\blacksquare \)

Generating many triples in the offline phase. In many cases, the circuit being computed is rather small. However, the highest efficiency in \({\mathcal{F}_\mathrm{triples}}\) is achieved when taking a very large N (e.g., \(N=2^{20}\)). We argue that \({\mathcal{F}_\mathrm{triples}}\) can be run once, and the triples used for multiple different computations. This is due to the fact that the honest parties abort if any multiplication is incorrect, and so it makes no difference whether a single execution utilizing N gates is run, or multiple executions.

5 Improved Combinatorial Analysis

In this section we provide a tighter analysis of the probability that the adversary succeeds in circumventing the computation without being caught. This analysis allows us to reduce both the overall number of triples needed and the number of triples that are opened in the cut-and-bucket process, compared to [5, 19]. In our specific protocol, reducing the number of triples to be opened is of great importance since generation of a triple requires 3 bits of communication, while each opening requires 9 bits.

Loosely speaking, the adversary can succeed if the verification of an incorrect AND gate computation is carried out using an incorrect multiplication triple. This event can only happen if no incorrect triples were opened and if the entire bucket from which the incorrect triple came contained only incorrect triples. Otherwise, the honest parties would abort in the triples-generation protocol when running the check phase. Since the triples are randomly assigned to buckets, the probability that this event occurs is small (which is what we need to prove). Clearly, increasing the number of opened triples and the bucket size reduces the success probability of the adversary. However, increasing these parameters raises the computation and communication costs of our protocol. Thus, our goal is to minimize these costs by minimizing the number of triples generated (and opened). We denote by \(\sigma \) the statistical parameter, and our aim is to guarantee that the adversary succeeds with probability at most \(2^{-\sigma }\). Recall that C is the number of triples opened in the cut-and-bucket process and B is the size of each bucket. We start by defining the following balls-and-buckets game, which is equivalent to our protocol (in the game, a “bad ball” corresponds to an incorrect multiplication triple). We say that the adversary “wins” if the output of the game is 1.


\({\underline{\text {Game}_1(\mathcal{A},N,B,C):}}\)

  1.

    The adversary \(\mathcal{A}\) prepares \(M=NB + C\) balls. Each ball can be either bad or good.

  2.

    C random balls are chosen and opened. If one of the C balls is bad then output 0. Otherwise, the game proceeds to the next step.

  3.

    The remaining NB balls are randomly divided into N buckets of equal size B. Denote the buckets by \(B_1,...,B_N\). We say that a bucket is fully bad if all balls inside it are bad. Similarly, a bucket is fully good if all balls inside it are good.

  4.

    The output of the game is 1 if and only if there exists i such that bucket \(B_i\) is fully bad, and all other buckets are either fully bad or fully good.


Note that the condition in the last step forces the adversary to choose at least one bad ball if it wishes to win. We first show that for \(\mathcal{A}\) to win the game, the number of bad balls \(\mathcal{A}\) chooses must be a multiple of B, the size of a bucket.
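\(\text {Game}_1\) is simple enough to simulate directly. The following Python sketch is ours and purely illustrative (the function name and the small parameters \(N=8, B=3, C=3\) are our own choices); it implements a single run of the game and can be used to sanity-check the analysis that follows: an adversary whose number of bad balls is not a multiple of B never wins (Lemma 5.1), and corrupting exactly one bucket wins with probability \(N{NB+C \atopwithdelims ()B}^{-1}\) (Theorem 5.2 with \(t=1\)).

```python
import random
from math import comb

def game1(num_bad: int, N: int, B: int, C: int) -> bool:
    """A single run of Game_1 against an adversary that marks num_bad
    of the M = NB + C balls as bad. Returns True iff the output is 1."""
    balls = [True] * num_bad + [False] * (N * B + C - num_bad)  # True = bad
    random.shuffle(balls)
    opened, rest = balls[:C], balls[C:]
    if any(opened):                       # Step 2: a bad ball was opened
        return False
    buckets = [rest[i * B:(i + 1) * B] for i in range(N)]       # Step 3
    fully_bad = any(all(b) for b in buckets)
    no_mixed = all(all(b) or not any(b) for b in buckets)
    return fully_bad and no_mixed                               # Step 4

N, B, C = 8, 3, 3   # small parameters, chosen only for a fast simulation
# T = 4 is not a multiple of B = 3, so the adversary never wins (Lemma 5.1):
assert not any(game1(4, N, B, C) for _ in range(20000))
# Corrupting exactly one bucket (T = B bad balls):
wins = sum(game1(B, N, B, C) for _ in range(200000))
print(wins / 200000, "vs", N / comb(N * B + C, B))
```

The empirical winning rate should be close to \(N/\binom{NB+C}{B} = 8/\binom{27}{3}\) for these parameters.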

Lemma 5.1

Let T be the number of bad balls chosen by the adversary \(\mathcal{A}\). Then, a necessary condition for \(\text {Game}_1(\mathcal{A},N,B,C)=1\) is that \(T=B\cdot t\) for some \(t \in {\mathbb {N}}\).

Proof:

This follows immediately from the fact that the output of the game is 1 only if no bad balls are opened and all buckets are fully bad or good. Thus, if \(T \ne B \cdot t\), then either a bad ball is opened or there must be some bucket that is mixed, meaning that it has both bad and good balls inside it. Therefore, in this case, the output of the game will be 0.    \(\blacksquare \)

Following Lemma 5.1, we derive a formula for the success probability of the adversary in the game. We say that the adversary \(\mathcal{A}\) has chosen to “corrupt” t buckets if \(t=\frac{T}{B}\), where T is the number of bad balls generated by the adversary.

Theorem 5.2

Let t be the number of buckets \(\mathcal{A}\) has chosen to corrupt. Then, for every \(0<t\le N\) it holds that

$$\begin{aligned} \mathrm{Pr}[{Game}_1(\mathcal{A},N,B,C)=1]={N \atopwithdelims ()t}{NB +C \atopwithdelims ()tB}^{-1}. \end{aligned}$$

Proof:

Assume that \(\mathcal{A}\) has chosen to corrupt t buckets, i.e., \(\mathcal{A}\) has generated tB bad balls. Let \(E_c\) be the event that no bad balls were detected when opening C random balls. We have:

$$ \mathrm{Pr}[E_c]= \frac{{{NB + C - tB} \atopwithdelims (){C}}}{{{NB + C}\atopwithdelims (){C}}}= \frac{(NB + C - tB)!(NB)!}{(NB - tB)!(NB + C)!}. $$

Next, let \(E_B\) be the event that the bad balls are in exactly t buckets after permuting the balls (and so there are t fully bad buckets and all other buckets are fully good). There are (NB)! ways to permute the balls, but if we require the tB bad balls to fall in exactly t buckets then we first choose t buckets out of N, permute the tB bad balls inside them, and finally permute the other \(NB- tB\) balls in the other buckets. Overall, we obtain that

$$ \mathrm{Pr}[E_B]= \frac{{N \atopwithdelims ()t}(tB)!(NB - tB)!}{(NB)!}. $$

Combining the above, we obtain that

$$\begin{aligned} \mathrm{Pr}[{Game}_1(\mathcal{A},N,B,C)=1]= & {} \mathrm{Pr}[E_c \wedge E_B]=\mathrm{Pr}[E_c]\cdot \mathrm{Pr}[E_B]\\= & {} {N \atopwithdelims ()t}\frac{(NB + C - tB)!(tB)!}{(NB + C)!}\\= & {} {N \atopwithdelims ()t}{NB +C \atopwithdelims ()tB}^{-1}. \end{aligned}$$

   \(\blacksquare \)
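The algebraic simplification in the last step can also be verified mechanically. The short Python check below is our own sketch (function names are ours); using exact rational arithmetic, it confirms that \(\mathrm{Pr}[E_c]\cdot \mathrm{Pr}[E_B]\), in the product form derived above, collapses to the closed form of Theorem 5.2 for a range of small parameters.

```python
from fractions import Fraction
from math import comb, factorial

def closed_form(N, B, C, t):
    # Theorem 5.2: binom(N, t) * binom(NB + C, tB)^{-1}
    return Fraction(comb(N, t), comb(N * B + C, t * B))

def product_form(N, B, C, t):
    # Pr[E_c] and Pr[E_B] exactly as derived in the proof
    M = N * B + C
    pr_ec = Fraction(comb(M - t * B, C), comb(M, C))
    pr_eb = Fraction(comb(N, t) * factorial(t * B) * factorial(N * B - t * B),
                     factorial(N * B))
    return pr_ec * pr_eb

for N, B, C in [(10, 3, 3), (12, 2, 4), (6, 5, 7)]:
    for t in range(1, N + 1):
        assert closed_form(N, B, C, t) == product_form(N, B, C, t)
print("closed form matches the product form on all test parameters")
```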

Next, we show that if \(C \ge B\), then the best strategy of the adversary is to corrupt exactly one bucket. This allows us to derive an upper bound on the success probability of the adversary.

Theorem 5.3

If \(C \ge B\), then for every adversary \(\mathcal{A}\), it holds that

$$\begin{aligned} \mathrm{Pr}[{Game}_1(\mathcal{A},N,B,C)=1] \le N{NB + C \atopwithdelims ()B}^{-1}. \end{aligned}$$

Proof:

Following Theorem 5.2, we need to show that for every \(t\ge 1\)

$$ {N \atopwithdelims ()t}{NB +C \atopwithdelims ()tB}^{-1} \le N{NB + C \atopwithdelims ()B}^{-1}. $$

First, observe that when \(t=1\), the left-hand side of the inequality equals the right-hand side, and thus the claim holds.

Next, assume that \(t \ge 2\). It suffices to show that

$$ {N \atopwithdelims ()t}{NB +C \atopwithdelims ()tB}^{-1} \le {NB + C \atopwithdelims ()B}^{-1} $$

which is equivalent to proving that

$$ {N \atopwithdelims ()t} \frac{(tB)!(NB+C-tB)!}{(NB+C)!} \le \frac{B!(NB+C-B)!}{(NB+C)!} $$

which is in turn equivalent to proving that

$$ {N \atopwithdelims ()t} \frac{(tB)!}{B!} \le \frac{(NB+C-B)!}{(NB+C-tB)!}. $$

Multiplying both sides of the inequality by \(\frac{1}{(tB-B)!}\), we obtain that in order to complete the proof, it suffices to show that

$$\begin{aligned} {N \atopwithdelims ()t}{tB \atopwithdelims ()tB-B} \le {NB+C-B \atopwithdelims ()tB - B}. \end{aligned}$$
(1)

Using the assumption that \(C \ge B\), we obtain that instead of proving Eq. (1), it is sufficient to prove that

$$\begin{aligned} {N \atopwithdelims ()t}{tB \atopwithdelims ()tB-B} \le {NB \atopwithdelims ()tB - B}. \end{aligned}$$
(2)

To see that Eq. (2) holds, consider the following two combinatorial processes:

  1.

    Choose t buckets out of N. Then, choose \(tB - B\) out of the tB balls in these buckets.

  2.

    Choose \(tB - B\) out of NB balls.

Note that since \(t \ge 2\), it holds that \(tB-B >0\), and so both processes are well defined. Next, observe that both processes end with \(tB - B\) balls chosen from an initial set of NB balls. However, while the second process places no restriction on the selection, the first process requires that t buckets be chosen first and then allows the \(tB - B\) balls to be chosen only from those t buckets. Thus, the number of choice options in the second process is at least as large as in the first. Finally, since the first process counts exactly the left side of Eq. (2), whereas the second process counts exactly the right side of Eq. (2), we conclude that the inequality indeed holds.    \(\blacksquare \)
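The statement of Theorem 5.3 can also be checked numerically. The snippet below is our own illustrative sketch (the function name and sample parameters are ours); it verifies, in exact integer arithmetic, that for \(C = B\) the winning probability of Theorem 5.2 is maximized at \(t=1\), i.e., that the bound of Theorem 5.3 dominates every t.

```python
from math import comb

def check_t1_is_best(N: int, B: int, C: int) -> bool:
    """Check, for all t in 1..N, that
    comb(N, t) / comb(NB + C, tB) <= N / comb(NB + C, B),
    comparing cross-multiplied integers to avoid any rounding."""
    M = N * B + C
    return all(comb(N, t) * comb(M, B) <= N * comb(M, t * B)
               for t in range(1, N + 1))

# Sample parameters with C = B (the case used in Corollary 5.4):
for N, B in [(100, 3), (50, 4), (200, 2)]:
    assert check_t1_is_best(N, B, B)
print("t = 1 maximizes the adversary's winning probability in all cases")
```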

Corollary 5.4

If \(C=B\) and B, N are chosen such that \(\sigma \le \log \Big (\frac{(N \cdot B+B)!}{N \cdot B! \cdot (N \cdot B)!}\Big )\), then for every adversary \(\mathcal{A}\) it holds that \(\mathrm{Pr}[\text {Game}_1(\mathcal{A},N,B,C)=1] \le 2^{-\sigma }\).

Proof:

This follows directly from Theorem 5.3, which applies whenever \(C \ge B\). Setting \(C=B\) in the bound of Theorem 5.3, we have that the bound is fulfilled if

$$ N{NB + B \atopwithdelims ()B}^{-1} \le 2^{-\sigma } \quad \text { and so } \quad \frac{1}{N} \cdot {NB + B\atopwithdelims ()B} \ge 2^\sigma . $$

Taking log of both sides yields the result.    \(\blacksquare \)

Corollary 5.4 provides us with a way of computing the bucket size B for every possible N and \(\sigma \). For example, setting \(N=2^{20}\) and \(\sigma =40\) (meaning that we want to output \(2^{20}\) triples from the pre-processing protocol with error probability less than \(2^{-40}\)), we obtain that \(B=3\) suffices and we need to generate \(NB+C = 3\cdot 2^{20} +3\) triples, of which only 3 are opened. We performed this computation for \(N=2^{20},2^{30}\) and \(\sigma =40,80,128\) and compared the results with [5] (recall that according to their analysis, when setting \(N=C B^2\), the adversary can cheat with probability at most \(2^{-\sigma }\) when \((B-1)\log _2C\ge \sigma \)). The comparison is presented in Tables 1, 2 and 3. In the tables, we use “M” to denote the number of triples that are initially generated, i.e., \(M=NB+C\) in our work whereas \(M=CB^3+C\) in [5].

Table 1. Parameter comparison for \(\sigma =40\)
Table 2. Parameter comparison for \(\sigma =80\)
Table 3. Parameter comparison for \(\sigma =120\)

As can be seen, in all cases our combinatorial analysis yields a significant improvement, both in the number of triples generated and in the number of triples that must be opened. Specifically, while opening only a few triples, we reduce the overall number of generated triples by up to 25% compared to [5]. Recall that both improvements are important: each triple fewer to generate saves 1 bit of communication per party, and each triple fewer to open saves 3 bits per party.
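Corollary 5.4 makes the parameter choice mechanical. The following Python sketch is ours (the function name is our own); it searches for the smallest bucket size B with \(C=B\) satisfying \(N{NB+B \atopwithdelims ()B}^{-1} \le 2^{-\sigma }\), reproducing, e.g., \(B=3\) for \(N=2^{20}\) and \(\sigma =40\) as stated above.

```python
from math import comb

def min_bucket_size(N: int, sigma: int) -> int:
    """Smallest B (with C = B) such that N / comb(N*B + B, B) <= 2^{-sigma},
    i.e. comb(N*B + B, B) >= N * 2^sigma (Corollary 5.4, rearranged).
    B = 1 is excluded, since a bucket of size 1 allows no verification."""
    B = 2
    while comb(N * B + B, B) < N * (1 << sigma):
        B += 1
    return B

for sigma in (40, 80, 128):
    B = min_bucket_size(2**20, sigma)
    print(f"N=2^20, sigma={sigma}: B=C={B}, triples generated M={B * 2**20 + B}")
```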

6 Efficiency and Comparison

In this section, we describe the communication and computation complexity of (the computationally secure variant of) our protocol. We compare our protocol to that of [18], since this is the most efficient protocol known for our setting of three parties, malicious adversaries, and an honest majority. The complexity of the protocol of [18] is close to that of Yao’s two-party semi-honest protocol; thus its communication complexity is dominated by the size of the garbled circuit, and its computation complexity is dominated by the work needed to prepare and evaluate a garbled circuit. The comparison is summarized in Table 4; for our protocol, the complexity is based on a bucket size of \(B=3\). A detailed explanation appears below.

Table 4. Average cost per party (N = number of AND gates in the circuit)

Communication Complexity. We count the number of bits sent by each party for each AND gate. The semi-honest multiplication protocol requires sending a single bit, and verifying a triple using another triple without opening requires sending 2 bits (only a few triples are checked with opening, so we ignore this cost). Now, a single multiplication and a single verification are used for each AND gate (3 bits). Furthermore, each validated triple is generated from B triples (requiring B multiplications) and \(B-1\) verifications, thereby costing \(B+2(B-1)\) bits. The overall cost per gate is therefore \(B+2B-2+3 = 3B+1\). For \(B=3\) (which suffices with \(N=2^{20}\) and error \(2^{-40}\)), we conclude that only 10 bits are sent by each party per gate.
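The per-gate accounting above can be captured in a few lines (our own illustrative sketch; the function name is ours):

```python
def bits_per_gate(B: int) -> int:
    """Bits sent by each party per AND gate, following the accounting above:
    producing one validated triple costs B multiplications (1 bit each) and
    B - 1 verifications without opening (2 bits each); the gate itself costs
    one more multiplication and one more verification."""
    triple_cost = B + 2 * (B - 1)   # generate and verify one bucket of triples
    gate_cost = 1 + 2               # multiply for the gate, verify with the triple
    return triple_cost + gate_cost  # = 3B + 1

print(bits_per_gate(3))  # B = 3 suffices for N = 2^20 and error 2^{-40}
```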

In contrast, in [18], the communication is dominated by a single garbled circuit. When using the half-gates optimization of [24], such a circuit consists of two ciphertexts per AND gate, for a total of 256 bits per gate. Thus, on average, each party sends \(256/3 \approx 85\) bits per AND gate.

Number of AES computations. The computations in our protocol are very simple, and the computation complexity is therefore dominated by the AES computations needed to generate randomness (in the multiplication protocol and to compute correlated randomness). Two bits of pseudorandomness are needed for each call to \(\mathcal{F}_\mathrm{cr}\) to generate correlated randomness. In order to generate a triple, 2 calls to \(\mathcal{F}_\mathrm{cr}\) are required plus an additional call for the multiplication, for a total of 6 bits. For every AND gate, B triples are generated and one additional multiplication is carried out for the actual gate, resulting in a total of \(6B+2\) bits. In addition, \({\mathcal{F}_\mathrm{perm}}\) requires an additional \(NB\log (NB)\) bits of pseudorandomness. Thus, the total number of pseudorandom bits for N gates equals \((6B+2)N+NB\log (NB)\); taking \(B=3\) as above, we have \(20N+3N(\log 3N)\). Noting that 128 bits of pseudorandomness are generated with a single AES computation, this requires \(\frac{20N+3N(\log 3N)}{128}\approx \frac{N}{5}\) calls to AES.

In contrast, in the protocol of [18], two parties need to garble the circuit and one needs to evaluate it. Garbling a circuit requires 4 AES operations per AND gate and evaluating requires 2 operations per AND gate. Thus, the average number of AES operations is \(\frac{10}{3}\approx 3\) per party per AND gate.