
1 Introduction

Proof-carrying data (PCD) [CT10] is a cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely, while ensuring that every intermediate state of the computation can be succinctly verified. PCD supports computations defined on (possibly infinite) directed acyclic graphs, with messages passed along directed edges. Verification is facilitated by attaching to each message a succinct proof of correctness. This is a generalization of the notion of incrementally-verifiable computation (IVC) due to [Val08], which can be viewed as PCD for the path graph (i.e., for automata). PCD has found applications in enforcing language semantics [CTV13], verifiable MapReduce computations [CTV15], image authentication [NT16], succinct blockchains [Co17, KB20, BMRS20], and others.

Recursive Composition. Prior to this work, the only known method for constructing PCD was from recursive composition of succinct non-interactive arguments (SNARGs) [BCCT13, BCTV14, COS20]. This method informally works as follows. A proof that the computation was executed correctly for t steps consists of a proof of the claim “the t-th step of the computation was executed correctly, and there exists a proof that the computation was executed correctly for \(t-1\) steps”. The latter part of the claim is expressed using the SNARG verifier itself. This construction yields secure PCD (with IVC as a special case) provided the SNARG satisfies an adaptive knowledge soundness property (i.e., is a SNARK). The efficiency and security properties of the resulting PCD scheme correspond to those of a single invocation of the SNARK.

Limitations of Recursion. Recursion as realized in prior work requires proving a statement that contains a description of the SNARK verifier. In particular, for efficiency, we must ensure that the statement we are proving (essentially) does not grow with the number of recursion steps t. For example, if the representation of the verifier were to grow even linearly with the statement it is verifying, then the size of the statement to be checked would grow exponentially in t. Therefore, prior works have achieved efficiency by focusing on SNARKs which admit sublinear-time verification: either SNARKs for machine computations [BCCT13] or preprocessing SNARKs for circuit computations [BCTV14, COS20]. Requiring sublinear-time verification significantly restricts our choice of SNARK, which limits what we can achieve for PCD.

In addition to the above asymptotic considerations, recursion raises additional considerations concerning concrete efficiency. All SNARK constructions require that statements be encoded as instances of some particular (algebraic) NP-complete problem, and difficulties often arise when encoding the SNARK verifier itself as such an instance. The most well-known example of this is in recursive composition of pairing-based SNARKs, since the verifier performs operations over a finite field that is necessarily different from the field supported “natively” by the NP-complete problem [BCTV14]. This type of problem also appears when recursing SNARKs whose verifiers make heavy use of cryptographic hash functions [COS20].

A New Technique. Bowe, Grigg, and Hopwood [BGH19] suggest an exciting novel approach to recursive composition that replaces the SNARK verifier in the circuit with a simpler algorithm. This algorithm does not itself verify the previous proof \(\pi _{t-1}\). Instead, it adds the proof to an accumulator for verification at the end. The accumulator must not grow in size. A key contribution of [BGH19] is to sketch a mechanism by which this might be achieved for a particular SNARK construction. While they prove this SNARK construction secure, they do not include definitions or proofs of security for their recursive technique. Nonetheless, practitioners have already built software based on these ideas [Halo19, Pickles20].

1.1 Our Contributions

In this work we provide a collection of results that establish the theoretical foundations for the above approach. We introduce the cryptographic object, an accumulation scheme, that enables this technique, and prove that it suffices for constructing PCD. We then provide generic tools for building accumulation schemes, as well as several concrete instantiations. Our framework establishes the security of schemes that are already being used by practitioners, and we believe that it will simplify and facilitate further research in this area.

Accumulation Schemes. We introduce the notion of an accumulation scheme for a predicate \(\varPhi :X \rightarrow \{0,1\}\). This formalizes, and generalizes, an idea outlined in [BGH19]. An accumulation scheme is best understood in the context of the following process. Consider an infinite stream \(\mathsf {q}_1,\mathsf {q}_2,\ldots \) with each \(\mathsf {q}_i \in X\). We augment this stream with accumulators \(\mathsf {acc}_i\) as follows: at time i, the accumulation prover receives \((\mathsf {q}_i,\mathsf {acc}_{i-1})\) and computes \(\mathsf {acc}_i\); the accumulation verifier receives \((\mathsf {q}_i,\mathsf {acc}_{i-1},\mathsf {acc}_i)\) and checks that \(\mathsf {acc}_{i-1}\) and \(\mathsf {q}_{i}\) were correctly accumulated into \(\mathsf {acc}_i\) (if not, the process ends). Then at any time t, the decider can validate \(\mathsf {acc}_t\), which establishes that, for all \(i \in [t]\), \(\varPhi (\mathsf {q}_i) = 1\). All three algorithms are stateless. To avoid trivial constructions, we want (i) the accumulation verifier to be more efficient than \(\varPhi \), and (ii) the size of an accumulator (and hence the running time of the three algorithms) not to grow over time. Accumulation schemes are powerful, as we demonstrate next.
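As a concrete reference point, the following Python sketch models the stream process just described at the interface level. All names (`AccumulationScheme`, `process_stream`) are ours and purely illustrative; public parameters, oracles, and the actual instantiation of the three algorithms are omitted.

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable

@dataclass
class AccumulationScheme:
    """The three stateless algorithms of an accumulation scheme for a predicate Phi."""
    prover: Callable[[Any, Any], Any]          # P(acc_{i-1}, q_i) -> acc_i
    verifier: Callable[[Any, Any, Any], bool]  # V(acc_{i-1}, q_i, acc_i) -> accept?
    decider: Callable[[Any], bool]             # D(acc) -> accept?

def process_stream(scheme: AccumulationScheme, acc0: Any, stream: Iterable) -> Any:
    """Run the accumulation process over a stream of predicate inputs q_1, q_2, ...

    Only the accumulation verifier runs at each step; the (possibly expensive)
    predicate Phi is never evaluated here. Validating the returned accumulator
    with scheme.decider then establishes, by soundness, that Phi(q_i) = 1 for all i.
    """
    acc = acc0
    for q in stream:
        new_acc = scheme.prover(acc, q)
        if not scheme.verifier(acc, q, new_acc):
            raise ValueError("accumulation step rejected")
        acc = new_acc
    return acc
```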

Recursion from Accumulation. We say that a SNARK has an accumulation scheme if the predicate corresponding to its verifier has an accumulation scheme (so X is a set of instance-proof pairs). We show that any SNARK having an accumulation scheme where the accumulation verifier is sublinear can be used to build a proof-carrying data (PCD) scheme, even if the SNARK verifier is not itself sublinear. This broadens the class of SNARKs from which PCD can be built. Similarly to [COS20], we show that if the SNARK and accumulation scheme are post-quantum secure, so is the PCD scheme. (Though it remains an open question whether there are non-trivial accumulation schemes for post-quantum SNARKs.)

Theorem 1

(informal). There is an efficient transformation that compiles any SNARK with an efficient accumulation scheme into a PCD scheme. If the SNARK and its accumulation scheme are zero knowledge, then the PCD scheme is also zero knowledge. Additionally, if the SNARK and its accumulation scheme are post-quantum secure then the PCD scheme is also post-quantum secure.

The above theorem holds in the standard model (where all parties have access to a common reference string, but no oracles). Since our construction makes non-black-box use of the accumulation scheme verifier, the theorem does not carry over to the random oracle model (ROM). It remains an intriguing open problem to determine whether or not SNARKs in the ROM imply PCD in the ROM (and if the latter is even possible).

Note that we require a suitable definition of zero knowledge for an accumulation scheme. This is not trivial, and our definition is informed by what is required for Theorem 1 and what our constructions achieve.

Proof-carrying data is a powerful primitive: it implies IVC and, further assuming collision-resistant hash functions, also efficient SNARKs for machine computations. Hence, Theorem 1 may be viewed as an extension of the “bootstrapping” theorem of [BCCT13] to certain non-succinct-verifier SNARKs.

See Sect. 2.1 for a summary of the ideas behind Theorem 1, and the full version for technical details.

Accumulation from Accumulation. Given the above, a natural question is: where do accumulation schemes for SNARKs come from? In [BGH19] it was informally observed that a specific SNARK construction, based on the hardness of the discrete logarithm problem, has an accumulation scheme. To show this, [BGH19] first observe that the verifier in the SNARK construction is sublinear except for the evaluation of a certain predicate (checking an opening of a polynomial commitment [KZG10]), then outline a construction which is essentially an accumulation scheme for that predicate.

We prove that this idea is a special case of a general paradigm for building accumulation schemes for SNARKs.

Theorem 2

(informal). There is an efficient transformation that, given a SNARK whose verifier is succinct when given oracle access to a “simpler” predicate, and an accumulation scheme for that predicate, constructs an accumulation scheme for the SNARK. Moreover, this transformation preserves zero knowledge and post-quantum security of the accumulation scheme.

The construction underlying Theorem 2 is black-box. In particular, if both the SNARK and the accumulation scheme for the predicate are secure with respect to an oracle, then the resulting accumulation scheme for the SNARK is secure with respect to that oracle.

See Sect. 2.3 for a summary of the ideas behind Theorem 2, and the full version for technical details.

Accumulating Polynomial Commitments. Several works [MBKM19, GWC19, CHM+20] have constructed SNARKs whose verifiers are succinct relative to a specific predicate: checking the opening of a polynomial commitment [KZG10]. We prove that two natural polynomial commitment schemes possess accumulation schemes in the random oracle model: \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\), a scheme based on the security of discrete logarithms [BCC+16, BBB+18, WTS+18]; and \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\), a scheme based on knowledge assumptions in bilinear groups [KZG10, CHM+20].

Theorem 3

(informal). In the random oracle model, there exist (zero knowledge) accumulation schemes for \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\) and \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\) that achieve the efficiency outlined in the table below (n denotes the number of evaluation proofs, and \(d\) denotes the degree of committed polynomials).

| Polynomial commitment | Assumption | Cost to check evaluation proofs | Cost to check an accumulation step | Cost to check final accumulator | Accumulator size |
| \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\) | DLOG + RO | \(\varTheta (nd)\) \(\mathbb {G}\) mults. | \(\varTheta (n\log d)\) \(\mathbb {G}\) mults. | \(\varTheta (d)\) \(\mathbb {G}\) mults. | \(\varTheta (\log d)\) \(\mathbb {G}\) |
| \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\) | AGM + RO | \(\varTheta (n)\) pairings | \(\varTheta (n)\) \(\mathbb {G}_1\) mults. | 1 pairing | 2 \(\mathbb {G}_1\) |

For both schemes the cost of checking that an accumulation step was performed correctly is much less than the cost of checking an evaluation proof. We can apply Theorem 2 to combine either of these accumulation schemes for polynomial commitments with any of the aforementioned predicate-efficient SNARKs, which yields concrete accumulation schemes for these SNARKs with the same efficiency benefits.

We remark that our accumulation scheme for \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\) is a variation of a construction presented in [BGH19], and so our result establishes the security of a type of construction used by practitioners.

We sketch the constructions underlying Theorem 3 in Sect. 2.4, and provide details in the full version of our paper.

New Constructions of PCD. By combining our results, we (heuristically) obtain constructions of PCD that achieve new properties. Namely, starting from either \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\) or \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\), we can apply Theorem 2 to a suitable SNARK to obtain a SNARK with an accumulation scheme in the random oracle model. Then we can instantiate the random oracle, obtaining a SNARK and accumulation scheme with heuristic security in the standard (CRS) model, to which we apply Theorem 1 to obtain a corresponding PCD scheme. Depending on whether we started with \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\) or \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\), we get a PCD scheme with different features, as summarized below.

  • From \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\): PCD based on discrete logarithms. We obtain a PCD scheme in the uniform reference string model (i.e., without secret parameters) with small argument sizes. In contrast, prior PCD schemes require structured reference strings [BCTV14] or have larger argument sizes [COS20]. Moreover, our PCD scheme can be efficiently instantiated from any cycle of elliptic curves [SS11]. In contrast, prior PCD schemes with small argument size use cycles of pairing-friendly elliptic curves [BCTV14, CCW19], which are more expensive.

  • From \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\): lightweight PCD based on bilinear groups. The recursive statement inside this PCD scheme does not involve checking any pairing computations, because pairings are deferred to a verification that occurs outside the recursive statement. In contrast, the recursive statements in prior PCD schemes based on pairing-based SNARKs were more expensive because they checked pairing computations [BCTV14].

Note again that our constructions of PCD are heuristic as they involve instantiating the random oracle of certain SNARK constructions with an appropriate hash function. This is because Theorem 3 is proven in the random oracle model, but Theorem 1 is explicitly not (as is the case for all prior IVC/PCD constructions [Val08, BCCT13, BCTV14, COS20]). There is evidence that this limitation might be inherent [CL20].

Open Problem: Accumulation in the Standard Model. All known constructions of accumulation schemes for non-interactive arguments make use of either random oracles (as in our constructions) or knowledge assumptions (e.g., the “trivial” construction from succinct-verifier SNARKs). A natural question, then, is whether there exist constructions of accumulation schemes for non-interactive arguments, or any other interesting predicate, from standard assumptions, or any assumptions which are not known to imply SNARKs. A related question is whether there is a black-box impossibility for accumulation schemes similar to the result for SNARGs of [GW11].

1.2 Related Work

Below we survey prior constructions of IVC/PCD.

PCD from SNARKs. Bitansky, Canetti, Chiesa, and Tromer [BCCT13] proved that recursive composition of SNARKs for machine computations implies PCD for constant-depth graphs, and that this in turn implies IVC for polynomial-time machine computations. From the perspective of concrete efficiency, however, one can achieve more efficient recursive composition by using preprocessing SNARKs for circuits rather than SNARKs for machines [BCTV14, COS20]; this observation has led to real-world applications [Co17, BMRS20]. The features of the PCD scheme obtained from recursion depend on the features of the underlying preprocessing SNARK. Below we summarize the features of the two known constructions.

  • PCD from Pairing-based SNARKs. Ben-Sasson, Chiesa, Tromer, and Virza [BCTV14] used pairing-based SNARKs with a special algebraic property to achieve efficient recursive composition with very small argument sizes (linear in the security parameter \(\lambda \)). The use of pairing-based SNARKs has two main downsides. First, they require sampling a structured reference string involving secret values (“toxic waste”) that, if revealed, compromise security. Second, the verifier performs operations over a finite field that is necessarily different from the field supported “natively” by the statement it is checking. To avoid expensive simulation of field arithmetic, the construction uses pairing-friendly cycles of elliptic curves, which severely restricts the choice of field in applications and requires a large base field for security.

  • PCD from IOP-based SNARKs. Chiesa, Ojha, and Spooner [COS20] used a holographic IOP to construct a preprocessing SNARK that is unconditionally secure in the (quantum) random oracle model, which heuristically implies a post-quantum preprocessing SNARK in the uniform reference string model (i.e., without toxic waste). They then proved that any post-quantum SNARK leads to a post-quantum PCD scheme via recursive composition. The downside of this construction is that, given known holographic IOPs, the argument size is larger, currently at \(O(\lambda ^2 \log ^2 N)\) bits for circuits of size N.

IVC from Homomorphic Encryption. Naor, Paneth, and Rothblum [NPR19] obtain a notion of IVC by using somewhat homomorphic encryption and an information-theoretic object called an “incremental PCP”. The key feature of their scheme is that security holds under falsifiable assumptions.

There are two drawbacks, however, that restrict the use of the notion of IVC that their scheme achieves.

First, the computation to be verified must be deterministic (this appears necessary for schemes based on falsifiable assumptions given known impossibility results [GW11]). Second, and more subtly, completeness holds only in the case where intermediate proofs were honestly generated. This means that the following attack may be possible: an adversary provides an intermediate proof that verifies, but it is impossible for honest parties to generate new proofs for subsequent computations. Our construction of PCD achieves the stronger condition that completeness holds so long as intermediate proofs verify, ruling out this attack.

Both nondeterministic computation and the stronger completeness notion (achieved by all SNARK-based PCD schemes) are necessary for many of the applications of IVC/PCD.

2 Techniques

2.1 PCD from Arguments with Accumulation Schemes

We summarize the main ideas behind Theorem 1, which obtains proof-carrying data (PCD) from any succinct non-interactive argument of knowledge (SNARK) that has an accumulation scheme. For the sake of exposition, in this section we focus on the special case of IVC, which can be viewed as repeated application of a circuit F. Specifically, we wish to check a claim of the form “\(F^{T}(z_0) = z_{T}\)” where \(F^{T}\) denotes F composed with itself T times.

Prior Work: Recursion from Succinct Verification. Recall that in previous approaches to efficient recursive composition [BCTV14, COS20], at each step i we prove a claim of the form “\(z_i = F(z_{i-1})\), and there exists a proof \(\pi _{i-1}\) that attests to the correctness of \(z_{i-1}\)”. This claim is expressed using a circuit R which is the conjunction of F with a circuit representing the SNARK verifier; in particular, the size of the claim is at least the size of the verifier circuit. If the size of the verifier circuit grows linearly (or more) with the size of the claim being checked, then verifying the final proof becomes more costly than the original computation.

For this reason, these works focus on SNARKs with succinct verification, where the verifier runs in time sublinear in the size of the claim. In this case, the size of the claim essentially does not grow with the number of recursive steps, and so checking the final proof costs roughly the same as checking a single step.

Succinct verification is a seemingly paradoxical requirement: the verifier does not even have time to read the circuit R. One way to sidestep this issue is preprocessing: one designs an algorithm that, at the beginning of the recursion, computes a small cryptographic digest of R, which the recursive verifier can use instead of reading R directly. Because this preprocessing need only be performed once for the given R in an offline phase, it has almost no effect on the performance of each recursive step (in the later online phase).

A New Paradigm: IVC from Accumulation. Even allowing for preprocessing, succinct verification remains a strong requirement, and there are many SNARKs that are not known to satisfy it (e.g., [BCC+16, BBB+18, AHIV17, BCG+17, BCR+19]). Bowe, Grigg, and Hopwood [BGH19] suggested a further relaxation of succinctness that appears to still suffice for recursive composition: a type of “post-processing”. Their observation is as follows: if a SNARK is such that we can efficiently “defer” the verification of a claim in a way that does not grow in cost with the number of claims to be checked, then we can hope to achieve recursive composition by deferring the verification of all claims to the end.

In the remainder of this section, we will give an overview of the proof of Theorem 1, our construction of PCD from SNARKs that have this “post-processing” property. We note that this relaxation of requirements is useful because, as suggested in [BGH19], it leads to new constructions of PCD with desirable properties (see discussion at the end of Sect. 1.1). In fact, some of these efficiency features are already being exploited by practitioners working on recursing SNARKs [Halo19, Pickles20].

The specific property we require, which we discuss more formally in the next section, is that the SNARK has an accumulation scheme. This is a generalization of the idea described in [BGH19]. Informally, an accumulation scheme consists of three algorithms: an accumulation prover, an accumulation verifier, and a decider. The accumulation prover is tasked with taking an instance-proof pair \((z,\pi )\) and a previous accumulator \(\mathsf {acc}\), and producing a new accumulator \(\mathsf {acc}^{\star }\) that “includes” the new instance. The accumulation verifier, given \(((z,\pi ),\mathsf {acc},\mathsf {acc}^{\star })\), checks that \(\mathsf {acc}^{\star }\) was computed correctly (i.e., that it accumulates \((z,\pi )\) into \(\mathsf {acc}\)). Finally, the decider, given a single accumulator \(\mathsf {acc}\), performs a single check that simultaneously ensures that every instance-proof pair accumulated in \(\mathsf {acc}\) verifies.

Given such an accumulation scheme, we can construct IVC as follows. Given a previous instance \(z_{i}\), proof \(\pi _{i}\), and accumulator \(\mathsf {acc}_{i}\), the IVC prover first accumulates \((z_{i},\pi _{i})\) with \(\mathsf {acc}_{i}\) to obtain a new accumulator \(\mathsf {acc}_{i+1}\). The IVC prover also generates a SNARK proof \(\pi _{i+1}\) of the claim: “\(z_{i+1} = F(z_{i})\), and there exist a proof \(\pi _{i}\) and an accumulator \(\mathsf {acc}_{i}\) such that the accumulation verifier accepts \(((z_{i},\pi _{i}),\mathsf {acc}_{i},\mathsf {acc}_{i+1})\)”, expressed as a circuit R. The final IVC proof then consists of \((\pi _{T},\mathsf {acc}_{T})\). The IVC verifier checks such a proof by running the SNARK verifier on \(\pi _{T}\) and the accumulation scheme decider on \(\mathsf {acc}_{T}\).
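The sketch below mirrors this IVC construction, with hypothetical callables `snark_prove`, `snark_verify`, `acc_prove`, and `decider` standing in for the SNARK and the accumulation scheme; it is an interface-level illustration, not the formal construction in the paper.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class IVCState:
    z: Any       # current value z_i
    proof: Any   # SNARK proof pi_i
    acc: Any     # accumulator acc_i

def ivc_step(F: Callable, snark_prove: Callable, acc_prove: Callable,
             state: IVCState) -> IVCState:
    """One IVC step: compute z_{i+1} = F(z_i), fold (z_i, pi_i) into the accumulator,
    and prove the circuit R: "z_{i+1} = F(z_i) and the accumulation verifier accepts
    ((z_i, pi_i), acc_i, acc_{i+1})"."""
    z_next = F(state.z)
    acc_next = acc_prove(state.acc, (state.z, state.proof))
    # R contains the accumulation verifier, not the SNARK verifier or the decider;
    # this is what keeps the statement from growing with the number of steps.
    proof_next = snark_prove((z_next, acc_next), (state.z, state.proof, state.acc))
    return IVCState(z=z_next, proof=proof_next, acc=acc_next)

def ivc_verify(snark_verify: Callable, decider: Callable, final: IVCState) -> bool:
    """IVC verification: one SNARK check on pi_T plus one decider check on acc_T."""
    return snark_verify((final.z, final.acc), final.proof) and decider(final.acc)
```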

Why does this achieve IVC? Throughout the computation we maintain the invariant that if \(\mathsf {acc}_i\) is a valid accumulator (according to the decider) and \(\pi _i\) is a valid proof, then the computation is correct up to the i-th step. Clearly if this holds at time T then the IVC verifier successfully checks the entire computation. Observe that if we were able to prove that “\(z_{i+1} = F(z_i)\), \(\pi _i\) is a valid proof, and \(\mathsf {acc}_i\) is a valid accumulator”, by applying the invariant we would be able to conclude that the computation is correct up to step \(i+1\). Unfortunately we are not able to prove this directly, for two reasons: (i) proving that \(\pi _i\) is a valid proof requires proving a statement about the argument verifier, which may not be sublinear, and (ii) proving that \(\mathsf {acc}_i\) is a valid accumulator requires proving a statement about the decider, which may not be sublinear.

Instead of proving this claim directly, we “defer” it by having the prover accumulate \((z_i,\pi _i)\) into \(\mathsf {acc}_i\) to obtain a new accumulator \(\mathsf {acc}_{i+1}\). The soundness property of the accumulation scheme ensures that if \(\mathsf {acc}_{i+1}\) is valid and the accumulation verifier accepts \(((z_{i},\pi _{i}),\mathsf {acc}_{i},\mathsf {acc}_{i+1})\), then \(\pi _i\) is a valid proof and \(\mathsf {acc}_i\) is a valid accumulator. Thus all that remains to maintain the invariant is for the prover to prove that the accumulation verifier accepts; this is possible provided that the accumulation verifier is sublinear.

From Sketch to Proof. In the full version of our paper, we give the formal details of our construction and a proof of correctness. In particular, we show how to construct PCD, a more general primitive than IVC. In the PCD setting, rather than each computation step having a single input \(z_i\), it receives \(m\) inputs from different nodes. Proving correctness hence requires proving that all of these inputs were computed correctly. For our construction, this entails checking \(m\) proofs and \(m\) accumulators. To do this, we extend the definition of an accumulation scheme to allow accumulating multiple instance-proof pairs and multiple “old” accumulators.

We now informally discuss the properties of our PCD construction.

  • Efficiency requirements. Observe that the statement to be proved includes only the accumulation verifier, and so the only efficiency requirement for obtaining PCD is that this algorithm run in time sublinear in the size of the circuit R. This implies, in particular, that an accumulator must be of size sublinear in the size of R, and hence must not grow with each accumulation step. The SNARK verifier and the decider algorithm need only be efficient in the usual sense (i.e., polynomial-time).

  • Soundness. We prove that the PCD scheme is sound provided that the SNARK is knowledge sound (i.e., is an adaptively-secure argument of knowledge) and the accumulation scheme is sound (see Sect. 2.2 for more on what this means). We stress that in both cases security should be in the standard (CRS) model, without any random oracles (as in prior PCD constructions).

  • Zero knowledge. We prove that the PCD scheme is zero knowledge, if the underlying SNARK and accumulation scheme are both zero knowledge (for this part we also formulate a suitable notion of zero knowledge for accumulation schemes as discussed shortly in Sect. 2.2).

  • Post-quantum security. We also prove that if both the SNARK and accumulation scheme are post-quantum secure, then so is the resulting PCD scheme. Here by post-quantum secure we mean that the relevant security properties continue to hold even against polynomial-size quantum circuits, as opposed to just polynomial-size classical circuits.

2.2 Accumulation Schemes

A significant contribution of this work is formulating a general notion of an accumulation scheme. An accumulation scheme for a non-interactive argument as described above is a particular instance of this definition; in subsequent sections we will apply the definition in other settings.

We first give an informal definition that captures the key features of an accumulation scheme. For clarity this is stated for the (minimal) case of a single predicate input \(\mathsf {q}\) and a single “old” accumulator \(\mathsf {acc}\); we later extend this in the natural way to n predicate inputs and m “old” accumulators.

Definition 1

(informal). An accumulation scheme for a predicate \(\varPhi :X \rightarrow \{0,1\}\) consists of a triple of algorithms \((\mathrm {P},\mathrm {V},\mathrm {D})\), known as the prover, verifier, and decider, that satisfies the following properties.

  • Completeness: For all accumulators \(\mathsf {acc}\) and predicate inputs \(\mathsf {q}\in X\), if \(\mathrm {D}(\mathsf {acc}) = 1\) and \(\varPhi (\mathsf {q}) = 1\), then for \(\mathsf {acc}^{\star }\leftarrow \mathrm {P}(\mathsf {acc},\mathsf {q})\) it holds that \(\mathrm {V}(\mathsf {acc},\mathsf {q},\mathsf {acc}^{\star }) = 1\) and \(\mathrm {D}(\mathsf {acc}^{\star }) = 1\).

  • Soundness: For all efficiently-generated accumulators \(\mathsf {acc},\mathsf {acc}^{\star }\) and predicate inputs \(\mathsf {q}\in X\), if \(\mathrm {D}(\mathsf {acc}^{\star }) = 1\) and \(\mathrm {V}(\mathsf {acc},\mathsf {q},\mathsf {acc}^{\star }) = 1\) then, with all but negligible probability, \(\varPhi (\mathsf {q}) = 1\) and \(\mathrm {D}(\mathsf {acc}) = 1\).

An accumulation scheme for a SNARK is an accumulation scheme for the predicate induced by the argument verifier; in this case the predicate input \(\mathsf {q}\) consists of an instance-proof pair \((\mathbbm {x},\pi )\). Note that the completeness requirement does not place any restriction on how the previous accumulator \(\mathsf {acc}\) is generated; we require that completeness holds for any \(\mathsf {acc}\) the decider \(\mathrm {D}\) determines to be valid, and any \(\mathsf {q}\) for which the predicate \(\varPhi \) holds. This is needed to obtain a similarly strong notion of completeness for PCD, required for applications where accumulation is done by multiple parties that do not trust one another.

Zero Knowledge. For our PCD application, the notion of zero knowledge for an accumulation scheme that we use is the following: one can sample a “fake” accumulator that is indistinguishable from a real accumulator \(\mathsf {acc}^{\star }\), without knowing anything about the old accumulator \(\mathsf {acc}\) and predicate input \(\mathsf {q}\) that were accumulated in \(\mathsf {acc}^{\star }\). The existence of the accumulation verifier \(\mathrm {V}\) complicates matters here: if the adversary knows \(\mathsf {acc}\) and \(\mathsf {q}\), then it is easy to distinguish a real accumulator from a fake one using \(\mathrm {V}\). We resolve this issue by modifying Definition 1 to have the accumulation prover \(\mathrm {P}\) produce a verification proof \(\pi _{\mathrm {V}}\) in addition to the new accumulator \(\mathsf {acc}^{\star }\). Then \(\mathrm {V}\) uses \(\pi _{\mathrm {V}}\) in verifying the accumulator, but \(\pi _{\mathrm {V}}\) is not required for subsequent accumulation. In our application, the simulator then does not have to simulate \(\pi _{\mathrm {V}}\). This avoids the problem described: even if the adversary knows \(\mathsf {acc}\) and \(\mathsf {q}\), unless \(\pi _{\mathrm {V}}\) is correct, \(\mathrm {V}\) can simply reject, as it would for a “fake” accumulator. Our informal definition is as follows.

Definition 2

(informal). An accumulation scheme for \(\varPhi \) is zero knowledge if there exists an efficient simulator \(\mathrm {S}\) such that for all accumulators \(\mathsf {acc}\) and inputs \(\mathsf {q}\in X\) such that \(\mathrm {D}(\mathsf {acc})=1\) and \(\varPhi (\mathsf {q}) = 1\), the distribution of \(\mathsf {acc}^{\star }\) when \((\mathsf {acc}^{\star },\pi _{\mathrm {V}}) \leftarrow \mathrm {P}(\mathsf {acc},\mathsf {q})\) is computationally indistinguishable from \(\mathsf {acc}^{\star }\leftarrow \mathrm {S}(1^{\lambda })\).

Predicate Specification. The above informal definitions omit many important details; we now highlight some of these. Suppose that, as required for IVC/PCD, we have some fixed circuit \(R\) for which we want to accumulate pairs \((\mathbbm {x}_i,\pi _i)\), where \(\pi _i\) is a SNARK proof that there exists \(\mathbbm {w}_i\) such that \(R(\mathbbm {x}_i,\mathbbm {w}_i) = 1\). In this case the predicate corresponding to the verifier depends not only on the pair \((\mathbbm {x}_i,\pi _i)\), but also on the circuit \(R\), as well as the public parameters of the argument scheme \(\mathsf {pp}\) and (often) a random oracle \(\rho \).

Moreover, each of these inputs has different security and efficiency considerations. The security of the SNARK (and the accumulation scheme) can only be guaranteed with high probability over public parameters drawn by the generator algorithm of the SNARK, and over the random oracle. The circuit \(R\) may be chosen adversarially, but cannot be part of the input \(\mathsf {q}\) because it is too large; it must be fixed at the beginning.

These considerations lead us to define an accumulation scheme with respect to both a predicate \(\varPhi :\mathcal {U}(*) \times (\{0,1\}^{*})^{3} \rightarrow \{0,1\}\) and a predicate-specification algorithm \(\mathcal {H}\). We then adapt Definition 1 to hold for the predicate \(\varPhi (\rho ,\mathsf {pp}_{\varPhi },\mathsf {i}_{\varPhi },\cdot )\) where \(\rho \) is a random oracle, \(\mathsf {pp}_{\varPhi }\) is output by \(\mathcal {H}\), and \(\mathsf {i}_{\varPhi }\) is chosen adversarially. In our SNARK example, \(\mathcal {H}\) is equal to the SNARK generator, \(\mathsf {i}_{\varPhi }\) is the circuit \(R\), and \(\varPhi (\rho ,\mathsf {pp},R,(\mathbbm {x},\pi )) = \mathcal {V}^{\rho }(\mathsf {pp},R,\mathbbm {x},\pi )\).

Remark 1

(helped verification). We compare accumulation schemes for SNARKs with the notion of “helped verification” [MBKM19]. In a SNARK with helped verification, an untrusted party known as the helper can, given n proofs, produce an auxiliary proof that enables checking the n proofs at lower cost than that of checking each proof individually. This batching capability can be viewed as a special case of accumulation, as it applies to n “fresh” proofs only; there is no notion of batching “old” accumulators. It is unclear whether the weaker notion of helped verification alone suffices to construct IVC/PCD schemes.

2.3 Constructing Arguments with Accumulation Schemes

A key ingredient in our construction of PCD is a SNARK that has an accumulation scheme (see Sect. 2.1). Below we summarize the ideas behind Theorem 2, by explaining how to construct accumulation schemes for SNARKs whose verifier is succinct relative to an oracle predicate \(\varPhi _\circ \) that itself has an accumulation scheme.

Predicate-Efficient SNARKs. We call a SNARK \(\mathsf {ARG}\) predicate-efficient with respect to a predicate \(\varPhi _\circ \) if its verifier \(\mathcal {V}\) operates as follows: (i) run a fast “inner” verifier \(\mathcal {V}_{\mathsf {pe}}\) to produce a bit b and query set Q; (ii) accept iff \(b = 1\) and for all \(\mathsf {q}\in Q\), \(\varPhi _\circ (\mathsf {q}) = 1\). In essence, \(\mathcal {V}\) can be viewed as a circuit with “oracle gates” for \(\varPhi _\circ \). The aim is for \(\mathcal {V}_{\mathsf {pe}}\) to be significantly more efficient than \(\mathcal {V}\); that is, the queries to \(\varPhi _\circ \) capture the “expensive” part of the computation of \(\mathcal {V}\).
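In code, the verifier of a predicate-efficient SNARK is a thin wrapper around the inner verifier; the following sketch (hypothetical names) makes the decomposition explicit.

```python
from typing import Callable, Iterable, Tuple

def verify(inner_verifier: Callable[..., Tuple[bool, Iterable]],
           phi: Callable[[object], bool],
           instance, proof) -> bool:
    """Full verifier V of a predicate-efficient SNARK: run the fast inner verifier
    V_pe to get a bit b and a query set Q, then accept iff b = 1 and every query
    satisfies the (expensive) oracle predicate Phi_o."""
    b, queries = inner_verifier(instance, proof)
    return b and all(phi(q) for q in queries)
```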

As noted in Sect. 1.1, one can view recent SNARK constructions [MBKM19, GWC19, CHM+20] as being predicate-efficient with respect to a “polynomial commitment” predicate. We discuss how to construct accumulation schemes for these predicates below in Sect. 2.4.

Accumulation Scheme For Predicate-Efficient SNARKs. Let \(\mathsf {ARG}\) be a SNARK that is predicate-efficient with respect to a predicate \(\varPhi _\circ \), and let \(\mathsf {AS}_{\circ }\) be an accumulation scheme for \(\varPhi _\circ \). To check n proofs, instead of directly invoking the SNARK verifier \(\mathcal {V}\), we can first run \(\mathcal {V}_{\mathsf {pe}}\) n times to generate n query sets for \(\varPhi _\circ \), and then, instead of invoking \(\varPhi _\circ \) on each of these sets, we can accumulate these queries using \(\mathsf {AS}_{\circ }\). Below we sketch the construction of an accumulation scheme \(\mathsf {AS}_{\scriptscriptstyle \mathsf {ARG}}\) for \(\mathsf {ARG}\) based on this idea.

To accumulate n instance-proof pairs \([(\mathbbm {x}_i, \pi _i)]_{i=1}^{n}\) starting from an old accumulator \(\mathsf {acc}\), the accumulation prover \(\mathsf {AS}_{\scriptscriptstyle \mathsf {ARG}}.\mathrm {P}\) first invokes the inner verifier \(\mathcal {V}_{\mathsf {pe}}\) on each \((\mathbbm {x}_i,\pi _i)\) to generate a query set \(Q_i\) for \(\varPhi _\circ \), accumulates their union \(Q= \cup _{i=1}^{n} Q_i\) into \(\mathsf {acc}\) using \(\mathsf {AS}_{\circ }.\mathrm {P}\), and finally outputs the resulting accumulator \(\mathsf {acc}^{\star }\). To check that \(\mathsf {acc}^{\star }\) indeed accumulates \([(\mathbbm {x}_i, \pi _i)]_{i=1}^{n}\) into \(\mathsf {acc}\), the accumulation verifier \(\mathsf {AS}_{\scriptscriptstyle \mathsf {ARG}}.\mathrm {V}\) first checks, for each i, whether the inner verifier \(\mathcal {V}_{\mathsf {pe}}\) accepts \((\mathbbm {x}_i,\pi _i)\), and then invokes \(\mathsf {AS}_{\circ }.\mathrm {V}\) to check whether \(\mathsf {acc}^{\star }\) correctly accumulates the query set \(Q= \cup _{i=1}^{n} Q_i\). Finally, to decide whether \(\mathsf {acc}^{\star }\) is a valid accumulator, the accumulation scheme decider \(\mathsf {AS}_{\scriptscriptstyle \mathsf {ARG}}.\mathrm {D}\) simply invokes \(\mathsf {AS}_{\circ }.\mathrm {D}\).
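A minimal sketch of this transformation is given below, with a hypothetical `inner_verifier` standing in for \(\mathcal {V}_{\mathsf {pe}}\) and a black-box accumulation scheme for \(\varPhi _\circ \); public parameters, oracles, and zero knowledge are omitted.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class AccumulationScheme:               # same interface as the sketch in Sect. 1.1
    prover: Callable                    # P(acc, inputs)          -> new accumulator
    verifier: Callable                  # V(acc, inputs, new_acc) -> bool
    decider: Callable                   # D(acc)                  -> bool

def accumulation_for_snark(inner_verifier: Callable,
                           as_phi: AccumulationScheme) -> AccumulationScheme:
    """AS_ARG for a SNARK that is predicate-efficient w.r.t. Phi_o, built from a
    black-box accumulation scheme AS_o for Phi_o."""

    def prover(acc, pairs: List[Tuple]):            # pairs = [(x_i, pi_i)]
        queries = []
        for x, pi in pairs:
            _, Q = inner_verifier(x, pi)            # queries that V would send to Phi_o
            queries.extend(Q)
        return as_phi.prover(acc, queries)          # accumulate their union into acc

    def verifier(acc, pairs, new_acc) -> bool:
        queries = []
        for x, pi in pairs:
            b, Q = inner_verifier(x, pi)            # only the *fast* inner verifier runs
            if not b:
                return False
            queries.extend(Q)
        return as_phi.verifier(acc, queries, new_acc)

    # Deciding an accumulator is exactly deciding in the underlying scheme.
    return AccumulationScheme(prover, verifier, as_phi.decider)
```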

From Sketch to Proof. The foregoing sketch omits details required to construct a scheme that satisfies the “full” definition of accumulation schemes as stated in the full version of our paper. For instance, as noted in Sect. 2.2, the predicate \(\varPhi _\circ \) may be an oracle predicate, and could depend on the public parameters of the SNARK \(\mathsf {ARG}\). We handle this by requiring that the accumulation scheme for \(\varPhi _\circ \) uses the SNARK generator \(\mathcal {G}\) as its predicate specification algorithm. We also show that zero knowledge and post-quantum security are preserved. See the full version of our paper for a formal treatment of these issues, along with security proofs.

From Predicate-Efficient SNARKs to PCD. In order to build an accumulation scheme \(\mathsf {AS}_{\scriptscriptstyle \mathsf {ARG}}\) that suffices for PCD, \(\mathsf {ARG}\) and \(\mathsf {AS}_{\circ }\) must satisfy certain efficiency properties. In particular, when verifying satisfiability for a circuit of size N, the running time of \(\mathsf {AS}_{\scriptscriptstyle \mathsf {ARG}}.\mathrm {V}\) must be sublinear in N, which means in turn that the running times of \(\mathcal {V}_{\mathsf {pe}}\) and \(\mathsf {AS}_{\circ }.\mathrm {V}\), as well as the size of the query set \(Q\), must be sublinear in N. Crucially, however, \(\mathsf {AS}_{\circ }.\mathrm {D}\) need only run in time polynomial in N.

2.4 Accumulation Schemes for Polynomial Commitments

As noted in Sect. 2.3, several SNARK constructions (e.g., [MBKM19, GWC19, CHM+20]) are predicate-efficient with respect to an underlying polynomial commitment, which means that constructing an accumulation scheme for the latter leads (via Theorem 2) to an accumulation scheme for the whole SNARK.

Informally, a polynomial commitment scheme (PC scheme) is a cryptographic primitive that enables one to produce a commitment \(C\) to a polynomial \(p\), and then to prove that this committed polynomial evaluates to a claimed value \(v\) at a desired point \(z\). An accumulation scheme for a PC scheme thus accumulates claims of the form “\(C\) commits to \(p\) such that \(p(z) = v\)” for arbitrary polynomials \(p\) and evaluation points \(z\).
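For reference, the following sketch spells out the PC-scheme interface and the shape of the evaluation claims that get accumulated; the names are ours, and degree bounds and hiding are omitted.

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence

@dataclass
class PolynomialCommitmentScheme:
    """Interface of a PC scheme (hypothetical names)."""
    commit: Callable[[Sequence], Any]             # coefficients of p -> commitment C
    open: Callable[[Sequence, Any], tuple]        # (p, z)            -> (v, evaluation proof pi)
    check: Callable[[Any, Any, Any, Any], bool]   # (C, z, v, pi)     -> accept?

@dataclass
class EvaluationClaim:
    """What an accumulation scheme for a PC scheme accumulates:
    "C commits to some p with p(z) = v", certified by the evaluation proof pi."""
    C: Any
    z: Any
    v: Any
    pi: Any
```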

In this section, we explain the ideas behind Theorem 3, by sketching how to construct (zero knowledge) accumulation schemes for two popular (hiding) polynomial commitment schemes.

  • In Sect. 2.4.1, we sketch our accumulation scheme for \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\), a polynomial commitment scheme derived from [BCC+16, BBB+18, WTS+18] that is based on the hardness of discrete logarithms.

  • In Sect. 2.4.2, we sketch our accumulation scheme for \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\), a polynomial commitment scheme based on knowledge assumptions over bilinear groups [KZG10, CHM+20].

In each case, the running time of the accumulation verifier will be sublinear in the degree of the polynomial, and the accumulator itself will not grow with the number of accumulation steps. This allows the schemes to be used, in conjunction with a suitable predicate-efficient SNARK, to construct PCD.

We remark that each of our accumulation schemes is proved secure in the random oracle model by invoking a useful lemma about “zero-finding games” for committed polynomials. Security also requires that the random oracle used for an accumulation scheme for a PC scheme is domain-separated from the random oracle used by the PC scheme itself. See the full version for details.

2.4.1 Accumulation scheme for \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\)

We sketch our accumulation scheme for \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\). For univariate polynomials of degree less than \(d\), \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\) achieves evaluation proofs of size \(O(\lambda \log d)\) in the random oracle model, and assuming the hardness of the discrete logarithm problem in a prime order group \(\mathbb {G}\). In particular, there are no secret parameters (so-called “toxic waste”). However, \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\) has poor verification complexity: checking an evaluation proof requires \(\varOmega (d)\) scalar multiplications in \(\mathbb {G}\). Bowe, Grigg, and Hopwood [BGH19] suggested a way to amortize this cost across a batch of n proofs. Below we show that their idea leads to an accumulation scheme for \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\) with an accumulation verifier that uses only \(O(n\log d)\) scalar multiplications instead of the naive \(\varTheta (n \cdot d)\), and with an accumulator of size \(O(\log d)\) elements in \(\mathbb {G}\).

Summary of \({\mathbf {\mathsf{{PC}}}}_{\scriptscriptstyle {\mathbf {\mathsf{{DL}}}}}\). The committer and receiver both sample (consistently via the random oracle) a list of group elements \(\{G_0, G_1, \dotsc , G_d\} \in \mathbb {G}^{d+ 1}\) in a group \(\mathbb {G}\) of prime order \(q\) (written additively). A commitment to a polynomial \(p(X) = \sum _{i=0}^da_i X^i \in \mathbb {F}_{q}^{\le d}[X]\) is then given by \(C:= \sum _{i=0}^da_i G_i\). To prove that the committed polynomial \(p\) evaluates to \(v\) at a given point \(z\in \mathbb {F}_{q}\), it suffices to prove that the triple \((C,z,v)\) satisfies the following NP statement:

$$\begin{aligned} \textstyle \exists \, a_0,\ldots ,a_{d} \in \mathbb {F}_{q}\text { s.t. } v= \sum _{i=0}^{d}a_iz^i \,\text { and }\, C= \sum _{i=0}^{d}a_i G_i . \end{aligned}$$

This is a special case of an inner product argument (IPA), as defined in [BCC+16], which proves the inner product of two committed vectors. The receiver simply verifies this inner product argument to check the evaluation. The fact that the vector \((1,z,\ldots ,z^{d})\) is known to the verifier and has a certain structure is exploited in the accumulation scheme that we describe below.
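The toy sketch below illustrates the commitment and the NP statement above over a small group (the order-\(q\) subgroup of squares modulo \(p = 2q+1\)); it is insecure and purely illustrative, and since the IPA itself is elided, the statement is checked directly against the witness.

```python
import random

# Toy discrete-log group: the order-q subgroup of squares modulo p = 2q + 1
# (here q = 11, p = 23). Insecure and purely illustrative; a real instantiation
# of PC_DL uses an elliptic-curve group of large prime order.
q, p = 11, 23
d = 3
Gs = [pow(random.randrange(2, p), 2, p) for _ in range(d + 1)]   # public G_0, ..., G_d

def commit(coeffs):
    """Pedersen-style commitment C = sum_i a_i * G_i (the group written multiplicatively here)."""
    C = 1
    for a, G in zip(coeffs, Gs):
        C = C * pow(G, a % q, p) % p
    return C

def check_opening(C, z, v, coeffs):
    """The NP statement above: v = sum_i a_i z^i and C = sum_i a_i G_i.
    The IPA proves this succinctly; here we simply check it with the witness."""
    return (v % q == sum(a * pow(z, i, q) for i, a in enumerate(coeffs)) % q
            and C == commit(coeffs))

coeffs = [3, 1, 4, 1]                     # p(X) = 3 + X + 4X^2 + X^3 over F_q
z = 5
v = sum(a * pow(z, i, q) for i, a in enumerate(coeffs)) % q
assert check_opening(commit(coeffs), z, v, coeffs)
```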

Accumulation Scheme for the IPA. Our accumulation scheme relies on a special structure of the IPA verifier: it generates \(O(\log d)\) challenges using the random oracle, then performs cheap checks requiring \(O(\log d)\) field and group operations, and finally performs an expensive check requiring \(\varOmega (d)\) scalar multiplications. This latter check asserts consistency between the challenges and a group element \(U\) contained in the proof. Hence, the IPA verifier is succinct barring the expensive check, and so constructing an accumulation scheme for the IPA reduces to the task of constructing an accumulation scheme for the expensive check involving \(U\).

To do this, we rely on an idea of Bowe, Grigg, and Hopwood [BGH19], which itself builds on an observation in [BBB+18]. Namely, letting \((\xi _1, \dotsc , \xi _{\log _2d})\) be the protocol’s challenges, \(U\) can be viewed as a commitment to the polynomial \(h(X) := \prod _{i=0}^{\log _2(d) - 1}(1 + \xi _{\log _2(d) - i} X^{2^{i}})\in \mathbb {F}_{q}^{\le d}[X]\). This polynomial has the special property that it can be evaluated at any point in just \(O(\log d)\) field operations (exponentially smaller than its degree \(d\)). This allows transforming the expensive check on \(U\) into a check that is amenable to batching: instead of directly checking that \(U\) is a commitment to \(h\), one can instead check that the polynomial committed inside \(U\) agrees with \(h\) at a challenge point \(z\) sampled via the random oracle.
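The following sketch demonstrates this succinct evaluation: `evaluate_h` uses \(O(\log d)\) field operations, and is checked against the naive evaluation of the expanded coefficient vector (the modulus and challenge values are arbitrary stand-ins).

```python
def evaluate_h(challenges, x, q):
    """Evaluate h(X) = prod_{i=0}^{log2(d)-1} (1 + xi_{log2(d)-i} * X^(2^i)) at x over F_q,
    using O(log d) field operations by repeatedly squaring x."""
    result, power = 1, x % q
    for xi in reversed(challenges):       # xi_{log d} pairs with X, xi_{log d - 1} with X^2, ...
        result = result * (1 + xi * power) % q
        power = power * power % q
    return result

def poly_mul(a, b, q):
    """Schoolbook product of coefficient lists over F_q (for the naive check only)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

q = 101                                   # toy modulus
challenges = [7, 3, 9]                    # (xi_1, xi_2, xi_3), so d = 2^3 = 8
x = 5

# Naive check: expand h into its full coefficient vector (degree d - 1), then evaluate term by term.
coeffs = [1]
for i, xi in enumerate(reversed(challenges)):
    coeffs = poly_mul(coeffs, [1] + [0] * (2 ** i - 1) + [xi], q)   # factor (1 + xi * X^(2^i))
naive = sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
assert evaluate_h(challenges, x, q) == naive
```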

We leverage this idea as follows. When accumulating evaluation claims about multiple polynomials \(p_1, \dotsc , p_n\), applying the foregoing transformation results in n checks of the form “check that the polynomial contained in \(U_i\) evaluates to \(h_i(z)\) at the point \(z\)”. Because these are all claims for the correct evaluation of the polynomials \(h_i\) at the same point \(z\), we can accumulate them via standard homomorphic techniques. We now summarize how we apply this idea to construct our accumulation scheme \(\mathsf {AS}= (\mathrm {P},\mathrm {V},\mathrm {D})\) for \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\).

Accumulators in our accumulation scheme have the same form as the instances to be accumulated: they are tuples of the form \((C, z, v, \pi )\) where \(\pi \) is an evaluation proof for the claim “\(p(z)=v\)” and \(p\) is the polynomial committed in \(C\). For simplicity, below we consider the case of accumulating one old accumulator \(\mathsf {acc}= (C_1, z_1, v_1, \pi _1)\) and one instance \((C_2, z_2, v_2, \pi _2)\) into a new accumulator \(\mathsf {acc}^{\star }= (C, z, v, \pi )\).

Accumulation prover \(\mathrm {P}\): compute the new accumulator \(\mathsf {acc}^{\star }= (C, z, v, \pi )\) from the old accumulator \(\mathsf {acc}= (C_1, z_1, v_1, \pi _1)\) and the instance \((C_2, z_2, v_2, \pi _2)\) as follows.

  • Compute \(U_{1},U_{2}\) from \(\pi _1,\pi _2\) respectively. As described above, these elements can be viewed as commitments to polynomials \(h_1,h_2\) defined by the challenges derived from \(\pi _1,\pi _2\).

  • Use the random oracle \(\rho \) to compute the random challenge \(\alpha := \rho ([(h_1, U_1), (h_2, U_{2})])\).

  • Compute \(C\ {:=} \ U_{1} + \alpha U_{2}\), which is a polynomial commitment to \(p(X) := h_1(X) + \alpha h_2(X)\).

  • Compute the challenge point \(z:= \rho (C, p)\), where \(p\) is uniquely represented via the tuple \(([h_1, h_2], \alpha )\).

  • Compute the evaluation \(v:= p(z) = h_1(z) + \alpha h_2(z)\).

  • Construct an evaluation proof \(\pi \) for the claim “\(p(z)=v\)”. (This step is the only expensive one.)

  • Output the new accumulator \(\mathsf {acc}^{\star }:= (C, z, v, \pi )\).

Accumulation verifier \(\mathrm {V}\): to check that the new accumulator \(\mathsf {acc}^{\star }= (C, z, v, \pi )\) was correctly generated from the old accumulator \(\mathsf {acc}= (C_1, z_1, v_1, \pi _1)\) and the instance \((C_2, z_2, v_2, \pi _2)\), first compute the challenges \(\alpha \) and \(z\) from the random oracle as above, and then check that (a) \((C_1, z_1, v_1, \pi _1)\) and \((C_2, z_2, v_2, \pi _2)\) pass the cheap checks of the IPA verifier, (b) \(C= U_{1} + \alpha U_{2}\), and (c) \(h_1(z) + \alpha h_2(z) = v\).

Decider \(\mathrm {D}\): on input the (final) accumulator \(\mathsf {acc}^{\star }= (C, z, v, \pi )\), check that \(\pi \) is a valid evaluation proof for the claim that the polynomial committed inside \(C\) evaluates to \(v\) at the point \(z\).
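The homomorphic step at the heart of this scheme is illustrated below, in the same toy group as the \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\) sketch above: combining \(U_1\) and \(U_2\) with the challenge \(\alpha \) yields a commitment to \(h_1 + \alpha h_2\), which is exactly what check (c) of the accumulation verifier relies on. The IPA proofs, the cheap IPA checks, and the random oracle are elided, and \(h_1,h_2\) are arbitrary stand-ins for the challenge polynomials.

```python
import random

# Same toy group as in the PC_DL sketch above (order-q squares modulo p = 2q + 1).
q, p = 11, 23
Gs = [pow(random.randrange(2, p), 2, p) for _ in range(4)]   # generators G_0, ..., G_3

def commit(coeffs):
    C = 1
    for a, G in zip(coeffs, Gs):
        C = C * pow(G, a % q, p) % p
    return C

def eval_poly(coeffs, x):
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

# Stand-ins for the challenge polynomials h_1, h_2 (succinctly evaluable in the real scheme).
h1 = [2, 5, 0, 7]
h2 = [1, 3, 3, 9]
U1, U2 = commit(h1), commit(h2)

alpha = 6                                  # challenge rho([(h_1, U_1), (h_2, U_2)])
C = U1 * pow(U2, alpha, p) % p             # C := U_1 + alpha * U_2

# Homomorphism: C commits to p(X) = h_1(X) + alpha * h_2(X).
combined = [(a + alpha * b) % q for a, b in zip(h1, h2)]
assert C == commit(combined)

z = 4                                      # challenge point rho(C, p)
v = eval_poly(combined, z)                 # the evaluation carried in acc*
# Accumulation verifier's check (c): h_1(z) + alpha * h_2(z) = v.
assert (eval_poly(h1, z) + alpha * eval_poly(h2, z)) % q == v
```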

This construction achieves the efficiency summarized in Theorem 3.

We additionally achieve zero knowledge accumulation for the hiding variant of \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\). Informally, the accumulation prover randomizes \(\mathsf {acc}^{\star }\) by including a new random polynomial \(h_0\) in the accumulation step. This ensures that the evaluation claim in \(\mathsf {acc}^{\star }\) is for a random polynomial, thus hiding all information about the original evaluation claims. To allow the accumulation verifier to check that this randomization was performed correctly, the prover includes \(h_0\) in an auxiliary proof \(\pi _{\mathrm {V}}\).

In the full version, we show how to extend the above accumulation scheme to accumulate any number of old accumulators and instances. Our security proof for the resulting accumulation scheme relies on the hardness of zero-finding games, and the security of \(\mathsf {PC}_{\scriptscriptstyle \mathsf {DL}}\).

2.4.2 Accumulation scheme for \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\)

We sketch our accumulation scheme \(\mathsf {AS}=(\mathrm {P},\mathrm {V},\mathrm {D})\) for \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\). Checking an evaluation proof in \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\) requires 1 pairing, and so checking n evaluation proofs requires n pairings. \(\mathsf {AS}\) improves upon this as follows: the accumulation verifier \(\mathrm {V}\) only performs O(n) scalar multiplications in \(\mathbb {G}_1\) in order to check the accumulation of n evaluation proofs, while the decider \(\mathrm {D}\) performs only a single pairing in order to check the resulting accumulator. This is much cheaper: it reduces the number of pairings from n to 1, and also defers this single pairing to the end of the accumulation (the decider). In particular, when instantiating the PCD construction outlined in Sect. 2.1 with a \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\)-based SNARK and our accumulation scheme for \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\), we can eliminate all pairings from the circuit being verified in the PCD construction.

Below we explain how standard techniques for batching pairings using random linear combinations [CHM+20] allow us to realize an accumulation scheme for \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\) with these desirable properties.

Summary of \({\mathbf {\mathsf{{PC}}}}_{\scriptscriptstyle {\mathbf {\mathsf{{AGM}}}}}\). The committer key \(\mathsf {ck}\) and receiver key \(\mathsf {rk}\) for a given maximum degree bound \(D\) are group elements from a bilinear group \((\mathbb {G}_1, \mathbb {G}_2, \mathbb {G}_T, q, G, H, e)\): \(\mathsf {ck}:= \{G, \beta G, \dotsc , \beta ^DG\} \in \mathbb {G}_1^{D+ 1}\) consists of group elements encoding powers of a random field element \(\beta \), while \(\mathsf {rk}:= (G, H, \beta H) \in \mathbb {G}_1\times \mathbb {G}_2^{2}\).

A commitment to a polynomial \(p\in \mathbb {F}_{q}^{\le D}[X]\) is the group element \(C:= p(\beta )G\in \mathbb {G}_1\). To prove that \(p\) evaluates to \(v\) at a given point \(z\in \mathbb {F}_{q}\), the sender computes a “witness polynomial” \(w(X) := (p(X) - v)/(X- z)\), and outputs the evaluation proof \(\pi := w(\beta )G\in \mathbb {G}_1\). The receiver can check this proof by checking the pairing equation \(e(C- vG, H) = e(\pi , \beta H- zH)\). This pairing equation is the focus of our accumulation scheme below. (This summary omits details about degree enforcement and about hiding.)
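To illustrate the algebra of this check, the sketch below uses an intentionally insecure “exponent model”: a group element \(xG\) is represented by the scalar x, so the pairing \(e(aG, bH)\) becomes the product ab. All values are illustrative, and degree enforcement and hiding are omitted.

```python
q = 97                                   # toy prime order (illustration only)
beta = 42                                # the secret behind ck = (G, beta*G, ..., beta^D*G)

# Insecure exponent model: a G1/G2 element x*G is represented by the scalar x,
# and the pairing e(a*G, b*H) is modeled as a*b mod q. This exposes beta and all
# discrete logs; it only mirrors the algebra of the PC_AGM check.

def poly_eval(coeffs, x):
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

def commit(p):
    return poly_eval(p, beta)            # C = p(beta) * G

def open_at(p, z):
    """v = p(z) and proof pi = w(beta)*G, where w(X) = (p(X) - v)/(X - z).
    (The quotient of p(X) - v by (X - z) equals that of p(X), so v drops out.)"""
    v = poly_eval(p, z)
    w, rem = [0] * (len(p) - 1), 0
    for i in range(len(p) - 1, 0, -1):   # synthetic division by (X - z)
        w[i - 1] = (p[i] + rem) % q
        rem = w[i - 1] * z % q
    return v, poly_eval(w, beta)

def check(C, z, v, pi):
    """Pairing check e(C - v*G, H) = e(pi, beta*H - z*H) in the exponent model."""
    return (C - v) % q == pi * (beta - z) % q

p = [5, 0, 3, 1]                         # p(X) = 5 + 3*X^2 + X^3 over F_q
z = 7
v, pi = open_at(p, z)
assert check(commit(p), z, v, pi)
```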

Accumulation Scheme. We construct an accumulation scheme \(\mathsf {AS}=(\mathrm {P},\mathrm {V},\mathrm {D})\) for \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\) by relying on standard techniques for batching pairing equations. Suppose that we wish to simultaneously check the validity of n instances \({[(C_i, z_i, v_i, \pi _i)]}_{i=1}^n\). First, rewrite the pairing check for the i-th instance as follows:

$$\begin{aligned} e(C_i - v_iG, H) = e(\pi _i, \beta H- z_iH) \, \iff \, e(C_i - v_iG+ z_i\pi _i, H) = e(\pi _i, \beta H) . \end{aligned}$$
(1)

After the rewrite, the \(\mathbb {G}_2\) inputs to both pairings do not depend on the claim being checked. This allows batching the pairing checks by taking a random linear combination with respect to a random challenge \(r:= \rho ({[C_i, z_i, v_i, \pi _i]}_{i=1}^n)\) computed from the random oracle, resulting in the following combined equation:

$$\begin{aligned} \textstyle e(\sum _{i=1}^{n} r^i(C_i - v_iG+ z_i\pi _i), H) = e(\sum _{i=1}^{n} r^i\pi _i, \beta H) . \end{aligned}$$
(2)

We now have a pairing equation involving an “accumulated commitment” \(C^\star := \sum _{i=1}^{n} r^i(C_i - v_iG+ z_i\pi _i)\) and an “accumulated proof” \(\pi ^\star := \sum _{i=1}^{n} r^i \pi _i\). This observation leads to the accumulation scheme below.

An accumulator in \(\mathsf {AS}\) consists of a commitment-proof pair \((C^\star ,\pi ^\star )\), which the decider \(\mathrm {D}\) validates by checking that \(e(C^\star , H) = e(\pi ^\star , \beta H)\). Moreover, observe that by Eq. (1), checking the validity of a claimed evaluation \((C,z,v,\pi )\) within \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\) corresponds to checking that the “accumulator” \((C- vG+ z\pi , \pi )\) is accepted by the decider \(\mathrm {D}\). Thus we can restrict our discussion to accumulating accumulators.

The accumulation prover \(\mathrm {P}\), on input a list of old accumulators \([\mathsf {acc}_i]_{i=1}^{n}=[(C^\star _i, \pi ^\star _i)]_{i=1}^{n}\), computes a random challenge \(r:= \rho ([\mathsf {acc}_i]_{i=1}^{n})\), constructs \(C^\star := \sum _{i=1}^{n} r^{i} C^\star _{i}\) and \(\pi ^\star := \sum _{i=1}^{n} r^{i} \pi ^\star _{i}\), and outputs the new accumulator \(\mathsf {acc}^{\star }:= (C^\star , \pi ^\star ) \in \mathbb {G}_1^2\). To check that \(\mathsf {acc}^{\star }\) accumulates \([\mathsf {acc}_i]_{i=1}^{n}\), the accumulation verifier \(\mathrm {V}\) simply invokes \(\mathrm {P}\) and checks that its output matches the claimed new accumulator \(\mathsf {acc}^{\star }\).
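The sketch below carries out this batching in the same exponent model as above: each claim is rewritten into an accumulator via Eq. (1), the accumulation prover takes a random linear combination, and the decider performs the single check corresponding to Eq. (2).

```python
q, beta = 97, 42                          # same insecure exponent model as above

def poly_eval(c, x):
    return sum(a * pow(x, i, q) for i, a in enumerate(c)) % q

def claim_to_accumulator(C, z, v, pi):
    """Eq. (1): checking the claim (C, z, v, pi) is checking that the pair
    (C - v*G + z*pi, pi) passes the decider's pairing check."""
    return ((C - v + z * pi) % q, pi % q)

def accumulate(accs, r):
    """Accumulation prover: C* = sum_i r^i * C*_i and pi* = sum_i r^i * pi*_i over
    the old accumulators, with r derived from the random oracle (fixed stand-in here)."""
    C_star = sum(pow(r, i + 1, q) * C for i, (C, _) in enumerate(accs)) % q
    pi_star = sum(pow(r, i + 1, q) * pi for i, (_, pi) in enumerate(accs)) % q
    return (C_star, pi_star)

def decider(acc):
    """D: the single pairing check e(C*, H) = e(pi*, beta*H), i.e. C* = beta * pi*."""
    C_star, pi_star = acc
    return C_star == pi_star * beta % q

# Build three valid evaluation claims directly in the exponent model.
claims = []
for p, z in [([5, 0, 3, 1], 7), ([2, 2], 11), ([9, 0, 1], 4)]:
    v = poly_eval(p, z)
    w, rem = [0] * (len(p) - 1), 0        # w(X) = (p(X) - v)/(X - z) by synthetic division
    for i in range(len(p) - 1, 0, -1):
        w[i - 1] = (p[i] + rem) % q
        rem = w[i - 1] * z % q
    claims.append((poly_eval(p, beta), z, v, poly_eval(w, beta)))

accs = [claim_to_accumulator(*c) for c in claims]
assert all(decider(a) for a in accs)      # each claim is valid on its own
r = 13                                    # stand-in for rho([C_i, z_i, v_i, pi_i])
assert decider(accumulate(accs, r))       # one decider check batches all of them
```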

To achieve zero knowledge accumulation, the accumulation prover randomizes \(\mathsf {acc}^{\star }\) by including in it an extra “old” accumulator corresponding to a random polynomial, which statistically hides the accumulated claims. To allow the accumulation verifier to check that this randomization was performed correctly, the prover includes this old accumulator in an auxiliary proof \(\pi _{\mathrm {V}}\).

This construction achieves the efficiency summarized in Theorem 3.

In the full version of our paper, we show how to extend the above accumulation scheme to account for additional features of \(\mathsf {PC}_{\scriptscriptstyle \mathsf {AGM}}\) (degree enforcement and hiding). Our security proof for the resulting accumulation scheme relies on the hardness of zero-finding games (see full version).