00:07: Anna Rose: Welcome to Zero Knowledge, a podcast where we talk about the latest in zero knowledge research and the decentralized web. The show is hosted by me, Anna. 00:18: Fredrik: And me, Fredrik. 00:26: Anna Rose: This week we talk with Zac Williamson and Ariel Gabizon about the protocol Plonk. But before we start, we want to share a message from this week's sponsor, NuCypher. Here it goes. 00:42: In 1778, during the American Revolutionary War, Colonel John McIntosh and his 127 continental soldiers repudiated the British demand of surrender with the words: as to surrendering the fort, receive this laconic reply. Come and take it. Now fast forward to 2020, where stakers, developers, and node operators the world over cast off the shackles of Silicon Valley's great merchants of infrastructure with the cry: come and stake it. In case you missed it, this is the announcement for NuCypher's incentivized testnet coming to CoinList, which has the motto, come and stake it. If you're interested, be sure to pre-register now; the launch date, structure, and price details will be shared shortly. All winners will need to complete KYC/AML. Please go to nucypher.com to sign up. So thank you again, NuCypher, for sponsoring our show, and for letting me try out my hardcore history impression for this ad spot. So now, here's our episode on Plonk. 01:59: All right, so today we're sitting with Ariel Gabizon, previously from Zcash and Protocol Labs, and now at Aztec, and Zac Williamson, who is also at Aztec. We've had you both on the show before, and we've done full episodes where we did get a chance to dig into your backgrounds. We talked about all the work that Ariel's been doing, and we talked about Aztec. So yeah, welcome to the show. 02:26: Ariel Gabizon: Thanks. 02:27: Zac Williamson: Yeah, thank you. 02:29: Anna Rose: This episode, we will be covering a lot of ground and there are a number of podcasts that you might want to listen to before listening to this one. 
For example, you should probably listen to the episode on Sonic with Sean Bowe, definitely the episode with Ariel, and the episode with Zac. And also, in the ZK Study Club we have videos: Justin Drake does a video series on polynomial commitments, and that might also be worth checking out, because we talk about a lot of those concepts in this episode. Cool, so let's get started. 03:03: Fredrik: So last time we talked to you, Ariel, you worked at Protocol Labs, and now you're working with Aztec, I believe. How did that move happen? And what's been going on in your guys' lives the past couple of months? 03:19: Ariel Gabizon: Yeah, so I'm still going to advise Filecoin till launch. I think it's a great project and I want to see it succeed. I guess I was just thinking of death and life being short, and I felt that Plonk was sort of where my natural focus was. So I wanted to use the few years I have with, you know, still having a clear mind, to really get it off the ground. I think it has really good potential. I don't want to say it's sort of the final thing, but there's a lot of simplification there, and I think it has potential to really take off. It also has this slightly abrasive one-syllable name, like a lot of good Linux programs like Git and grep. So I'm hoping Plonk will be a similar Linux command line thing people use, with an abrasive one-syllable name. 04:30: Anna Rose: Okay, so first of all, that's extremely… I don't even know where to go with that. It's like it's very life stuff, Ariel. This is like life and death. 04:42: Fredrik: We are existential. 04:42: Anna Rose: I actually want to ask Zac. So last time Zac, we had you on, we talked about Aztec, sort of the core Aztec business. How does Plonk fit into this? Was Plonk... This is actually something I'm not clear on: was Plonk an Aztec initiative, or is this a separate research project that the two of you did? How did Plonk happen? 
05:07: Zac Williamson: Well, Plonk was... From my perspective, Plonk was always kind of a reaction to some of the limitations and shortfalls of our original Aztec protocol. We tailored that protocol to suit some of the weaknesses of Ethereum so that it was, for example, extremely efficient to verify proofs. But it meant that it was not as general purpose as we would have liked. And so I started trying to research how to use existing techniques out there to create a slightly more universal way of doing things. That kind of segued into meeting Ariel at a conference, and we started talking together about some common ideas and eventually wound up on what then turned into Plonk. I think it's not fair to say that I had Plonk in mind when we started developing it. I had an outcome in mind that I really wanted: an efficient, universal type of proving system. 06:02: Anna Rose: Which conference was it? Do you remember? 06:05: Ariel Gabizon: It was a Binary District Zero Knowledge Workshop in London. 06:14: Zac Williamson: We knew each other beforehand, but we'd only talked a couple of times. 06:18: Ariel Gabizon: I think more specifically, Zac asked me a bunch of questions when we met at this event, some of which I didn't know the answer to, but he asked me one easy question. One reason I knew the answer was because I worked with Eli on STARKs before they were called STARKs. And there was this very common technique there, I'll get a bit technical, of using a generator of a multiplicative subgroup to sort of move between neighboring values. So this is a very common technique in a lot of the STARK/PCP literature, and it was sort of just this small missing piece in an idea of Zac's, and also in getting the permutation argument in Sonic, speaking a bit imprecisely, to be simpler and more efficient. 07:26: Anna Rose: But what is Plonk? What is it exactly? 
Like maybe, I don't know if you can summarize it in a sentence or two for our listeners, but... 07:35: Fredrik: Do you want to try that? 07:36: Zac Williamson: I do want to try it. 07:37: Ariel Gabizon: Oh yeah. Well, I have this non-technical analogy that puts Plonk, and some recent works like Sonic and Marlin, in a certain place relative to the previous SNARKs and STARKs. So suppose there's been a murder and you're a suspect and you're taken into interrogation. And if it's too grim, you can think of the suspect as a prover and the interrogator as the verifier and forget the murder part. But if you like it, you can think about it. So suppose you're taken into interrogation. The STARK version of the interrogation is the investigator just gives you a blank piece of paper and says, write what you did last night in total free form. And then after you finish, the investigator can ask you anything they want. Like, the thing you wrote in the third line contradicts the thing... He can ask you anything. So that's sort of STARKs, in a sense. 08:41: And then these classic pairing-based SNARKs that Zcash uses: instead of a blank piece of paper, the investigator gives you sort of a survey form where you can't just write whatever you want. You can't write, well, I was actually abducted by aliens yesterday, so it couldn't have been me. He just gives you something like: yesterday night, I was either at the suspect's house or at my house or at this bar. So you just... 09:11: Anna Rose: Like a multiple choice question. 09:13: Ariel Gabizon: A multiple choice thing. Yeah. But then also the investigator, after you fill in this form, cannot ask you whatever he wants. He's also limited to certain questions, like: oh, what you filled in question two, I think it contradicts what you filled in question five. He can't ask you free-form questions. 
In a way, this new wave of what we can call polynomial commitment scheme-based schemes is the best of both worlds: from the verifier or investigator's point of view, the suspect needs to fill in the form and can't write whatever he wants, but the investigator can then ask any question. 10:01: Anna Rose: Interesting. Oh, cool. This is a great sort of mapping, Ariel. Thanks for doing that. The only thing it's missing is the zero knowledge part. 10:13: Ariel Gabizon: Right. It's totally missing that. 10:15: Anna Rose: Because it seems like... 10:18: Ariel Gabizon: I guess in these systems, adding the zero knowledge part is usually sort of easy, by somehow randomizing the answers. But just to explain how it maps technically: in all these systems, it always starts with some statement you want to prove, and you do what's called arithmetization. So we move from a statement of the form, this Zcash transaction is valid and I know the private keys of the input notes, to a statement of the form, I know some polynomials that satisfy some identity. Okay, now the question is what you do from here. And the STARK approach was: let the prover write whatever they want, and then we will check that what he wrote is really a low degree polynomial, and check at some random point that this polynomial or polynomials satisfy the required equation. 11:23: Sort of what happened in these pairing-based SNARKs like Pinocchio and Groth16 is we used homomorphic encryption, or it's not exactly homomorphic encryption, but let's call it that, to force the prover, or the suspect, to write certain things in a certain format. But what we lost from this is that the only practical homomorphic encryption-like thing we know is these elliptic curve pairings. And what that translates to is that the verifier can only ask questions that correspond to what people call R1CS. 
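The arithmetization and random-evaluation idea Ariel describes can be sketched with a toy example. Everything here, the field prime and the polynomials, is made up purely for illustration; real systems use cryptographically sized fields and commitments rather than plaintext polynomials.

```python
import random

P = 2**31 - 1  # a small Mersenne prime standing in for a real SNARK field

def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients, lowest degree first) at x mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

# Suppose arithmetization reduced our statement to the claimed polynomial
# identity a(x) * b(x) == c(x).
a = [3, 1]        # 3 + x
b = [2, 0, 1]     # 2 + x^2
c = [6, 2, 3, 1]  # (3 + x) * (2 + x^2) = 6 + 2x + 3x^2 + x^3

# The verifier checks the identity at a single random point. By the
# Schwartz-Zippel lemma, two distinct low-degree polynomials agree on very
# few points, so one random evaluation exposes a cheating prover with
# overwhelming probability over a large field.
r = random.randrange(P)
assert poly_eval(a, r) * poly_eval(b, r) % P == poly_eval(c, r)
```

The difference between the proof system families discussed here lies in how the verifier obtains those evaluations: STARKs use low degree testing at verifier-chosen points, while the pairing-based SNARKs bake a secret evaluation point into the reference string.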
12:10: Anna Rose: So R1CS is this form that you describe, in a way. It's these predefined answers that have to fall into a certain format. 12:23: Ariel Gabizon: So after the prover writes the answers, the verifier can ask consistency questions about the answers, and R1CS is a type of verifier question. Now, the general verifier question is a general polynomial, like: is this polynomial zero on your points? And in an R1CS question, the polynomial has to be of the form, this sum times this sum equals this third sum. So, for starters, it's degree 2, because it's a sum times a sum, but also it's a special form of a degree 2 thing. And the reason the verifier was limited to this form of question was because pairings are a very limited form of homomorphic encryption. And now this third approach that started with Sonic is saying: we'll use the pairing-limited homomorphic encryption not for the whole thing. We'll just use it for committing to a polynomial and then opening it. 13:48: Okay, I realized I forgot to say something. So part of the thing that's going on in the SNARKs, not the STARKs, is that the verifier is checking the equation not on a random point of his choosing, but on a secret point that's encoded in the parameters, in the common reference string. What started from Sonic is we only use the pairing to commit to a polynomial and open it, but then, after we've opened it, we can check not only these R1CS constraints, but a general verifier equation. 14:25: Anna Rose: So that's that third example where the prover has to prove within a format. 14:33: Ariel Gabizon: Yeah. 14:33: Anna Rose: Still within this sort of SNARKs pairing-based paradigm. But then, is the verifier actually using a STARK-like technique to check, one that is more freeform, or is it a different thing? 
14:50: Ariel Gabizon: The verifier is now checking some arbitrary polynomial equation, yeah, which is what the STARK verifiers would do, because they were not limited to this R1CS format. 15:05: Anna Rose: Is there also a connection to Bulletproofs, though, and Bulletproof techniques in some of these new polynomial commitment schemes? Or is that something else? 15:17: Zac Williamson: So the way that Bulletproofs can be used to encode arithmetic circuits is quite different to what we're doing. So I don't think there are any direct parallels between Bulletproofs and Plonk, other than that Bulletproofs came around several years ago, people iterated on it and produced different kinds of circuit structures and polynomial commitment schemes, and eventually that iteration led to, amongst other things, Plonk. So whilst there's nothing directly copied across from Bulletproofs, I guess kind of spiritually it's sort of an iteration on some concepts that Bulletproofs describe. 15:57: Anna Rose: It's a great, great second cousin or something. 16:00: Ariel Gabizon: Yeah. One thing I should say, so I think what happened a little in these last two years, which relates to Bulletproofs: this work on polynomial commitment schemes came out, this work of Kate-Zaverucha-Goldberg, and that's been extremely influential, so all these recent works, Sonic, Marlin, Plonk, are heavily based on that. But a by-product of that is that it made people realize what was maybe not explicitly in people's heads: that what's going on in the STARKs, you can think of it as, one ingredient in the proving system is a polynomial commitment scheme. And then, when people looked at this as a separate component, they thought, okay, so what else can I plug in as this component? 
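The "commit to a polynomial and then open it" interface can be illustrated by stripping the KZG scheme down to its underlying algebra. This sketch is deliberately insecure: real KZG hides the secret point s inside elliptic curve points and uses a pairing for the final check, whereas here everything is in the clear, just to show the commit/open/verify shape and the quotient-polynomial identity it relies on.

```python
P = 2**31 - 1  # toy field; illustration only, not secure

def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients, lowest degree first) at x mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def divide_by_linear(coeffs, z):
    """Quotient q with p(x) - p(z) == q(x) * (x - z), via synthetic division."""
    q = [0] * (len(coeffs) - 1)
    carry = 0
    for i in range(len(coeffs) - 1, 0, -1):
        carry = (coeffs[i] + carry * z) % P
        q[i - 1] = carry
    return q

# "Trusted setup": a secret evaluation point s. In real KZG the prover only
# ever sees s hidden in the exponents of group elements.
s = 123456789

p = [6, 2, 3, 1]              # the prover's polynomial
commitment = poly_eval(p, s)  # commit: a single field element

# Open at a verifier-chosen z: reveal v = p(z) and a witness q(s), where
# q(x) = (p(x) - v) / (x - z) is a polynomial exactly when v really is p(z).
z = 42
v = poly_eval(p, z)
witness = poly_eval(divide_by_linear(p, z), s)

# Verify: p(s) - v == q(s) * (s - z). The pairing lets a real verifier do
# this multiplication check without ever learning s.
assert (commitment - v) % P == witness * (s - z) % P
```

The commitment and the opening proof are each a single element regardless of the polynomial's degree, which is where the constant proof sizes discussed later come from.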
16:54: So in the context of Bulletproofs, and Zac can maybe correct me since you're a part of that work, a core ingredient there is a polynomial commitment scheme whose efficiency is in the middle between the STARK-based one and the one used in Plonk; it doesn't require a trusted setup, but it's not quantum resistant. And then this recent Supersonic work also asked what other polynomial commitment schemes we can get without a trusted setup. Asking that question and looking at groups of unknown order was the basis for the Supersonic work. Maybe it was a tangent from the original question, but we should mention that the work of Matter Labs was maybe the first to explicitly look at STARKs, or the low degree testing component of STARKs, as a polynomial commitment scheme without trusted setup. 18:05: Fredrik: I wanted to ask, actually, as you talk through all of these different things being developed in the history, where does Plonk fit into that story? Is it something that's been worked on for a really long time, or an idea floating around? Is it an iteration on something? As you mentioned, there was Sonic and then Supersonic and then there's Plonk, and now immediately something else building on Plonk. The whole past six months in the zero knowledge space have been very confusing for me, because there are so many protocols, so many protocols building on other protocols... 18:42: Anna Rose: And the names are sometimes sort of similar. Supersonic is an interesting one, because in conversation with Ben, and actually in the presentation he did at the summit, he made it very clear that Supersonic could also be SuperPlonk or SuperMarlin. It's the other side of the equation. So even though it sounds like Sonic Plus, it's actually not dealing with the same part of the problem. I guess that's what he explained. 19:09: Fredrik: So, how does Plonk fit into this world? 
Is it an iteration on something or is it a new innovation? I guess that nothing is ever new, but you know what I mean. 19:21: Zac Williamson: We started working on Plonk about May last year. We basically published the paper in three months. So the time between forming the idea and publishing it was rather short, it was a rather fast turnaround. Obviously nothing exists in a vacuum, and Plonk builds on a lot of techniques that were already in use and kind of assembles a lot of them in a different way and provides something a little bit new to the mix. So I think the idea of creating a permutation argument using multiplicative subgroups wasn't something that had been done before. But the reason why that's useful is because of techniques developed for Sonic, developed by Jens Groth and I believe a few others, and in turn, the idea of using a succinct polynomial commitment scheme to evaluate an arithmetic circuit had also been around for a while and had been developed out of the commitment schemes by Kate and his colleagues. So it was a melting pot of a lot of different ideas, maybe applied in a slightly different way to create something unique. 20:21: Fredrik: And that sounds like how most new papers get written. It's not like some of these other things that are literally just like, oh, this paper was published and we saw how we could use this other polynomial with the same construction and now we have a third thing. 20:39: Ariel Gabizon: So I'd like to maybe cast this a little in terminology that people have been using. What's been formalized both in the Marlin paper and the Supersonic paper is that these recent proving systems, I guess that started with Sonic, but really you can look at all the proof systems from the early PCPs to STARKs like this, can be thought of as having two ingredients. One is what's called the polynomial IOP, as it's called in Supersonic, or the algebraic holographic proof, as it's called in the Marlin paper. 
We defined something similar in Plonk, just called a polynomial protocol. So that's one ingredient. 21:29: That's the high level protocol ingredient, and the other, more low level ingredient is the polynomial commitment scheme. So where I would fit it in is: basically what Sonic did is, let's take the most efficient polynomial commitment scheme, the Kate et al. one, and plug it into a polynomial IOP that is similar to the one in the Bulletproofs work and in the paper prior to Bulletproofs whose name I forgot. 22:07: Anna Rose: Was it Bootle? I think that's the one. 22:08: Ariel Gabizon: Yeah, it was Bootle. 22:10: Anna Rose: Bootle. Yeah. 22:11: Ariel Gabizon: Bootle and a bunch of people, Bootle et al, and then Bulletproofs was an optimization of that. So again, the work of Bulletproofs and the prior work of Bootle, you can think of that as a certain polynomial IOP with a polynomial commitment scheme that has no trusted setup, based on discrete log. And in a sense, what Sonic did is, let's take a similar polynomial IOP but use this more efficient KZG polynomial commitment scheme to get sort of constant size proofs. They had to do a lot of tricks to make it work. So Plonk is using the same polynomial commitment scheme as Sonic but heavily optimizing the polynomial IOP part. 23:06: Anna Rose: Is this where Supersonic would actually be working more on the polynomial commitment side? 23:11: Ariel Gabizon: Exactly. So Supersonic is taking the Plonk polynomial IOP and replacing the KZG polynomial commitment scheme with the one they invented. 23:25: Anna Rose: Got it. 23:25: Ariel Gabizon: And the reason it's called Supersonic and not SuperPlonk is because Supersonic sounds much better. And I think at the time they wrote the paper, Plonk was not out yet. 
So they were using the Sonic IOP, and then towards publication, they saw that they could shave off some kilobytes if they moved to the Plonk IOP. 23:52: Anna Rose: Cool. That's so illuminating, I have to say. And for our listeners, there is a lot of background, I think, that would be very helpful. There's the video series that Justin's doing on polynomial commitments, which I think would be a really helpful thing to check out if this is interesting, because it goes into some of the things you're just describing, but with slides. A lot of the talks from the summit. And actually, after this, we can maybe have a chat about some other links that we can share with people to help them get inside this. 24:30: Fredrik: Should we get into talking about the properties of Plonk, maybe? I feel like we're at a point where we kind of know a little bit what it is, but let's put a high-level view on what the proving times are, what the proof sizes are, what the performance properties are. We've already touched a little bit on whether it has a trusted setup and how that works, but maybe we can run through the normal criteria that people look at when they look at zero knowledge proofs. 25:04: Zac Williamson: Yeah, very happy to do that. So it's always very difficult to do a direct apples to apples comparison, but we do have some numbers for the basics. For example, for Plonk, the proof sizes are roughly 512 bytes if you compress your elliptic curve points, which is what other proof systems do when they report their proof sizes. 25:29: Fredrik: Are they constant size, or what's the complexity? 25:35: Zac Williamson: So to be a massive pedant, I guess it's polylogarithmic size... Basically, the security parameters of your elliptic curve affect the proof size. But other than that, it's constant. 
So that 512-byte number was for the Barreto-Naehrig pairing-friendly curve that Ethereum natively supports. It gets a bit bigger, like 768 bytes, I believe, if you use the BLS12-381 curve. But that size does not really increase with the complexity of your circuit. 26:06: Fredrik: Yeah. 26:08: Zac Williamson: The verifier times, well, I guess it really depends on the verification algorithm and how quickly that's implemented. Our implementation takes about 1.3 milliseconds to verify a proof. 26:19: Anna Rose: What's the comparison there? What was the previous benchmark? 26:24: Zac Williamson: Very good question. So it's always a little bit hard to compare, because these are all done with different software implementations and sometimes with different hardware. But I believe the Groth16 and similar proving systems like GM17 are within the same range, like between 2 to 10 milliseconds. Plonk is generally slightly faster for two reasons. One, it requires fewer pairings. And the form of the pairings that we require is more primitive, because a lot of the inputs into the pairings are effectively constant. They don't change depending on your circuit, so you can do a lot of pre-computation to speed things up. 27:10: Fredrik: And what's the complexity of the prover? 27:13: Zac Williamson: Very good question. Because in my opinion, that's where the bottleneck is for all these proving systems. Once you have a somewhat constant verifier runtime and proof size, you know, once you get a proof size below a few kilobytes, it's small enough, and making it any smaller is nice, but not really critical. And the verifier runtimes for these kinds of polylogarithmic systems are all pretty efficient. For prover times, though, that's the bottleneck. 
And so we released some benchmarks recently that did as much of a head-to-head comparison as we could, where we benchmarked Plonk prover times versus Groth16 prover times. It's highly dependent on the circuit structure, so again, it's quite difficult to do an apples to apples comparison. 27:59: But when it comes to, say, MiMC hashes, our proving system is roughly two and a half times faster than Groth16, which we were quite happy with, because we have very good reasons to think that we can replicate that performance advantage more generally for a much wider range of circuit types as the year progresses. 28:18: Fredrik: Cool. 28:19: Anna Rose: So if somebody wants to find out a little bit more about these metrics, where do they actually look? 28:24: Zac Williamson: So our Plonk prover library and verifier library are all open source under our Aztec organization GitHub profile. The repository is called, okay, working title, Barretenberg. But I think we need to change it because the name is a bit... 28:41: Anna Rose: What's the reference? I don't get it. What's the reference? 28:45: Zac Williamson: Okay, so the reference is, a few months ago, maybe in a fit of foolishness, I decided to start naming some of our software libraries after various confectionaries. And so Barretenberg originally was an elliptic curve library for the Barreto-Naehrig curve that Ethereum supports. And so I figured it was a cross between Barreto-Naehrig and Battenberg. 29:06: Anna Rose: Wow. This is a Zac joke, isn't it? 29:11: Zac Williamson: Yeah, I'm gonna... Not even worth the laugh track from your sound deck. 29:21: Fredrik: That one actually fit. 29:21: Anna Rose: I didn't even need to do that. 29:23: Fredrik: Hit the wrong button and you hit the right button anyway. 29:27: Zac Williamson: It's gonna become like a sitcom. Like, zero knowledge… 29:33: Anna Rose: Should we just leave that in? That would be so weird for everybody. 
There's one sound, that's the only thing at all. 29:43: Fredrik: In 110 episodes, there's one sound bite. 29:51: Anna Rose: All right. So, okay, where do we go from here? Ah, so in looking back at some of the videos and papers and things that I could find on Plonk, there were a few concepts that seemed to be introduced, I mean, I don't think they're introduced for the first time ever, but maybe introduced for the first time in this context. There are some ideas like the Lagrange bases. You mentioned already the Kate commitment scheme, and I realize that that's used in a lot of different constructions. But maybe it would be helpful to first talk a little bit about that and redefine these things for our listeners. Because, yeah, we mention them, but it's not always clear why they're special. 30:39: Zac Williamson: So I came to this from the perspective of a software engineer, so I might mangle a few terms and conventions, so Ariel, please interject if I'm butchering things. So a Lagrange basis, effectively, is a different way of encoding a polynomial. So when it comes to, say, something like Sonic, the way that they encoded information within the polynomials was as witness values. So if you have a vector of values, they would create a polynomial where each vector element was a coefficient. 31:14: So if you wanted a vector that stored 1, 5, and 10, you would do 1 plus 5x plus 10x squared. And what we do instead is we use the Lagrange basis to encode vectors as polynomials. Effectively, we find a multiplicative subgroup. And so, for example, in these giant prime fields we're working with, we can use roots of unity. So if we want to encode a vector of four values, we would find four roots of unity. 
And because we're working with prime fields, big integers modulo a prime, what would normally be a complex number over the regular integers just has an actual value here. You can find four numbers in this prime field where, if you raise them to the power of four, you'll get one. 31:59: Therefore, they're all fourth roots of unity. And so what we do is we create a polynomial where, if you evaluate this polynomial at each of the four roots of unity, the result of the polynomial will be equal to each one of your vector elements. Effectively, your polynomial is an interpolation polynomial where, when your x-coordinate is one of these elements of the multiplicative subgroup, your y-coordinate is going to be one of your vector elements, and then you find the interpolating polynomial that connects all the dots. 32:31: Ariel Gabizon: Yeah, maybe I can try to paraphrase. So the standard way to represent a polynomial is by what we call the coefficients: you'll write the polynomial as five times x plus three times x squared plus two times x cubed. You can think of these monomials, these powers 1, x, x squared, x cubed, as the basis with which you're representing the polynomial. And sometimes it's convenient, for example in Plonk, to represent your polynomial in terms of its values, not its coefficients. And that's what a Lagrange basis allows you to do. So what exactly a Lagrange basis is: suppose you have some set of n points in your field. The Lagrange basis for this set of points is a set of n polynomials, where the i-th one is one on the i-th point out of these n and zero on all the rest. 33:46: The point is, now, instead of writing your polynomial as a sum of these powers x, x squared, x cubed, you write it as a sum of the Lagrange basis polynomials. 
You're in a sense representing the polynomial by its values, because the coefficient of the i-th Lagrange polynomial will just be the value of the polynomial at the i-th point, right? Because if you try to evaluate it on the i-th point and you have this sum, then everything will become zero except this i-th Lagrange one, because the others are zero on the i-th point. So the value you will get is just the coefficient that you put on this i-th Lagrange polynomial. 34:38: Anna Rose: Cool. So this is like a technique. Where does it come from? 34:42: Ariel Gabizon: It's very heavily used when designing programs for STARKs. And so, one reason it's very useful is, so I was talking about just a general set of points and then doing a Lagrange basis for them. 35:05: Anna Rose: I see. 35:06: Ariel Gabizon: But usually what people do, they don't look at a general set of points, they look at what's called a multiplicative subgroup. And what that means is just a set of points that are powers of the first point. So the set of points will be g, g squared, g cubed, g to the fourth, up to g to the n. You'll pick a g such that it's what is called an nth root of unity, so g to the n will just be one. 35:37: Anna Rose: Oh, wow. Okay. 35:38: Ariel Gabizon: So one thing that's very convenient in these sets of points is that it allows you to access a neighbor of a point very easily. Because we can think of, for example, g cubed as the next point after g squared. So suppose you're at g squared and you want to get to g cubed, you just multiply by g. And generally, wherever you are, you can get to the next point by just multiplying by g. So this seems maybe a little trivial. One reason it's good is, in the context of these proving systems, it's good to have things that fulfill the functions you need and are extremely low degree. And what's good about this neighbor function is that it's just degree one, right? 
It's the function that goes from x and maps it to x times g, where g is just a fixed thing. 36:48: And you could say, why do we care about a neighbor of a point? Well, I'll give the example of the Fibonacci sequence, which I think StarkWare uses a lot. There, you have a bunch of values and you want to check that every value is the sum of the preceding two. So that's an example of where these neighbor functions come in, because I need a way to say in math, in low degree polynomials: look at my value compared to the next one or the preceding one. And in these multiplicative subgroups, this notion of the next point just corresponds to multiplying by g, and the preceding point just corresponds to dividing by g. Yeah, so this sort of thing has been used a lot. 37:54: Another small technical thing that is useful in multiplicative subgroups is that a lot of times, even the verifier needs to compute this polynomial that is zero on all the points of the multiplicative subgroup: zero on g, g squared, up to g to the n. In general this is going to be some degree n polynomial, which will take like n operations to compute. But you have this very convenient property, that we use in Plonk and many proving systems, PCPs used it, that this polynomial that vanishes on the subgroup will just have the very sparse form x to the n minus one. So that's something that is very important in the PCP literature, in STARKs: you're looking for things that, although they have high degree, have a very sparse representation. Like here, it's just two monomials, x to the n minus one. So a verifier can compute them really quickly, even though their degree is much higher than what the verifier runtime should be. 
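Putting these pieces together, here is a small worked example in a toy prime field (p = 17, chosen only for illustration; real systems use huge fields): encoding a vector as evaluations over a multiplicative subgroup of roots of unity, stepping to a neighboring value by multiplying the evaluation point by g, and checking the sparse vanishing polynomial x^n - 1.

```python
P = 17   # tiny prime field, illustration only
g = 4    # a 4th root of unity mod 17, since 4**4 % 17 == 1
domain = [pow(g, i, P) for i in range(4)]  # the subgroup {1, g, g^2, g^3}

def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients, lowest degree first) at x mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def lagrange_interpolate(xs, ys):
    """Coefficients (lowest degree first) of the polynomial with p(xs[i]) == ys[i]."""
    n = len(xs)
    coeffs = [0] * n
    for i in range(n):
        # Build the i-th Lagrange basis polynomial: 1 at xs[i], 0 at the others.
        basis = [1]
        denom = 1
        for j in range(n):
            if j == i:
                continue
            # Multiply the basis polynomial by (x - xs[j]).
            basis = [(shifted - xs[j] * orig) % P
                     for shifted, orig in zip([0] + basis, basis + [0])]
            denom = denom * (xs[i] - xs[j]) % P
        scale = ys[i] * pow(denom, -1, P) % P
        coeffs = [(c + scale * b) % P for c, b in zip(coeffs, basis)]
    return coeffs

values = [1, 5, 10, 7]                       # the vector to encode
poly = lagrange_interpolate(domain, values)  # evaluation form -> coefficient form

# The polynomial agrees with the vector on the subgroup.
assert [poly_eval(poly, x) for x in domain] == values

# Neighbor access: multiplying the evaluation point by g steps one slot over.
assert poly_eval(poly, domain[1] * g % P) == values[2]

# The vanishing polynomial of the subgroup is the sparse x^n - 1:
# zero on every subgroup point, yet only two terms, so cheap to evaluate.
assert all((pow(x, 4, P) - 1) % P == 0 for x in domain)
```

The same shape, with n in the millions and a 256-bit prime, is what keeps the verifier's x^n - 1 evaluation cheap in practice.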
39:16: Anna Rose: So now this is cool, because we now understand one of the main... And as far as I understand, using Lagrange bases, using that technique, is one of the big differentiators in Plonk, right? This is the thing, I mean, you just sort of mentioned, this is a thing that changes Plonk from the Sonic construction. What else changes? What else has changed in Plonk? 39:36: Zac Williamson: It's mostly to do with how you structure your circuits, like what the arithmetization of the circuit is. One of the things that's been quite consistent with the universal SNARKs has been a desire to kind of replicate rank-1 constraint systems, because the current non-universal state of the art uses rank-1 constraint systems for the reasons Ariel mentioned. And there's a large corpus of work that constructs circuits in R1CS form, and so there's been this desire to kind of replicate that in the universal setting. The second thing, which was quite apparent to us early on, basically staring at this Plonk paper and trying to figure out how to make it fast and efficient, was that rank-1 constraint systems are actually an absolutely terrible way of representing a Plonk circuit. 40:20: It's extremely inefficient, and there are much more efficient ways of representing your circuit structure that take advantage of some of the unique properties that Plonk has. Specifically, one of them is that you can evaluate a large number of different arithmetic expressions that are operating over the same state, which might sound a bit abstract, but maybe one way to give an example of that is in SNARK circuits. So imagine your circuits are constructed of addition and multiplication gates. You have two wires feeding into each gate and one wire feeding out, and the output wire can then be fed into multiple input wires. There's a very common requirement that some subset of all of these wires in your circuit need to be either one or zero. They're Boolean wires.
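The gate structure behind this can be sketched with the Plonk gate equation q_L·a + q_R·b + q_O·c + q_M·a·b + q_C = 0, where the selector values q_* decide what each gate does. A minimal sketch (the field size and all concrete wire values are made up for illustration):

```python
p = 101  # illustrative small prime field

def gate(qL, qR, qO, qM, qC, a, b, c):
    """Plonk-style gate: satisfied when qL*a + qR*b + qO*c + qM*a*b + qC = 0 mod p."""
    return (qL * a + qR * b + qO * c + qM * a * b + qC) % p == 0

# Addition gate a + b = c: selectors (1, 1, -1, 0, 0)
assert gate(1, 1, -1, 0, 0, a=3, b=4, c=7)

# Multiplication gate a * b = c: selectors (0, 0, -1, 1, 0)
assert gate(0, 0, -1, 1, 0, a=3, b=4, c=12)

# Boolean constraint a*(a - 1) = 0, i.e. a^2 - a = 0: selectors (-1, 0, 0, 1, 0)
# with the b wire set equal to a. Satisfied only when a is 0 or 1.
assert gate(-1, 0, 0, 1, 0, a=1, b=1, c=0)
assert gate(-1, 0, 0, 1, 0, a=0, b=0, c=0)
assert not gate(-1, 0, 0, 1, 0, a=2, b=2, c=0)
```

The Turbo-Plonk idea Zac describes next goes further: extra selector terms are added to this equation so a single gate can simultaneously enforce, say, an arithmetic relation and a booleanity check, rather than spending a separate gate on each.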
41:11: And typically this requires one extra constraint in your circuit, one extra gate, for every wire that you want to constrain to be a bool. However, in Plonk, you can add another arithmetic expression into the Plonk circuit structure, so that effectively every gate can be either an addition gate or a multiplication gate, and optionally also a Boolean constraint gate. So you can just decide which subset of your wires are going to be bools and get that property for free, without adding any extra prover overhead. And then you can take this to quite an extreme degree, which is why we managed to, at the end of last year, get benchmarks which evaluated MiMC hashes over twice as fast as Groth16. And this is because you can add a lot of custom structure into your gates. So instead of a gate being an addition gate or a multiplication gate, it can be, well, it can be a MiMC hash round gate, or it can be a Pedersen hash round gate. 42:11: Something that we realized relatively early on is that this kind of custom gate structure allows you to make hyper-efficient proofs for very common circuit types. It does require completely abandoning R1CS as a way of representing your circuits. So that's just one area, I guess, where Plonk is diverging from other proof systems. 42:42: Anna Rose: Is this in any way related to Turbo-Plonk, which I've heard about through the grapevine? 42:48: Zac Williamson: Yeah, so I called it that in a talk. I believe I gave it at your conference in California, and for want of a better name, that's still what we're calling it. But yeah, Turbo-Plonk is basically formalizing this custom gate structure, creating a set of these custom gates that are highly desirable for common circuit types, and packing it all up into a single proof system. 43:16: Anna Rose: What state is Turbo-Plonk at, though?
It sounds like this is like, you're working on it, it's mid... Or does it have documentation? Does it exist somewhere? 43:28: Zac Williamson: So we're writing a Git book about it, and we hope to make at least some sections public soon. 43:38: Anna Rose: Cool. 43:39: Ariel Gabizon: An interesting thing we've noticed very recently is that you don't even need to think of it as a circuit anymore. The way I'm thinking about these Turbo-Plonk programs right now, and this is maybe also somewhat inspired by my STARK work with Eli many years ago, is that you have a program with some state. The state has some width. Say if the width is three, then the state is three field elements. And then the program is defined by some polynomials. And the polynomials are checking that some relation holds, that the state transition is correct, that a certain relation holds between the values in the i-th row and the values in the (i plus 1)-th row. 44:39: So you can think of a program like this as being sort of a program without memory: there are just, say, three registers, and the state needs to proceed according to certain rules. But where the power of Plonk comes in is that it, in a sense, gives you memory. So the prover will show, hey, I know a valid sequence of states for this program. But what the Plonk permutation argument gives you is that you can also, in a sense, have memory in the program. You can say, for instance, the second cell in the 4th row should be the same as the 15th cell in the 9th row. So you can make arbitrary demands like that, and you can think of it as giving the program sort of memory access. 45:39: I do want to sidetrack a bit and give credit: Plonk is really heavily based on something called the Bayer-Groth permutation argument.
I would say Plonk works well because we found a very concise version of that permutation argument using multiplicative subgroups, and we stripped out everything else we didn't really need. So, yeah, I think these programs, with this sort of memory, could be a good way to design SNARKs. 46:20: Anna Rose: This is a circuit-free model that you're describing here? Would you call it that? 46:25: Ariel Gabizon: Yeah, so circuits are a special case of these programs, you could even say an almost degenerate case. The program typically checks things between this row and the next, and in the circuit case, you're only checking things inside a row. The row values will be the left input, right input, and output of the gate, and you'll have this sort of degenerate transition that only checks consistency inside a row. But of course you will heavily use this memory thing for the wiring constraints of the circuit. So if, for example, the output of the second gate goes into the seventh gate, then this will translate to using the memory constraints to check that something in the second row is equal to something in the seventh row. 47:32: Zac Williamson: So these transition constraints that Ariel was describing, they're incredibly powerful, and they allow you to represent circuits with far fewer constraints than, say, something like a rank-1 constraint system or the basic Plonk gates that we had in the paper last year. And so that massively opens up what's practical to build with these structures, which has then allowed us to massively expand the horizons of what we want to accomplish with this at Aztec and what we're planning on building. And this is all culminating in what we are, as a working title, calling DARK contracts.
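The "program with state" view Ariel describes can be sketched as a trace table plus two kinds of checks: a transition rule relating row i to row i+1, and copy constraints demanding equality between arbitrary cells (the "memory" that Plonk's permutation argument enforces succinctly). Everything concrete here, the field, the width-3 trace, the Fibonacci-like rule, is an illustrative toy:

```python
p = 101     # illustrative small prime field
width = 3   # each state row holds three field elements ("registers")

def transition_ok(row, next_row):
    """Toy transition rule: third register of the next row must equal the
    sum of the first two registers of the current row (Fibonacci-like)."""
    return (row[0] + row[1] - next_row[2]) % p == 0

# A valid execution trace for this toy program.
trace = [
    [1, 1, 0],
    [1, 2, 2],   # 1 + 1 = 2
    [2, 3, 3],   # 1 + 2 = 3
    [3, 5, 5],   # 2 + 3 = 5
]
assert all(transition_ok(trace[i], trace[i + 1]) for i in range(len(trace) - 1))

# Copy constraints: pairs of (row, col) cells required to be equal. A real
# Plonk prover shows all of these at once with one permutation argument;
# here we just check them directly.
copies = [((1, 1), (2, 0)),   # cell (row 1, col 1) == cell (row 2, col 0)
          ((2, 1), (3, 0))]
assert all(trace[r1][c1] == trace[r2][c2] for (r1, c1), (r2, c2) in copies)
```

In this picture, a circuit is the degenerate case where the transition rule only relates cells within a single row (left input, right input, output), and all the wiring between gates is expressed through the copy constraints.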
48:08: Anna Rose: Ooh, it's really good that you mentioned this, because that was something we definitely wanted to talk about in this episode: how does all of this relate back to Aztec? Because now Ariel's working at Aztec with you, and we knew from our previous episode roughly what Aztec's goals were, but yeah, what's the connection? 48:27: Zac Williamson: Well, the connection is, the Aztec protocol, which we're about to launch on Mainnet this month, enables private value transfers. You can create private tokens and trade them in a confidential way. So the values are encrypted, but the identities are not. Once that protocol had been developed, we were looking for something much more general purpose to take Aztec to the next level. Specifically, what we would very much like are privacy-preserving smart contracts. So a world where not only is the identity of a smart contract sender hidden, not only is the data that they're manipulating and updating encrypted and not viewable by anybody, but the smart contract code that they're executing is also hidden, and the act of calling one of these smart contracts is also hidden. 49:18: And despite basically having this kind of giant black box where you can't see anything, you still use a public blockchain and a public blockchain consensus mechanism to enforce the correct execution of these dark contracts. So I can basically make a statement to you where I say: I'm not going to tell you who I am, I'm not going to tell you what dark contract I'm calling, and I'm not going to tell you about any of the state variables that have changed, but I can prove to you beyond a shadow of a doubt that whatever the hell I was doing, I followed the rules. Effectively, I followed the protocol, the logic and semantics of the smart contract program, whatever that may be. 49:55: Anna Rose: So this is super interesting, because like ZEXE, they had this concept of data privacy and then function privacy.
And what it sounds like is that, like in their case, this function privacy, so the actual function, the purpose of the transaction, is also made private. In their case, it's built into the blockchain, the ZEXE blockchain concept. But in your case, you're acting on top of an existing chain, and it sounds like you're trying to bring this function privacy into a public chain, which I don't understand how it works, but it's very cool sounding. 50:37: Zac Williamson: That's exactly what we're trying to do, yes. Bring ZEXE's function privacy, with a few other added features on top, to public blockchains, starting specifically with Ethereum. And yeah, it's quite an ambitious project. It's not what we're currently implementing using Plonk, because it's going to be a 12-month-long project, but that's what we're building towards. 51:04: Anna Rose: So I want to say thank you so much to both of you for coming on and helping us to map out this space to better understand Plonk. There's so much that I've actually learned over the last hour that I think is going to be helpful for me to better understand the new kinds of paradigms, new constructions, that come out after this. So I want to say thank you. But before we sign off, I actually wanted to ask you what is exciting. Like, what are you excited about right now? What new things coming out are you kind of like, whoa, this is new, this is neat? 51:37: Zac Williamson: Well, I just want to say it's been an absolute pleasure to be on, so thank you for that. What are we excited about? Well, obviously, I mean, I'm biased to the hilt, so I'm not going to talk about the Plonk stuff I'm excited about. With regards to how the space is developing, I think the different kinds of polynomial commitment schemes that have been coming up recently, particularly things like DARK, being able to commit to polynomials in a transparent way, using groups of unknown order, potentially using class groups.
It's absolutely fascinating, and I'm incredibly excited to see how that develops. 52:12: Ariel Gabizon: Yeah, for me personally, I feel we've gotten to a point now where we have an efficient enough and simple enough system with Plonk, and I'm excited about seeing it used in applications and how that's going to develop in the next few years. 52:34: Anna Rose: What about these... Like, so there's things like SLONK, which Justin Drake recently released, which is a variation. Are you also encouraging or excited about people taking Plonk in a different direction? 52:46: Ariel Gabizon: Well, it's really great to see a lot of follow-up work. It's really nice to see that. Specifically, what I think has potential is this joint work with Justin called SHPlonk. 53:06: Anna Rose: Oh, it's called Sh... Really? 53:14: Fredrik: Maybe that's a working title. 53:16: Anna Rose: Your names. 53:17: Ariel Gabizon: I think it's perfect. It's perfect. 53:21: Anna Rose: So Plonk, by the way... Can we just define this at the very end of the episode? Plonk, as I read... I looked it up, it means like bad wine or something like this in British English. 53:32: Zac Williamson: Yeah, it's English slang for cheap, low-quality wine, which I felt was appropriate because Plonk the cryptographic protocol and Plonk the booze both give me a severe headache when consuming them. 53:46: Ariel Gabizon: But it's used more generally as slang for various bad things. 53:51: Anna Rose: So cheap and bad. 53:52: Zac Williamson: It's got a certain connotation. Like cheap and cheerful. 53:58: Anna Rose: But available to everybody, maybe. 54:02: Ariel Gabizon: I will tell a story. I have a habit of submitting things to ePrint before they're ready, because then it puts pressure on me to work on them, and usually it takes ePrint three to four days to actually publish.
So we submitted to ePrint with Plonk as a working title, and this time it took them only half an hour, not four days. And Zac wrote me in the morning, "I think I'm having buyer's remorse about Plonk." "Oh, it's already up." Well... 54:39: Zac Williamson: Yeah, this is entirely Ariel, I'm going to blame Ariel for this, because he told me, he was like, yeah, we'll just post it to ePrint, we'll have a week to fix it up and figure out a name. And I'm like, yeah, okay, fine, that sounds about right. And then the next morning... 54:49: Anna Rose: I think it's funny. 54:50: Zac Williamson: Like, why are people messaging me about this paper on ePrint that's supposed to not be public? Anyway, yeah, it kind of just took off, we lost control. 55:03: Anna Rose: And now you're going in all sorts of fun other directions. Well, I guess that means we have to keep our eyes out. I want to say thank you again for coming on the show and talking to us. 55:13: Fredrik: Thank you very much. 55:13: Ariel Gabizon: Thank you. 55:14: Zac Williamson: Thank you. 55:15: Anna Rose: I hope we get to have you back again pretty soon to hear what comes of all of this. And yeah, to our listeners, thanks for listening. 55:23: Fredrik: Thanks for listening.