00:05: Anna Rose Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in Zero Knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online. 00:27: This week we are doing a bit of a special episode for you. In it, Nico from Geometry Research and I are doing the ZK Jargon Decoder episode. So you know Nico as my sometimes co-host of the show. The episode is inspired by the ZK Jargon Decoder website, as well as a live session that we hosted to kick off ZK Hack. So we decided to record a version of this for the show as well. The ZK Jargon Decoder is Nico's live document where he defines terms and concepts in ZK. And we cover some of these topics for this episode and generally offer a bit of an intro to the field of zero knowledge. The thing that is a bit unusual about this one is that we actually used visuals, so slides and some graphs. So while the ZK podcast is usually primarily an audio only show, you may wanna head over to the YouTube channel to watch this one to really follow along. And while you're there, maybe you want to subscribe. We have other content from our events, as well as the ZK Study Club videos over there, so you might want to check it out. 01:34: So we aimed to be quite accessible to the folks who are earlier on in their ZK learning journey, but in true form and classic form, we did a couple of times go down some sort of rabbit hole and went a little deeper than we meant to on some of the topics. Still, I'd say we continuously came back to higher level concepts, and I would call this episode quite introductory. So yeah, let us know what you think. If you like this and want us to do another session, leave some comments in the YouTube video or on Twitter. Now before we kick off, Tanya will share a little bit about this week's sponsor. 02:09: Tanya: Launching soon, Namada is a proof of stake L1 blockchain focused on multi-chain asset-agnostic privacy via a unified set. Namada is natively interoperable with fast finality chains via IBC and with Ethereum using a trust-minimized bridge. Any compatible assets from these ecosystems, whether fungible or non-fungible, can join Namada's unified shielded set, effectively erasing the fragmentation of privacy sets that has limited multi-chain privacy guarantees in the past. By remaining within the shielded set, users can utilize shielded actions to engage privately with applications on various chains, including Ethereum, Osmosis, and Celestia, that are not natively private. Namada's unique incentivization is embodied in its shielded set rewards. These rewards function as a bootstrapping tool, rewarding multi-chain users who enhance the overall privacy of Namada participants. Follow Namada on Twitter, @namada, for more information, and join the community on Discord, discord.gg/namada. And now, here's our episode. 03:10: Anna Rose Today, Nico and I are doing a special episode. It's called a ZK Jargon Decoder session. The goal of this is to bring up some terms that are often used in ZK land and define them in plain English. Welcome, Nico. 03:24: Nico Mohnblatt: Thanks, Anna. 03:25: Anna Rose So I think most of the audience is familiar with you. Nico is sometimes the co-host of the show. But, I mean, I think the last time you actually introduced yourself was over a year ago. So why don't we just quickly remind folks what you do? And maybe you can tell us a bit about Geometry Research. 
03:42: Nico Mohnblatt: I am a cryptography researcher. So I look into protocol designs, proof systems, how do we make them secure, analyzing other people's systems and make sure those are secure as well. And I guess building the latest and greatest in the crypto papers. 04:01: Anna Rose Nice. 04:02: Nico Mohnblatt: As in, not the ones I write, but building other people's papers into code. And so right now I work at Geometry Research, where this is kind of the research team that you might have known from Geometry, but we are now spinning out into our own organization and opening up collaborations beyond the scope of what we had so far. 04:26: Anna Rose Amazing. So now let's talk about the ZK Jargon Decoder. This is an earlier project. Like when did you first start creating the ZK Jargon Decoder? 04:36: Nico Mohnblatt: So this was a few months into me starting at Geometry. 04:42: Anna Rose Okay. So what timeframe is it... Has it been around for about a year or under that? 04:46: Nico Mohnblatt: This would have been, I think, around just before summer 2020. 04:52: Anna Rose All right. 04:53: Nico Mohnblatt: No, no, no, 2020, 2021. 04:56: Anna Rose Okay. 04:58: Nico Mohnblatt: Or '22. I don't know. 04:59: Anna Rose Oh wait, how long have you been here, Nico? 05:01: Nico Mohnblatt: Yeah. I don't know. 05:04: Anna Rose Is this what happens when you work in ZK? 05:06: Nico Mohnblatt: I think so. Time... 05:07: Anna Rose Time stops. 05:08: Nico Mohnblatt: Just a concept. 05:10: Anna Rose Making sense? 05:11: Nico Mohnblatt: I forgot. Anyway, whenever I joined Geometry, that was February of some year, and then just before summer of that year is when they did it. 05:18: Anna Rose Got it. Okay, let's tell people what the ZK Jargon Decoder kind of was originally. Now we're doing a ZK Jargon Decoder session, but it's kind of an extension on something that exists. So yeah, what is it? 05:28: Nico Mohnblatt: Yeah. So it was this collection of definitions for words that I often came across and often came across in different contexts and sometimes with different meanings. So it was stuff like I would read through documentation for, let's say, Zcash and they talk about like, yeah, we have our private inputs and we have our circuits and we have our arithmetization and we have our witness. And I was very confused because then you start trying to map things like, okay, private inputs is part of the witness, but not necessarily. But then you go read someone else's documentation and they say, yes, my witness is the private inputs. Anything else is something else. What about the advice? And like all these things were very confusing. So I decided to just like, all right, write them out, have a little reference guide for myself and eventually I thought, you know what, this might be useful for others. 06:19: Anna Rose And you published it as a website. 06:21: Nico Mohnblatt: Yeah. With a very quiet tweet saying like, oh, hey, I'm doing these definitions. And then Kobi retweeted it at the time. 06:31: Anna Rose Yeah, god bless. And then I think we put it in ZK Mesh too that month. 06:35: Nico Mohnblatt: Yeah. Yeah, yeah, yeah. 06:36: Anna Rose It kind of went a little ZK viral, I guess. But I thought of it... When I first saw it, I thought of it as like a glossary of terms. But I know that you've kind of corrected me on that, where it's like, it's not a finished thing. It's like a live document, right? 
06:51: Nico Mohnblatt: Yeah, and I try to be careful about calling it a dictionary or glossary, because when you do that, people expect everything to be exactly right and exactly perfect, like almost encyclopedic, which is not the goal here. I'm trying to be more informal, more practical. Like, yes, there will be some weird edge cases where my definition doesn't really match the term, especially if we're talking about very mathy objects. Like if you ask me to define elliptic curves in a very practical way, it won't be super precise. But at least we'll get a sense of what it means and how to talk about it. 07:26: Anna Rose And maybe that's why, like, even at the start of this interview, we talked about, or at the start of this episode, I've basically said that it's kind of in plain English, like in terms that people who might not be that deep in it or aren't studying it would still somehow be able to understand. 07:43: Nico Mohnblatt: It's not... It's also not fully plain English. Like there is some assumption on your understanding of math or your familiarity with some math concepts, but usually it will be like either end of high school or beginning of an undergraduate degree kind of level. 07:59: Anna Rose Nice. For some additional context to folks joining us, we actually yesterday hosted kind of a live version of this session. We did it as part of ZK Hack IV and it was really good. But also quite chaotic. Like originally we were gonna actually use that version for this episode, but then we thought... Like after the session, we thought, you know what, we were kind of trying to take questions from the audience. Also, the audience's questions about topics sometimes came later, so we ended up jumping around a lot from topic to topic. We thought, why don't we sit down and do this again, kind of in a little bit more of a preplanned way where we can go through these topics. Yeah, a little bit more structured. 08:43: Nico Mohnblatt: I mean, I had so much fun yesterday and it was amazing having so much audience participation, but it also meant for me that I sort of got caught into the wind of it and like the whirlwind of like, oh my god, all these questions coming into chat, like I have to be super quick with my answers. 08:56: Anna Rose For sure. 08:59: Nico Mohnblatt: And I think I went for speed over clarity sometimes. 09:03: Anna Rose Yeah, I also, as the kind of other person on stage with you, was trying to listen to your answers so that I could maybe ask another follow up question, but I was also kind of managing a lot of these questions coming in. So I was a bit distracted. 09:19: Nico Mohnblatt: ZK multitasking is hard. 09:22: Anna Rose But yeah, this time around, we are just the two of us and we're going to be going over... Some of the questions will be similar to what we did yesterday, so there is a video out there. You're welcome to see both of them, but you don't need to. I think you can also just watch this one. Some of them will be similar to that, but I know that there were a bunch of topics we didn't get to cover, and so we'll also be talking about those. So why don't we kick off? 09:44: Nico Mohnblatt: Yeah. 09:45: Anna Rose So I think we're going to start where you start with the ZK Jargon Decoder on the website, where the first question is like, what does it mean to prove when we say that word in ZK? What are we doing? 09:57: Nico Mohnblatt: Yeah. And the reason I had that was I was writing up definitions without having this sort of intro.
And I found that I needed this context to really explain things. And I couldn't have just one article where I put all the context and people go to, as in, if I just kept the one word, one definition format. So I thought, all right, let's have a bit of a broader thing. So what does it mean to prove? I have slides still from yesterday. And so what I started with was in mathematics, a statement is either true or false. And the game you want to play is to decide whether statements are true or not. And to make things more concrete, I was giving this example, like statement 1, the number 54 is even. So that statement, you can listen to it, is simple and we can very quickly decide whether it's true or not. 10:54: Anna Rose Totally. Because you're looking at one number and one rule basically. 10:58: Nico Mohnblatt: Yeah. And the nice thing is like the question, is it even? It doesn't matter how big the number is, you can solve that very easily. Like if I give you a number with 50 digits and it ends with a four, you can tell me, yeah, of course it's even. And you can prove it to me. But if I give you this next statement, like statement number 2, the number... I'm going to read this out if you're listening to the podcast, 25,890,323 can be factored. So can that be factored? So if we look at it, we have a few rules from high school that we remember like, okay, it's not even, so you can't divide by two. We can probably add all these digits up, see if it's divisible by three, but it won't be. And I can tell you because I came up with this number, so I know how it's made. And so this statement is actually hard to decide. If you look at it, the best way you have to figure out whether it's true or not is to try to come up with the numbers that make up 25,890,323. And that's basically just trial and error. The best you can do is trial and error. 12:12: Anna Rose But the proving part of this, like is the proof then just like when you have those numbers, that's proof that it can be factored? 12:22: Nico Mohnblatt: Exactly. 12:23: Anna Rose That's like how you could prove? 12:24: Nico Mohnblatt: Yeah, without having you do all the trial and error, I can convince you right now that this can be factored just by giving you the numbers. So my proof is, all right, the number 4567. So 4567 and the number 5669, when you multiply them together, we get this big 25 million number. And yeah, feel free to take out your calculator. Fact check me. So my proof is these two numbers and your work as someone verifying the proof is just multiplying them together and seeing if it matches. This is a lot less work than doing the trial and error of factoring yourself. Yeah. So this is very nice. And this is what we call in complexity theory, NP problems. So problems where it's hard to solve on your own, but if you're given a candidate solution, it's easy to check that the solution is right. So that's what it means to prove, is to give this extra information that allows you to decide whether the statement is true or not. 13:25: So here the proof is just these numbers and you can verify blah, blah, blah, NP problems. There is a bit more jargon that's associated with these statements. So here you saw like our statement number 2, the number 25 million blah, blah, blah can be factored. We could use it also as a fill in the blank sentence, right? The number something can be factored. And you can fill that with other somethings. And again, we can evaluate whether the statement is true or false.
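As a concrete aside, the verifier's side of that factoring example fits in a few lines of Python. This is only an illustrative sketch (the code is not from the episode; only the numbers come from the example above):

```python
# The statement: "25,890,323 can be factored" (non-trivially).
# The proof is just the two factors; verifying is a single multiplication.

INSTANCE = 25_890_323        # the specific number in the statement
PROOF = (4567, 5669)         # the two numbers Nico gives as the proof

def verify(instance: int, proof: tuple[int, int]) -> bool:
    """Verifier's work: check the factors are non-trivial, then multiply them back."""
    p, q = proof
    return p > 1 and q > 1 and p * q == instance

print(verify(INSTANCE, PROOF))   # True -- far cheaper than factoring by trial and error
```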
So in that case, we have all this jargon here. So the number 25,890,000 was a specific instance of the decision problem. Recall that the decision problem was deciding whether the statement is correct or not. 25 million was a specific instance. We could have different instances. Five would be a different instance. And that one, you can tell me quite quickly, I cannot factor it. In our case, the proof or the witness was this list of two numbers, and it witnesses the fact that the instance made a satisfying statement, right? Made a true statement. 14:33: Anna Rose Okay. 14:34: Nico Mohnblatt: So that's why we call it a witness. It's like, if we had the instance on its own, we can't judge it, but then we can call the witness to the bar, and once we have the witness, we know whether or not the instance is... 14:46: Anna Rose In this case, the witness is the proof, right? It is the... 14:50: Nico Mohnblatt: Yeah. 14:51: Anna Rose Okay. 14:51: Nico Mohnblatt: Yeah. And we'll see in a second why you might not want that. Like sometimes my witness is private, or sometimes it's too long to be sent as a proof. That kind of motivates the need for different proof systems, different ways of proving. This is what we call the trivial proof system. I send you the witness, you check that it's correct. It's trivial because it is a proof, but I haven't done work, right? I've just sent you the witness and you decide. So yeah, instance, witness, very useful. Those are the main ones. Relation, language, and predicate are jargon that you'll come across if you're reading more technical material. I don't think we should cover it on air, but I do recommend going through the article in the Jargon Decoder and it's nicely explained in my opinion. 15:39: Anna Rose Oh, I do want to... Maybe I could just highlight the predicate because I was going to ask you about like the factored. I guess that's the... Is that the predicate? Like the action, the thing that you're trying to prove is the predicate of the proof. 15:53: Nico Mohnblatt: So exactly. We have that statement. The number 25 million blah, blah, blah can be factored. If I now make it into a fill-in-the-blank form, so the number something can be factored, this is no longer a statement because until I fill that blank, I can't decide whether it's true or false. 16:07: Anna Rose True, okay. 16:08: Nico Mohnblatt: So these sort of fill-in-the-blank sentences or almost-formed statements, we call those predicates. 16:17: Anna Rose Are predicates ever in the form of a function? 16:21: Nico Mohnblatt: Yeah, absolutely. Well, this is a function, right? The number something can be factored and I can fill up the something with whatever inputs I give to my function. 16:29: Anna Rose Okay. Would it... Like the X would be the empty thing in a function example and then everything else would be the predicate potentially? 16:37: Nico Mohnblatt: Yes, and the output would be true or false. 16:40: Anna Rose Cool. 16:41: Nico Mohnblatt: Yeah. But yeah, this brings us nicely to... since you're asking like, oh, there's no ZK here, right? We're just sending the witness. And you're absolutely right. This is the trivial proof system, and this is the thing: sometimes my witness is private and I don't want to reveal it. Like if this were, let's say we're doing RSA and these are like, these factors have to stay secret. I don't want to prove it to you in this way. Or what if, and this is an example I took, my proof was super long. And we have this Wikipedia article of list of long mathematical proofs.
Yeah, and some of these are like 500 pages, 200 pages, 250 pages. And this was one of the motivating examples actually for what we call proof systems. So can we do better than sending the witness in full? And the answer is yes, otherwise, I would be not doing this. One way of doing it is having interactive proofs. 17:45: And you can think of this as a detective, the verifier is the detective and the prover is a suspect. And so the verifier is gonna ask questions. Like, where were you on the night of the 15th? Can anyone corroborate your alibi? What color is your car? I don't know. And the prover is gonna answer these questions one by one. And at the end, the verifier is just gonna go through the answers and make sure that the prover kept her story straight. 18:07: Anna Rose Yeah. But in this case, like the thing that's going back and forth is it's still... Is the witness going back and forth? 18:15: Nico Mohnblatt: No, it could be something else. So you know when, how we were talking about like 200 page long proofs? 18:21: Anna Rose Yeah. 18:22: Nico Mohnblatt: Right. I can ask you, oh, have you read X and Y paper that was related to the problem you're trying to solve? Yes. Okay. Do you know how to solve this problem? Yes. Are you aware of this theorem? Yes. Okay. With these things, I am pretty sure that your reasoning is fine. But I don't need to read through 200 pages of proof. 18:43: Anna Rose I see. So you're... But you're using, like, you're extracting something then from the predicate in a way, right? Or are you extracting something just from the witness? 18:54: Nico Mohnblatt: Kind of, yeah, from the witness. 18:56: Anna Rose From the witness, okay. 18:58: Nico Mohnblatt: Yeah. That is if my interactive proof is not zero knowledge. That's the beauty of zero knowledge proof is we can have these interactive discussions where I'm convincing you that I know a witness or that a witness exists. But I never say anything remotely linked to the witness. That's what a zero knowledge proof is, is we have this interaction, we can even do it non interactively, but nothing I say reveals even an ounce of information about the witness. 19:25: Anna Rose In the case... So I'm gonna throw back to a very old example for ZK is like the Where's Waldo example. Like what are the different pieces of that one? 19:35: Nico Mohnblatt: Yeah, so... 19:37: Anna Rose It's interactive because there's like the person who's proving it to someone and there's a person who's looking at the proof. 19:43: Nico Mohnblatt: I think it's a non-interactive proof, but you have a setup. So the setup is, we have the book with the page that we're trying to solve. And I know that you have nothing else, like you don't hide little Waldos in your pocket or whatever. That's our setup. And we also have a big sheet with a hole. That's our setup. And then you, as a prover, the only thing you do is you get behind that sheet and you place the book such that Waldo appears and then I, as a verifier can look at that proof. So I can look at Waldo through the little peeping hole. 20:16: Anna Rose So you just clarified though that's not interactive? 20:20: Nico Mohnblatt: Yeah. I think it's not interactive because there aren't back and forths going on. 20:24: Anna Rose Ah yeah, okay. So in my... The way I understood it was like an interaction between two parties but you're saying like you actually, if the interactive proof has to have that like back and forth and back and forth, in kind of extra information for it to be called interactive. 
20:38: Nico Mohnblatt: Yeah. In the same way if you're speaking to someone but they never answer you, there are two parties involved, but it's kind of a non-interactive conversation. 20:50: Anna Rose I guess yeah, what you're saying here is like the proving is just going one direction. 20:55: Nico Mohnblatt: Yeah. 20:55: Anna Rose Like the person verifying never sends anything back. 20:58: Nico Mohnblatt: Exactly. We say interactive when the verifier sends something back and the prover has to send something again. 21:03: Anna Rose Got it. Okay, okay. Makes sense. 21:06: Nico Mohnblatt: Yeah. So that was one example. The next thing about interactive proofs, and what we do often is like, we start with a problem, a big problem, the prover sends something, the verifier asks the question, and at the end of that, we now have a similar problem, but it's a bit smaller. And we can keep making it smaller and smaller and smaller until it's easy to solve. So that's sort of logically why interaction is powerful because we can get to smaller and smaller problems every time. The other way of sort of avoiding long proofs and lots of time to verify them is to have what we call probabilistic proofs. So this is a different approach and this is a kind of a, all right, let's relax our requirements a bit. We no longer need to be 100% sure that the proof is correct. Like I am willing to sometimes accept a proof that is wrong. So what I can do is either make sure that this probability is very small, or if it's too big, we can repeat the proof a bunch of times. 22:11: Anna Rose In the case then of a probabilistic proof, is your proof close to correct? Or is it... Do you know what I mean? Like doesn't it sort of mean that the proof itself is slightly off? 22:24: Nico Mohnblatt: It doesn't have to be. It's more that... It just means once in a while I will read a wrong proof, a completely wrong proof and think that it's completely right. It's not just a closeness thing. It's really like sometimes the verifier gets unlucky. 22:43: Anna Rose All right. 22:44: Nico Mohnblatt: And yeah, the nice thing about doing this is that we don't have to read the whole proof, right? I get to just read part of your proof and I then decide whether it's fine or not. 22:54: Anna Rose And I guess, is there sort of a threshold of probability that you accept? Or is it like, you sort of said it's okay to accept a wrong proof once in a while, but is that gonna give you a percentage wrongness or a percentage rightness somehow that you're aiming for? 23:09: Nico Mohnblatt: Exactly. Yeah, so some of the very old proofs used to have like a 50-50 chance. But you would do it a bunch of times. Like we'd repeat it like 20 times. So for a cheating prover to win this 50-50 20 times is extremely unlikely. 23:29: Anna Rose I think this is the billiard ball example. So this is the one where you have the two balls, you put them behind your back and you're supposed to like, it's something like you're proving... I forget like one of them's blind I think. I'm gonna do... I'm gonna butcher this but it's like... 23:43: Nico Mohnblatt: There is... 23:44: Anna Rose It's always like... Okay, here's what it is, you're the blind friend, you put... You're like, you have a red ball and a blue ball in your hands but you can't see them, but you're asking your friend in front of you to prove... To tell you if they're different, that's the whole thing. 
So you put them behind your back, and you pull out one, and then you put it back behind your back and you switch it, you pull out another and the person is telling you, it's red, it's blue, it's red, it's blue, it's blue, it's red. 24:12: Nico Mohnblatt: Yeah. Okay, so they would only be able to keep track of the ball. 24:15: Anna Rose Yeah, you do it enough times so that you know that there are different colors because your friend could never have predicted how many times... Like when you switched it or didn't switch it behind your back. Yeah. I think that's a probabilistic proof, right? 24:29: Nico Mohnblatt: Yeah, absolutely. 24:30: Anna Rose Yeah. 24:31: Nico Mohnblatt: Yeah, yeah. 24:32: Anna Rose I think that, that example taught me more about probabilistic proof than it did about zero knowledge, to be honest. Because it's a weird setup with the blind friend and the yeah. Anyway… 24:44: Nico Mohnblatt: Yeah. So the other nice thing is we can combine both, right? Like if you recall interactive proofs and probabilistic proofs, there is nothing that's excluding one, like excluding them from working together. And so what we can do is interactive proofs where at every round of the proof, the prover says something and the verifier only reads a small part of what the prover said and we repeat that. And that's what we call interactive oracle proofs, IOPs. Which is a very powerful tool and most of our proof systems today are built on IOPs. 25:23: Anna Rose So you just mentioned here IOP, but we often hear polynomial IOP. 25:29: Nico Mohnblatt: Yeah. 25:30: Anna Rose Should we maybe talk a little bit about polynomials? 25:34: Nico Mohnblatt: Yeah, absolutely. A polynomial IOP is even a stricter requirement. So now we're saying, all right, it is a conversation, interactive, prover says stuff, verifier only reads some of the things that a prover has sent. But now we are restricting the prover even further to only be able to send polynomials. So the prover sends polynomials, then the verifier gets to evaluate the polynomials at points where it likes. 26:02: Anna Rose So this is sort of like in that back and forth part that interaction, like the only format that will be accepted is this polynomial format. 26:10: Nico Mohnblatt: Yeah. 26:12: Anna Rose Okay. 26:12: Nico Mohnblatt: Exactly. 26:12: Anna Rose You're still probabilistic, you're still using an interactive setup, but it's just like the form of what's going back and forth is going to be polynomial. 26:21: Nico Mohnblatt: Yeah. So why are we probabilistic? It's because the verifier is not going to look at the polynomials completely. The verifier is just going to evaluate the polynomial in maybe one, two or three points, like a few points. 26:33: Anna Rose Cool. 26:33: Nico Mohnblatt: Why do we care about polynomials? Why do we like them? It comes from this sort of fundamental theorem of algebra, I guess, which is if two polynomials of the same degree are just slightly different, like even one coefficient different, and you evaluate them at a random point, their evaluation is extremely unlikely to be the same. So again, extremely unlikely. We get this probabilistic element. And so we have this nice test. Essentially, if you give me two polynomials to check that they are the same, I don't need to read the polynomials in full. I can just evaluate them at a random point, see if that matches, and then I'll be like, sure, there's extreme confidence that these are the same polynomial. 27:25: Anna Rose Cool. 
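As an aside, the random-evaluation check Nico describes is easy to see in a few lines of Python. This is an illustrative sketch, not code from the episode; the polynomials and the prime are made up:

```python
import random

P = 2**31 - 1  # a large prime, standing in for the finite field we evaluate over

def evaluate(coeffs, x, p=P):
    """Evaluate a polynomial (coefficients listed from the constant term up) at x, mod p."""
    acc = 0
    for c in reversed(coeffs):  # Horner's rule
        acc = (acc * x + c) % p
    return acc

f = [3, 0, 5, 1]  # f(X) = 3 + 5X^2 + X^3
g = [3, 1, 5, 1]  # g(X) = 3 + X + 5X^2 + X^3, differs from f in one coefficient

r = random.randrange(P)
# Two distinct degree-3 polynomials agree on at most 3 points out of roughly 2^31,
# so this comparison almost certainly prints False.
print(evaluate(f, r) == evaluate(g, r))
```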
27:26: Nico Mohnblatt: So that's why we like them so much. 27:28: Anna Rose We keep talking about proofs here, but there is another kind of topic, which is like arguments. And I think we've defined proofs pretty well, but like what's an argument and are they different? How are they different? 27:41: Nico Mohnblatt: Yeah, yeah. The famous SNARK. What's the ARK in there? 27:45: Anna Rose Oh yeah. 27:47: Nico Mohnblatt: So proofs versus arguments. And this is kind of, it comes down to the security properties. So you know how I said there's this probabilistic element or even with a proof like a classic non-interactive, non-probabilistic proof. Like can you fool the verifier? Can you send something where the verifier is going to be like, yeah, I believe your proof is correct, but your proof wasn't. And so this is what proof versus argument comes down to is who are we protected against? So in the case of a proof, the adversary has unlimited computational power. So I'm protected against everyone and anyone. 28:29: Anna Rose So a proof is like more... It's like very... Like if it's correct, this adversary could throw anything they want at it and there's no way for them to break it or... 28:40: Nico Mohnblatt: So the other way around, the adversary can throw as much power as they want. 28:44: Anna Rose Okay, got it. 28:45: Nico Mohnblatt: They will never be able to prove something that was not true. 28:48: Anna Rose Yeah. 28:49: Nico Mohnblatt: The only way they could is if we're doing a probabilistic proof and the verifier got unlucky. 28:54: Anna Rose Or if you did a probabilistic proof, maybe where you like allow for ver... Like you only need 60% guarantee or something. 29:01: Nico Mohnblatt: Yeah. But then we'd repeat that many, many times. Yeah, no, exactly. So that's what we're saying. In a proof, the only way to fool a verifier is by getting lucky essentially, is by having the probabilistic aspect of the proof fail. And that's why we also talk about statistical soundness. The only way to beat this is with statistics. 29:24: Anna Rose Like, are there standard probabilistic thresholds that we're using a lot or that like... Yeah, like, what is it? 29:33: Nico Mohnblatt: So in the literature, they say one third. 29:36: Anna Rose Wait, one third wrong. 29:37: Nico Mohnblatt: One third wrong. 29:38: Anna Rose Okay, wow, that's higher than I'd expect. 29:41: Nico Mohnblatt: Because with one third, you can repeat many, many times until it gets to what we call something negligible. 29:49: Anna Rose Oh, because it would keep getting smaller because you're one third, one third, one third. Okay. 29:53: Nico Mohnblatt: In practice, we like negligible probabilities. That is to say, if we have some baseline security parameter that we start with, the probability of you cheating is exponentially small. So one over something exponential. So this probability very quickly becomes tiny, tiny, tiny. And this is what allows us to tailor our proof systems to different computational environments? Let's say, so today we have a certain computational power, but in 50 years we're going to have a lot more. Are our proof systems going to be broken then? Well, no, because the probability is a function of how much power the verifier has, or how much power the setup has. So in 50 years we'll have much more power, but because these much more powerful verifiers will be out there, the proofs will have to do a lot more as well. Like the cheating proofs will have to do a lot more. 30:56: Anna Rose Got it. 
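The "repeat until negligible" point is just arithmetic, as this small sketch shows (the round counts are made up, chosen only to echo the 50-50 and one-third figures from the conversation):

```python
# Probability that a cheating prover survives k independent repetitions
# when a single round lets them cheat with probability err.
def cheat_probability(err: float, k: int) -> float:
    return err ** k

print(cheat_probability(1/2, 20))   # ~9.5e-07: the 50-50 billiard-ball game repeated 20 times
print(cheat_probability(1/3, 80))   # ~6.8e-39: a one-third soundness error repeated 80 times
```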
30:55: Nico Mohnblatt: So that's what we mean by computational power. It's not like pure instant power, like, oh, if I have enough GPUs, I'll break the proof. It's more like we allow or don't allow asymmetries in the cheating prover and the verifier. 31:12: Anna Rose In the case of the proof, you don't allow for asymmetry, right? 31:16: Nico Mohnblatt: We do. We do. 31:17: Anna Rose You do allow for asymmetry. 31:18: Nico Mohnblatt: You say unlimited computational power. So no matter how much power you have, you won't get anything better than the statistical probability of our probabilistic proof. 31:34: Anna Rose Okay, now I want to hear what an argument is, though. I think we need to hear the difference to better understand that. 31:40: Nico Mohnblatt: Yeah, absolutely. With an argument, it's different. We say that the cheating prover, the adversary is computationally bounded. So it means there is no huge asymmetry between the prover and the verifier. There can be to some degree an asymmetry. Like if I'm running a setup or I'm the verifier and the adversary is, I don't know, some huge government agency, not gonna name names, but imagine, right? Even then, we think that it should be fine because they are not exponentially stronger than I am. They're only like some polynomial stronger than I am. 32:20: Anna Rose I want to go back to that example though of the verifier and the adversary kind of like as they both get more strong, they're equally more strong. Whereas in the argument, it's bounded. So if the verifier gets stronger, does the adversary stay less strong? 32:36: Nico Mohnblatt: Sorry. Maybe I got things confused a bit. With the argument, we say the adversary cannot be incredibly more powerful than the verifier. It can be more powerful, but not exponentially more. With the proof, we say it can be. 32:55: Anna Rose So in this case, it's the comparison to the verifier itself. 32:59: Nico Mohnblatt: Yeah, exactly. Or to the setup. 33:02: Anna Rose Okay. But I still... Okay, so going back, so the proof. The proof you have the power of the verifier and then you have this unlimited power of the adversary, but you're saying it's so strong that it doesn't matter. 33:15: Nico Mohnblatt: Exactly. 33:16: Anna Rose It'll still be correct because it's so good. 33:19: Nico Mohnblatt: The system is so strong that even though the adversary is also infinitely powerful, it can't beat it. 33:27: Anna Rose But in the case of an argument, you're saying actually it's only as... It's only strong as long as there's this bounded computational power. 33:35: Nico Mohnblatt: Exactly. 33:36: Anna Rose It's not allowing for just keep running it as much as you want. Here it's like, actually, we're only going to let you run it three times, but this argument will stand, if you only run it three times or something. I know it's not maybe rounds, but as an example. 33:49: Nico Mohnblatt: No, no, but yeah, it makes sense. 33:50: Anna Rose Okay. 33:51: Nico Mohnblatt: I'll give you an example. If you really wanted to break like a STARK, for example, you would need we've estimated somewhere around like 2 to the 90 or 2 to the 100 hashes. Like that is the number of hashes you need to do. And that is more than, I guess, the number of atoms in the universe or something. So we've assumed that that kind of power is out of range of humanity today. But maybe in 20 years, it won't be. But our proof systems are such... 
Sorry, our arguments are such that we can tweak them and we can now require, all right, you're going to have to do 200 hashes... 2 to the 200 hashes. And for that period of time, it will be just beyond what's reachable. 34:43: Anna Rose But then in this case, a STARK... I mean, I guess it is, it's an argument. 34:47: Nico Mohnblatt: Yeah. Yeah, yeah. 34:49: Anna Rose Which is the AR. Yeah. Okay. 34:51: Nico Mohnblatt: So the K is actually of knowledge, and we can talk about that as well. Actually, I'll show you this little chart that we made for the live session yesterday. This is like a 2 by 2 matrix. So the bottom axis is whether your adversary is bounded or not. So bottom left, I have a computational adversary. So one that is bounded in its computation, and bottom right, I have a statistical adversary, one that is unbounded. And the property, the security property that we want to uphold is to say if a verifier accepts a proof, then the witness must have existed. This is what we call soundness. So I'll say it again. If the proof is accepted, then the witness existed. And so that is where arguments and proofs live. So arguments, ARG, not ARK, ARG, gives you computational soundness. So soundness against an adversary that has limited power. The proof gives you statistical soundness, or people also like to call it information theoretic soundness. And why statistical? Because the best way to beat it is with the statistics of the probabilistic proof. 36:03: Anna Rose The soundness here though, just going back to it, so the witness exists, but you don't know what it is necessarily, I guess? 36:10: Nico Mohnblatt: No, it just exists. 36:11: Anna Rose It just exists in the world. There is a solution, kind of. There is a proof. Okay. 36:16: Nico Mohnblatt: Coming back to our statements from the very beginning, if I give you a 50-digit number that ends with a 2 and I ask you, can this be factored? Well, you can very quickly answer yes. 36:26: Anna Rose Yes, but I won't know. 36:27: Nico Mohnblatt: Exactly. 36:28: Anna Rose Yeah, I won't know what it is. 36:30: Nico Mohnblatt: Yeah. So in this case, if our proof system or argument system is sound, you will be able to produce a valid proof or a valid argument. If I wanted you to really know the factorization of this big 50-digit number that ends with a 2, I would need what we call knowledge soundness. So it's a stronger property. And so this is now in my 2x2 quadrants. Bottom, we had computational, statistical. On the left, we have soundness. And as we go up, knowledge soundness. So knowledge soundness says not only does the witness exist, the prover also knows it. And so that's where the ARK happens. ARK means argument of knowledge. So argument: computational, bounded adversary; of knowledge: knowledge soundness. And we also have proof of knowledge, which is statistical, unbounded adversary and knowledge soundness. Anna Rose So the NARK in SNARK is in the knowledge soundness computational realm? 37:27: Nico Mohnblatt: Exactly. 37:27: Anna Rose So in a SNARK, the prover always knows the correct answer? The witness. 37:32: Nico Mohnblatt: Yeah, in a SNARK, yeah, absolutely. 37:34: Anna Rose Okay. In a STARK, is it also argument of knowledge at the end of it? 37:39: Nico Mohnblatt: Yeah. 37:40: Anna Rose Okay. So it also falls in that quadrant. 37:43: Nico Mohnblatt: So SNARK, the S is succinct and STARK, it's scalable. But then the ARK is the same, it's argument of knowledge. 37:49: Anna Rose Got it.
In previous presentations and stuff, we have sometimes heard people refer to like SNARGs with a G. Are there systems that we know today like ZK SNARG systems or is it like we kind of put them under the ZK SNARK umbrella, but they're actually soundness and computational or it's everything that we're working on in that knowledge soundness computational realm? 38:12: Nico Mohnblatt: Yeah, we are usually in the knowledge soundness computational realm. It would actually be quite dangerous in a way to have just a SNARG because it really provides you with a foot gun. Because sometimes statements do need knowledge soundness. So if I'm asking you to prove that you know the private key for a wallet, I need that knowledge. I need you to know it because every wallet has a private key. So the fact that the private key exists doesn't mean anything to me. 38:43: Anna Rose True, true. 38:44: Nico Mohnblatt: So if you give me a SNARG that there is a private key for a wallet, that doesn't mean anything. So yeah, we work pretty much exclusively, at least in the Web3 space exclusively with SNARKs or STARKs or NARKs. 39:00: Anna Rose In this knowledge soundness realm. But then, okay, just going back, if you look at the definition of zero knowledge proof, you have the completeness, soundness and zero knowledge. This soundness that we have is the equivalent, right? It's the same. It's that soundness. But how come a ZKP can just have soundness, but you're saying like everything we have to do has to be in this knowledge soundness place? 39:23: Nico Mohnblatt: So it comes from... And this is why I like to show this chart or this is why I'm happy I made this chart. It's because we do abuse language a bit. And often, we talk about SNARK proofs. Like what do I mean by SNARK proof? Like SNARK already means argument of knowledge. So it's just we sort of abuse language and we say proof for anything that we send in this way. We say ZK proof, we omit the of knowledge even though it's important, like it's a bit tricky. But it makes sense that we can't speak in paper English all the time. We have to use common English to express ourselves. 40:05: Anna Rose Does the knowledge soundness only exist for the SNARK world? Like with those three that I listed, completeness, soundness and zero knowledge, that's in that fundamental definition of zero knowledge. Zero knowledge proof, yeah. So then where kind of does the knowledge soundness term come from? 40:23: Nico Mohnblatt: I'm not entirely sure. I think it came from this need of proving statements where sometimes the witness always exists. And what's important is whether or not the prover knew it. 40:33: Anna Rose Knows that. Okay. All right. So it's from this world then probably it's from the SNARK world. 40:38: Nico Mohnblatt: Yes, absolutely. 40:39: Anna Rose Okay. Okay. 40:39: Nico Mohnblatt: Yeah. And there's this funny sort of thing like how can I be zero knowledge, but also knowledge sound. Like how can I make a proof that reveals nothing about the witness, but yet convinces you that I know it. Like there's this crazy duality. 40:54: Anna Rose Yeah. What is a proof system? So far we've talked about a proof, but not the thing that we often, like when you hear about Plonk, it's a proof system. Or maybe it's not a proof actually, maybe it's an argument of knowledge system. 41:10: Nico Mohnblatt: Yeah, technically Plonk is an argument system. And what do we mean by system? 
We just mean... argument or proof we like to use to refer to the actual proof, the thing that can be verified, whereas the whole system is: what is the algorithm for proving? What is the algorithm for verifying? What kind of setup do I need? That together forms the system. 41:31: Anna Rose Okay. So I sort of mentioned this earlier, the polynomial IOP. There's kind of a bigger question there, which is like, we're looking at this proof system or argument system, but then we hear about polynomials, polynomial commitments, polynomial IOPs. Maybe we can talk a little bit about what pieces go into these systems. 41:57: Nico Mohnblatt: Actually, while we're still on this chart, I do have IOP and polynomial IOP in here. They are in the statistical realm. So they're in the realm where the prover has unlimited power. And we can design them to either have soundness or knowledge soundness. And you'll see that as we said, we also have SNARKs and STARKs, but maybe people here already know that you can from an IOP get a SNARK, from a polynomial IOP get a SNARK. So how does that happen? Why do we have this thing that is statistical and then move into the computational land? 42:31: Anna Rose Yeah, that's interesting. 42:32: Nico Mohnblatt: Yeah. So that is sort of this little... Wait, let me bring up the whiteboard, this little expression that we see all the time. Like PIOP plus polynomial commitment scheme, I'll abbreviate this as PCS, equals a SNARK. And maybe people have seen this on slides with a nice little blender. They show you that you can blend both and get a SNARK. Those are Dan Boneh's. That's his stock slide for the polynomial IOP and the polynomial commitment scheme. Anna Rose Nice. 43:04: Nico Mohnblatt: So we've already talked about why we like polynomials. So hopefully that gave you motivation for why we have polynomial IOPs. But then the question is, how do we make this real? Because we said, all right, the prover sends a polynomial. What does that mean? And the verifier doesn't look at it, just evaluates it at one point. How do we do that? So in theory, we have what we call these oracles. So all right, I send you an oracle, and an oracle is kind of like a little black box, and you can ask it questions like, hey, evaluate what's inside the black box at five. And it's going to give me an answer. But these things don't exist in real life. So how do we mimic these oracles? We commit to polynomials. So a polynomial commitment scheme is something where you can commit and it gives you a commitment. And then with the commitment... So you can also evaluate, so it gives you an evaluation. And then you can sort of verify. So the verifier can say, all right, I want to verify that this commitment, I'll just write COM for commitment, evaluated at the point that I asked for my evaluation, and the evaluation that the prover gave me, I want to verify whether this is true or false. 44:45: Anna Rose Yeah. Something you just said though. So the oracle is sort of this unknown. It's like we're mimicking it. We're using the polynomial commitment scheme as a... 44:56: Nico Mohnblatt: As a wannabe oracle, as a make believe oracle. 44:59: Anna Rose Is it a make believe oracle or is it like a format? Something that you're running something through? 45:05: Nico Mohnblatt: I like to think of it as a make believe oracle. Like instead of sending you this black box that can magically evaluate the polynomial, I am sending you a commitment. And there's no magical evaluation.
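The commit / evaluate / verify flow being described can be written down as an interface. This is only the shape of the API, with names and types made up for illustration; it is not any particular library, and a real scheme fills in the cryptography behind each call:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Opening:
    point: int     # the evaluation point the verifier asked for
    value: int     # the claimed evaluation of the committed polynomial at that point
    proof: bytes   # scheme-specific evidence, e.g. Merkle paths in FRI-based constructions

class PolynomialCommitmentScheme(Protocol):
    def commit(self, coeffs: list[int]) -> bytes:
        """Prover: bind to a polynomial, producing a short commitment (the 'make-believe oracle')."""
        ...

    def open(self, coeffs: list[int], point: int) -> Opening:
        """Prover: evaluate the committed polynomial at the verifier's point, with evidence for that evaluation."""
        ...

    def verify(self, commitment: bytes, opening: Opening) -> bool:
        """Verifier: accept only if the commitment, point and claimed evaluation are consistent."""
        ...
```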
You have to ask me for the evaluation, I'll give it to you, and then from the commitment, the evaluation, you can check whether it's true or false. 45:26: Anna Rose This is interesting to me because like the O being connected to the polynomial commitment scheme, this is kind of the first time I'm putting that together, that's what that's trying to do. I've known those two parts. I mean, we learned about that in the whiteboard session, but I don't know if... In the ZK whiteboard sessions, I should say, I don't know that I ever really understood how the two of them work together. I feel like I just learned them independently. 45:49: Nico Mohnblatt: Well, I'm glad this helps. But yes, this is why we have these commitment schemes is to replace this oracle. The thing is, we know how to make efficient commitment schemes against bounded adversaries. And so this is why when we combine a polynomial IOP with a PCS that only works for bounded adversaries, we get something that is an argument. 46:17: Anna Rose I see. Okay. Because yeah, in that sort of quadrant image that you had, the polynomial IOPs, they live in the proof category. But when you use them in a SNARK, you're moving them then into the argument or computational category. 46:33: Nico Mohnblatt: Absolutely. 46:34: Anna Rose Are they like a little less strong then, because you're moving them that way? 46:38: Nico Mohnblatt: Yeah, yeah, yeah. Definitely. We now have... We can't use these against adversaries that have unlimited power. The other thing I want to point out, we use a polynomial commitment scheme to replace the O in PIOP, the oracle. And that's what gives us the argument part of the SNARK. But how do we get the knowledge part? And this comes from another property of the polynomial commitment scheme, which is, is it extractable? From a commitment, is it possible to extract the polynomial if we had access to whoever committed? 47:14: Anna Rose Like this is this knowledge, what was it? Soundness. It's like that you could actually know the answer, not just know that there is an answer. 47:24: Nico Mohnblatt: Yes. And it comes down to the fact like, all right, if you, Anna, have produced a commitment to polynomial, is it just because the polynomial existed or is it because you really knew the polynomial? And that's this extractability property. 47:36: Anna Rose In the case of a SNARK, you do need to know it, right? So you do like, yeah. 47:43: Nico Mohnblatt: Yeah, absolutely. So yeah, hopefully that kind of explains. 47:45: Anna Rose Cool. Okay, one follow up question here, though. The polynomial in PIOP and PCS, are all of the ways that we're making SNARKs, are they actually always using polynomials? Because I understood FRI as not being like, really in that category that you couldn't put it under polynomial commitment scheme, and yet it is often used in that exact role in SNARK-like constructions or STARKs. Or maybe it's not. Wait, actually one quick clarification. In STARKs, is FRI the PCS? 48:26: Nico Mohnblatt: So that's the funny thing. At first it wasn't. And that's not how it was described, and then we slowly introduced new techniques and we realized, ah, this is actually also a polynomial IOP plus some kind of commitment scheme. And I'm sort of going to explain a bit more, but the way you constructed a STARK in the original like STARK paper, you don't send polynomial, you send code, code words from error-correcting codes. So this is sort of another branch of mathematics. 
You can use polynomials as error-correcting codes, but there are other codes. So the prover would be sending code words, so P sends code words, and V would send challenges, and P would send more code words. And at the end, you have this protocol called ALI to sort of check that these code words were somehow meaningful with respect to the problem that we had, sort of how we would use our polynomial IOP. And then, so this was part one. 49:36: And part two is they used FRI. And FRI is Fast Reed-Solomon Interactive Oracle Proof of Proximity. So the Reed-Solomon is these code words. These are actually RS code words, Reed-Solomon code words, and FRI was just here to check that, did the prover actually send Reed-Solomon code words? So in the same way with our polynomial IOP, we had this polynomial protocol where the prover sends polynomials and we then define stuff. And then we check, did the prover actually send polynomials? With the original STARK paper, we would send these RS code words, and then we would check that the prover actually sent code words. But then what happened in, I think it's 2018, 2019, you have this DEEP technique, and this DEEP-ALI protocol, where DEEP stands for Domain Extension for the Elimination of Pretenders. It's one of those fun acronyms. And what that meant was, all right, we have these RS code words, and these are actually, they're kind of the same as polynomials. They are polynomials, but evaluated over a specific number of points. And with DEEP, what you could do is sort of extract the polynomial from this code word and evaluate it somewhere else. 50:54: So this starts to sound again, like a polynomial IOP. I sent something that allows you to evaluate the polynomial. And then how do I check that you did send code words? Well, we use FRI. So this is where the STARK, as soon as you introduce this DEEP technique and this DEEP-ALI protocol, starts actually behaving like a polynomial IOP. And as long as you have some way of extracting the underlying polynomial, you again have some way of compounding your PIOP into an argument of knowledge. 51:26: Anna Rose But why did we hear more about like DEEP-FRI? Is DEEP-FRI, DEEP-ALI plus FRI? 51:31: Nico Mohnblatt: So you could also say DEEP-ALI plus DEEP-FRI. You can apply the same technique to FRI, but it turned out that it didn't give you that much gain. You did a lot more work, but didn't get much more security. So we scratched the DEEP on the DEEP-FRI side. But for the DEEP-ALI part, it gave you a lot more security. 51:49: Anna Rose So in this case, though, are you saying that, like, on the STARK front, the research showed that if you made it DEEP-ALI, then FRI was actually able to work together with it. And that's why people then said, oh, maybe we can also start incorporating them into the original SNARK constructions. Like, why does FRI move into the PCS side of things? Because you're saying DEEP-ALI is like the PIOP. 52:13: Nico Mohnblatt: There's a small link between DEEP-ALI and FRI, which is that we no longer apply FRI directly to the prover's messages. We apply it to something computed from the prover's messages. So where previously, with ALI and FRI, the verifier would say, like a probabilistic proof: with every message you sent me, I want the fifth little element in that message, for all the proofs you sent me. That would have been classic FRI. Now with DEEP-ALI and FRI, the verifier says, I want the fifth element, but what I'm going to do is I'm going to add something to it and then divide something.
And now only then am I going to run FRI. And so it's this extra bit of work that we realized, ah, if we incorporate that extra bit of work with FRI, it is now a polynomial commitment scheme. 53:02: Anna Rose I see. Okay. And that's that in between, because like all that got ported back to SNARKs is the FRI part, right? Like DEEP-ALI is not in Plonk, right? 53:12: Nico Mohnblatt: No, no, no. But DEEP-ALI made us realize that, hey, these Reed-Solomon-based protocols aren't so different from PIOPs. 53:22: Anna Rose Okay. Yeah. Interesting. It's where the pattern matching happened, I guess, in a way. Here FRI is being used with this addition to almost emulate... Can I say it's emulating a polynomial commitment scheme, or is it a polynomial commitment scheme? 53:40: Nico Mohnblatt: So the other subtlety why people refrain from saying FRI is a polynomial commitment scheme is because like we said, FRI, the I in FRI stands for IOPP. So IOP is the same as before, interactive oracle proof, but the P is proximity. And it turns out that if we are very close to a polynomial, then there is only one. But if we're somewhat far, but not too far, then there are multiple polynomials that I'm close to. Does that make sense? 54:12: Anna Rose No. 54:14: Nico Mohnblatt: Okay. 54:14: Anna Rose What's wrong with multiple or is it good to have multiple? 54:17: Nico Mohnblatt: Well, there's nothing wrong with it actually and that's what we realized. As long as the list of multiple polynomials is small enough that you can read through them and understand which one was the actual witness, then we're good. 54:29: Anna Rose Okay. So you want a low number and the problem is the further you get away there's a lot, but if you keep the proximity low then you can use it in that context. 54:38: Nico Mohnblatt: Exactly. And that's where the trade-off comes in. To keep the proximity low, the verifier needs to do more work. 54:44: Anna Rose Cool. By the way, now I'm really curious, what is the word FRI then? 54:47: Nico Mohnblatt: FRI is an acronym for an acronym. 54:49: Anna Rose I know it's an acronym, but it's like, it should actually be like FRIIOPP. 54:58: Nico Mohnblatt: No, just FRIOPP. Like the I in FRI is like the shortened version of IOPP. 55:06: Anna Rose That's amazing. So yeah, FRIOPP. But I think FRI is better. 55:10: Nico Mohnblatt: But yeah, just one thing. When you said, what's the problem of having a list? When we talk about a polynomial commitment scheme, I want to make sure that you only committed to one polynomial. I want to make sure that you're bound to a single polynomial. Whereas actually what we need to make this knowledge sound proof is I don't care whether it was just one or multiple polynomials, as long as I can recover which polynomial was the witness. And that's the small subtlety of why people say, yes, you can use FRI as a polynomial commitment scheme, but actually in these FRI-based argument systems we don't, we are a bit more relaxed. It's because we don't care about binding you to a single polynomial, but we do care about being able to extract the one that was a witness. This is very much in the weeds, by the way. 55:55: Anna Rose Yeah, we got more in the weeds than we meant to. I actually want to bring us a little bit out. We have another topic that we wanted to cover in this particular ZK Jargon session, and that's all about circuits. So, so far we've been talking about mathy polynomial commitments and like all of this, yeah, polynomials generally. And then, we hear about circuits. 
I think it would be kind of interesting to try to understand like, how do those things connect? And what is a circuit? 56:24: Nico Mohnblatt: So people like to have this divide of frontend and backend of a SNARK. So backend is everything we've been talking about so far, like, what is your polynomial IOP? What is your polynomial commitment scheme? And then frontend is how do I go from a real world problem to something, some polynomial, something that I can prove with my backend. And so if we were to draw this out, we'd have real world problem, and somehow we convert that into a circuit. And we'll see in a second what that is and why we do that and why it is that we can. And then from circuit, we go to something. So I put a question mark there. And this something we decide. And then this something we feed into backend. And now we can make our proof. 57:11: Anna Rose Real world problem to circuit to something to backend. Okay. 57:15: Nico Mohnblatt: Yeah. And that something, we'll see in a second, you get to choose, kind of. What is a circuit? I'll bring up actually the entry that I have in the decoder. Circuit looks like this. Yeah, it's like a little collection of additions and multiplications, they have an order and they have wires going into them and out of them. So in this example, I wrote a little relation up here, y = 5·x0 + 3·x1 + x2. And then I write it as a circuit. So I have y equals... So I have y on one side, and on the other side I'll have some things flowing in, the x0s to x2. And you can see that inside my circuit I have these gates, addition and multiplication, and I have these hard-coded numbers, because sometimes that's useful. The values that these wires can take are usually elements of a finite field, and that's a whole other topic. In simple terms, these are numbers from 0 to some big prime number. And when we get to the big prime, we loop back to zero. 58:27: Anna Rose Like a fixed number. 58:28: Nico Mohnblatt: Exactly. So there's only a fixed number of things, or we can do it with small numbers as well. People who have studied maybe electrical engineering or computer science will be familiar with binary circuits. So circuits where the wires can only be zero or one. So here we allow zero up to some big prime. 58:46: Anna Rose Got it. 58:48: Nico Mohnblatt: And the question is like, why are we so obsessed with these? Like why this little diagram? This was something that bugged me for a while actually. 58:58: Anna Rose Why are we obsessed with these? That's great. 59:01: Nico Mohnblatt: But we are, right? Like circuit, circuit, circuit. 59:03: Anna Rose Yeah, yeah, yeah. 59:05: Nico Mohnblatt: And it's because we have this result from complexity theory, which is any provable statement. So any... if you recall like our very first slide, what is it to prove? It's these statements where solving it yourself is hard, but if you're given an answer, you can check it quickly. Any of these statements can be expressed as a circuit. So this is why, if I go back to my whiteboard, this link here is possible. It's because circuits are what we call NP complete. That is, any problem that's in NP, which is problems that accept a proof, can be expressed as a circuit. 59:48: Anna Rose Is the backend NP complete? 59:51: Nico Mohnblatt: Yes, absolutely. The backend actually can do much more than proving NP statements. Often we have backends that can prove stronger types of problems, but we don't even know how to deal with those.
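For the record, the little circuit from the decoder entry can also be written out as code. This is an illustrative sketch, not from the episode; the constants 5 and 3 and the inputs x0 to x2 come from the example, while the prime 97 is just a small stand-in for a real field:

```python
P = 97  # a small prime standing in for the "big prime" of a real finite field

def mul(a, b): return (a * b) % P   # multiplication gate
def add(a, b): return (a + b) % P   # addition gate

def circuit(x0, x1, x2):
    w1 = mul(5, x0)                 # hard-coded constant 5 feeding a multiplication gate
    w2 = mul(3, x1)                 # hard-coded constant 3 feeding a multiplication gate
    return add(add(w1, w2), x2)     # the wires w1, w2 and the input x2 meet at addition gates

print(circuit(2, 4, 7))  # (5*2 + 3*4 + 7) mod 97 = 29
```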
1:00:05: Anna Rose So the circuit is a little bit... 1:00:07: Nico Mohnblatt: It's a bit weaker than the backend. 1:00:08: Anna Rose Okay, okay. Interesting. 1:00:09: Nico Mohnblatt: Yeah, absolutely. And then circuits, we still have to go from these nice little diagrams of gates and numbers and wires into polynomials. And this is where you have a choice. And this is what people call the arithmetization. And this is one that I find a bit confusing because arithmetization can be understood as the process of going from real world problem to backend, but sometimes we also use it for this intermediate step between circuit and backend. 1:00:39: Anna Rose Interesting. 1:00:40: Nico Mohnblatt: And examples here, we have R1CS, AIR, Plonk, and we can talk about this. 1:00:49: Anna Rose That was exactly what I was about to ask about, the R1CS. When you show that circuit on the other page, I mean, when people have shown me what an R1CS circuit looks like, it looks like that. Kind of it has like the inputs, outputs, something happening inside. There's also a system called AIR, which you just listed, which doesn't look like that. Doesn't look like this circuit, but it's still a circuit. 1:01:18: Nico Mohnblatt: It is a translation of a circuit, yeah. A different one. 1:01:22: Anna Rose OK. And Plonk does look like that when you draw it, but it has something else, right? 1:01:28: Nico Mohnblatt: Yeah, so Plonk is also a translation of a circuit, but a different one. 1:01:33: Anna Rose I mean, the big difference, right, is the gates. 1:01:36: Nico Mohnblatt: Yeah. The mapping between a circuit and Plonk is a lot more direct. So Plonk is kind of like a table, where, let's say, our gates only have two inputs and one output. So left input, right input, and output. So with Plonk, you write left, right, out. You say, is it on or off? Or you say what value goes into it, like 5, 7, and 12. And I'm gonna say, you also get to choose what kind of gate it is. So here I'm gonna say it's an addition gate. Then I'm gonna have another gate that takes in 12, 10, and outputs 120, and I'm gonna say it's a multiplication gate. So Plonk is a table of gates, literally listed one by one, unordered. And then they have this thing that they call the copy constraints, where you can say, all right, this value 12 is actually the same as that value 12. Like my gates are connected. 1:02:29: Anna Rose I see. Okay. 1:02:30: Nico Mohnblatt: So Plonk is very much like... Translating from circuit to Plonk is easy. You take it gate by gate, and then you wire them together, and you're done. 1:02:37: Anna Rose Interesting. 1:02:39: Nico Mohnblatt: With AIR, you're not allowed to do these copy constraints, you're not allowed to wire things. So you have to be a bit more careful, but you also define circuits in this way. 1:02:48: Anna Rose Is Plonk R1CS? 1:02:51: Nico Mohnblatt: Plonk is not R1CS. R1CS is a different format. 1:02:55: Anna Rose Yeah, so in R1CS, can you not make those connections? 1:02:58: Nico Mohnblatt: No, R1CS works differently. R1CS kind of... So you have three matrices, A, B, and C. A represents all the left inputs to your circuit, B represents all the right inputs, and C represents all the outputs. And we say that you also have this... I call it the menu. It's a vector Z, which is literally a menu of all your variables. It is all the inputs. So if we go back to the circuit, that vector is going to be all the wires actually.
So we're going to have the variables, we're going to have these intermediate wires and we're also going to have the output. So you have this menu Z and I'll show you in a second why I call it a menu. And we say AZ times BZ equals CZ, so A applied to the menu, multiplied by B applied to the menu, equals C, the outputs, applied to the menu. And why do I call it the menu? Why do I keep doing this? 1:03:57: So if we look here, we can sort of write it out. We have these built-in numbers, 5 and 3. We have the X0, X1, X2, and we have the output Y. And we also have some intermediate wires. So I'm going to call them W1, W2. So if I go back to my whiteboard, I've now written out my menu. And what I can do is, with my matrix, sorry, I'm going to rewrite this on the side, by the way, this is a bit not super clean, because I'm kind of making it up as I go. But there is a great episode of the whiteboard series where Mary Maller does this. And she shows you how to use R1CS matrices. And it makes a lot more sense. So if you're familiar with how we multiply a matrix with a vector. 1:04:44: Anna Rose So in this case, the menu is the vector? 1:04:46: Nico Mohnblatt: Yeah, the menu is the vector and the matrix will be selecting things. 1:04:50: Anna Rose I see. 1:04:51: Nico Mohnblatt: So when we multiply a matrix by a vector, I do this times this plus this times this, like I move down in that way. So I work along the rows of the matrix and down the vector. And so if I say, well, I want the first variable, so I put a one on the 5 and zeros everywhere else, I have now selected just this first variable from my menu. 1:05:13: Anna Rose By putting in numbers here, are you kind of defining the shape of what the R1CS looks like? 1:05:19: Nico Mohnblatt: Yeah, I'm deciding, all right, for the first gate, which ones of my menu variables go into it? So, if this was my A matrix, this would be my first gate, left input. 1:05:33: Anna Rose What is the A though, again? The A is the input? 1:05:36: Nico Mohnblatt: Yeah, A is left inputs, B is right inputs, and C is out. 1:05:40: Anna Rose Okay, because like the R1CS will have lots of these circuits or these... Not circuits, but the input, output, input, output. So here you're sort of saying here are the different kinds of inputs we have at different points. Then for the first input, here's the types of inputs for the second input, but like, they're matching... I guess here, does it show us the ordering like the bottom to the top of one of those circuits that you just showed? 1:06:06: Nico Mohnblatt: So, recall how Plonk needed to say like this 12 is the same 12 that was here and there. R1CS doesn't need to do that because I am always picking the same part of the menu. If I want two gates to use the same intermediate value, so for example, in our circuit, here I have this output of this multiplication and the other output of the multiplication have to go to Y. So I could say this corresponds to what I called here W1, W2 and Y. So I am going to have, somewhere, a constraint whose A matrix involves W1 on the left and not W2, and my B matrix will involve W2 on the right and not W1. And they've been selected. 1:06:52: Anna Rose I see. So it's like they'll match up in a way. 1:06:56: Nico Mohnblatt: Exactly. 1:06:56: Anna Rose Because each one of those elements is represented in that graphic that you just showed, and if it has two inputs, then the A and the B will both show up kind of in the correct line. 1:07:07: Nico Mohnblatt: Exactly.
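A minimal sketch of the same circuit written as an R1CS, in the spirit of the "menu" description above. The ordering of the menu is an arbitrary choice, and a constant-1 entry has been added at the front (an assumption, not drawn on the whiteboard) so that the final addition can be phrased as a multiplication by 1; a real frontend may lay this out differently.

```python
# Minimal R1CS sketch for y = 5*x0 + 3*x1 + x2, checked as (A·z) * (B·z) == (C·z)
# entry by entry over the same toy prime field as before.
P = 97

def r1cs_satisfied(A, B, C, z):
    """Check (A.z) * (B.z) == (C.z) row by row, everything mod P."""
    def dot(row, vec):
        return sum(r * v for r, v in zip(row, vec)) % P
    return all(dot(a, z) * dot(b, z) % P == dot(c, z)
               for a, b, c in zip(A, B, C))

# Menu z (the "vector of all wires"): [ 1, 5, 3, x0, x1, x2, w1, w2, y ]
x0, x1, x2 = 2, 4, 7
w1, w2 = 5 * x0 % P, 3 * x1 % P
y = (w1 + w2 + x2) % P
z = [1, 5, 3, x0, x1, x2, w1, w2, y]

# One row per multiplication; the matrices just select entries from the menu.
A = [[0, 1, 0, 0, 0, 0, 0, 0, 0],   # left input of gate 1: the constant 5
     [0, 0, 1, 0, 0, 0, 0, 0, 0],   # left input of gate 2: the constant 3
     [0, 0, 0, 0, 0, 1, 1, 1, 0]]   # left input of gate 3: x2 + w1 + w2
B = [[0, 0, 0, 1, 0, 0, 0, 0, 0],   # right input of gate 1: x0
     [0, 0, 0, 0, 1, 0, 0, 0, 0],   # right input of gate 2: x1
     [1, 0, 0, 0, 0, 0, 0, 0, 0]]   # right input of gate 3: the constant 1
C = [[0, 0, 0, 0, 0, 0, 1, 0, 0],   # output of gate 1: w1
     [0, 0, 0, 0, 0, 0, 0, 1, 0],   # output of gate 2: w2
     [0, 0, 0, 0, 0, 0, 0, 0, 1]]   # output of gate 3: y

assert r1cs_satisfied(A, B, C, z)                           # honest witness passes
assert not r1cs_satisfied(A, B, C, z[:-1] + [(y + 1) % P])  # tampered output fails
```

Notice that the whole sum x2 + w1 + w2 fits inside a single row of A, while each row spends exactly one multiplication: this is the "addition is free, only multiplications cost a constraint" point that comes up next.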
1:07:08: Anna Rose Can I see that again, actually? Can we flip back to the circuit again? Okay, so the inputs and outputs, the X, Y, like... The X1 is an input in this case, the menu. And if you head back, do you also have each of the, like multiply? 1:07:26: Nico Mohnblatt: So what do you mean the multiply? 1:07:27: Anna Rose There's add and in the case of the circuit, there's actually the multiply. So in this case, do you have to multiply everything or do you have to add everything, or can you actually have variations on what you're doing? 1:07:39: Nico Mohnblatt: So this is where R1CS is a bit tricky as a mental model, or like if I were to print out the R1CS that is equivalent to the circuit, it would take me a bit of time to read through it, because I can add variables with just one matrix. I don't need this AZ times BZ equals CZ structure to get addition. I only need it for multiplication. Yeah, because like we said, in matrix-vector multiplication, we have a row of the matrix and the vector and we match elements one by one and we keep adding them. So we do like in this example, one times five, plus three times zero, plus zero times zero, et cetera. 1:08:24: Anna Rose So the addition side of it is happening within this, and then the multiplication is between them. 1:08:29: Nico Mohnblatt: Between them. Exactly. And this is why you'll hear things like in Groth16, like addition is free. We are getting into the weeds again, I'm sorry. 1:08:39: Anna Rose Yeah, exactly. And actually, I'm not able to follow it too closely, so maybe we should wrap up with just one last question and definition: constraints. So I think we've done good on the R1CS. I mean, it's so funny because like Plonk and AIR, AIR we just sort of like hand-waved, but R1CS... 1:08:59: Nico Mohnblatt: Yes, R1CS, I got into the weeds a bit, but it is a good segue into what is a constraint. 1:09:05: Anna Rose Okay, good. What is a constraint then? 1:09:08: Nico Mohnblatt: So maybe you've already sort of seen, but in Plonk we said every row of our table is a single gate and we say that's one constraint, and then we also have the copy constraints on top. So all these elements that I wired together, that's an extra constraint. So this two-row table with one copy constraint has a total of two plus one, three constraints. Great. With AIR, every row can apply multiple constraints. So that's also a thing. And with R1CS, it depends what proof system you're in. In some proof systems, what matters is how long your menu is. In other proof systems, what matters is how many non-zero elements you have in the A, B and C matrices. But usually when people say constraints, they will just count the rows of the matrices, which can be a bit misleading. 1:10:04: Anna Rose How are constraints actually depicted though? Because I've seen them depicted more like mathematical lines. Like they don't look like circuits. They don't look like matrices either, at least the way I've seen them. 1:10:15: Nico Mohnblatt: Yeah. So usually we're saying, here are a bunch of variables. Some of them are inputs, some of them are outputs, how do they relate to each other? In R1CS, the constraint is that we are only allowed to use something quadratic, or essentially we're only allowed to use one multiplication. That is one constraint. In Plonkish type of stuff, the constraint can be anything we want, because like we said, we can define the gates as we want them to be. So constraint is one of those terms that I am very hesitant about writing into the Decoder, because it is hard to decode. And it is...
Depending on what system you're in, it has a very different meaning and very different implications for efficiency. 1:11:02: Anna Rose Got it. Cool. Well, Nico, I think we've reached the end of our questions for this particular ZK Jargon Decoder session. It was our first time, I mean, this and yesterday were our first times trying this format. I think... I mean, I want to sort of throw it out to the audience. If you liked this and might want us to do another one of these, we're obviously still working out the format. But I feel like having some sort of like video or, I don't know, like presentation component, in addition to the actual ZK Jargon Decoder might be useful. I also do very much recommend that folks watch the ZK Whiteboard sessions, which we've mentioned a couple times through this, because I almost think watching the ZK Whiteboard sessions and listening to this episode, or potentially watching it on YouTube, would be something good to do together. We're kind of describing these things in slightly different ways. Some of the definitions are a bit more clear, I think. 1:12:01: Nico Mohnblatt: Exactly. I was gonna say it requires a lot of patience to sit through things that you think you know, and that you probably do know. But sometimes the person speaking will have like a small bit of insight or a different way of conceptualizing the same thing. And suddenly it might shine a different light and you might understand something differently. At least I know it's been useful for me. I've learned a lot from re-learning material I knew, but it's so hard to get into the mental space of like, all right, let me sit through what a polynomial IOP is for the 50th time. 1:12:34: Anna Rose Also, actually, Nico, if people want to contribute or like... Yeah, like you're doing a lot of this on your own, but I wonder if people want to join in, are there ways for them to help out? 1:12:45: Nico Mohnblatt: Yeah, probably the easiest way... So if you go to the website itself, top right corner, there's a little GitHub icon. You can click there and we can start collaborating there, maybe open some issues or some PRs, like happy to keep this collaborative. 1:13:00: Anna Rose Cool. 1:13:01: Nico Mohnblatt: The thing that helps the most is if people send in suggestions or things that they really want to learn about. 1:13:06: Anna Rose Sounds good. Cool. Nico, thanks so much for coming on. 1:13:10: Nico Mohnblatt: Thanks for having me on. 1:13:11: Anna Rose So I want to say thank you to the podcast team, Rachel, Henrik, Jonas and Tanya, and to our listeners, thanks for listening.