Anna Rose (00:00:05): Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in zero knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online. Anna Rose (00:00:28): This week, Kobi and I chat with guest Jens Groth, director of research and formal security at DFINITY and previously a professor at the department of computer science at UCL. Jens Groth is a leading ZK researcher and is the person behind a number of advances in the field. We chat about his earlier career, the cryptographic problems he tackled in his research, how he solved these, the results of this work, as well as his move from research into industry and what he works on today. Before we kick off, I just wanna highlight the ZK link tree. There you can find links to all of our channels, including the ZK jobs board, the ZK community board, and also the ZK HACK Discord. If you're looking to learn more about ZK tech, keep an eye on that space. We have a lot of really cool things coming this summer, as well as into the fall. So if you're looking to jump in, this might be a great place to start. I've added the link in the show notes, hope to see you in our channels. Now I wanna invite Tanya to share a little bit about this week's sponsor. Tanya (00:01:25): Today's episode is sponsored by Anoma. Anoma is a set of protocols that enable self-sovereign coordination. Their unique architecture facilitates everything from the simplest forms of economic coordination, such as two parties transferring an asset to each other, to more sophisticated ones, like an asset-agnostic bartering system involving multiple parties without direct coincidence of wants, or even more complex ones, such as N-party collective commitments to solve multi-polar traps, where any interaction can be performed with adjustable zero knowledge privacy.
Anoma's first fractal instance, Namada, is planned for later in 2022, and it focuses on enabling shielded transfers for any asset with a few-second transaction latency and near-zero fees. Visit anoma.net for more information. So thanks again, Anoma. Now here is Anna and Kobi's interview with Jens Groth. Anna Rose (00:02:18): Today, I'm very excited to introduce our guest, Jens Groth. He's the director of research and formal security at DFINITY and was previously a professor at the department of computer science at UCL. Welcome to the show, Jens. Jens Groth (00:02:30): Thanks Anna, and very nice to be here. Anna Rose (00:02:34): For today's episode, Kobi is also joining me as a co-host. Hi Kobi. Kobi (00:02:38): Hello. Anna Rose (00:02:39): So I think before we start in, I want to talk about your name, Jens, because I think a lot of people will be very familiar with the work that you've done, but maybe not with the way that I just pronounced your last name. We just recently learned on an episode that we have been pronouncing your name wrong all these years. The term that we're familiar with would be Groth16, a very important proving system that's used by many projects, but we just learned how it's actually pronounced, and that's how I've started to say it since then. But I wanna ask you what you make of this? Jens Groth (00:03:18): Yeah, yeah. I'm also mispronouncing my own name, right? My name is Groth, but English speakers especially find that very difficult to pronounce, and I've given up, right. When I introduce myself, I still pronounce my own name the Danish way, but when I refer to my scientific works, I tend to say Groth the English way, just so everybody's on the same page. Anna Rose (00:03:52): Everyone can say it. I think it's really important for our community to know that we've been saying it wrong.
I can't unhear it now. And because Mary actually says Groth when she pronounces it, now I do too, so Jens Groth (00:04:05): I'm still practicing Anna Rose (00:04:06): But anyway, cool. So why don't we hear a little bit more about you and your background? What led you to this really important work? I'm very excited to dig in. Jens Groth (00:04:18): So it started at the University of Aarhus. When I was an undergraduate student, I was studying maths, and I kind of got a little fed up with the abstraction that was there, and I wanted to do something applied, and cryptography was sort of an opportunity to do applied maths. So I took some courses in cryptography, I really liked it, and I decided to write my master's thesis in cryptography. That's sort of how I got into cryptography. Anna Rose (00:04:47): Nice. What was the main topic in cryptography at that time? What were people excited about or worried about? Kobi (00:04:54): Yeah, I think I've seen that a lot of your early work was actually around voting in cryptography. Right? Jens Groth (00:05:00): Right. Yeah. So after my master's thesis, I got into doing an industrial PhD. So I was working with a company, Cryptomathic, and they had a project on voting. So that's sort of how I got started on doing research in voting. At that time it would take a lot of time, actually. These voting systems, they're very good at tallying up votes. They're based on homomorphic encryption: you can submit your vote in encrypted form, right, so it's kept private, and you can tally the votes up and you can use a threshold decryption to get out the result of the election. Right. But when you do that, you know, there's nothing that prevents a voter from submitting, you know, a thousand votes for a candidate, right. And that is essentially stuffing the ballot box. Jens Groth (00:05:48): So you want to prevent that.
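The homomorphic tally Jens describes can be sketched in a few lines. Below is a minimal sketch using exponential ElGamal over a toy group; the parameters, names, and sizes are all illustrative assumptions for this transcript, not anything from the episode or any real voting system.

```python
# Toy homomorphic tally: exponential ElGamal, Enc(v) = (g^r, g^v * h^r).
# Multiplying ciphertexts adds votes in the exponent, so only the final
# sum is ever decrypted. Parameters are tiny and NOT secure.
import random

p, q, g = 2039, 1019, 4            # p = 2q + 1, g generates the order-q subgroup
x = random.randrange(1, q)         # tallier's secret key (threshold-shared in practice)
h = pow(g, x, p)                   # election public key

def encrypt(vote):
    r = random.randrange(1, q)
    return (pow(g, r, p), pow(g, vote, p) * pow(h, r, p) % p)

def add(c, d):
    """Homomorphic addition: component-wise product of ciphertexts."""
    return (c[0] * d[0] % p, c[1] * d[1] % p)

def decrypt_tally(c, bound):
    """Recover g^sum, then brute-force the small discrete log of the tally."""
    gm = c[1] * pow(c[0], q - x, p) % p
    for m in range(bound + 1):
        if pow(g, m, p) == gm:
            return m
    raise ValueError("tally out of range")

votes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
acc = (1, 1)                       # the identity: an encryption of 0
for v in votes:
    acc = add(acc, encrypt(v))
assert decrypt_tally(acc, len(votes)) == sum(votes)   # == 6

# Nothing above stops a cheater from encrypting 1000 instead of 0 or 1 --
# the ballot stuffing Jens mentions, which a validity proof must prevent.
stuffed = add(acc, encrypt(1000))
assert decrypt_tally(stuffed, 2000) == 1006
```

The zero knowledge validity proof that each ciphertext encrypts a 0-or-1 vote, without revealing which, is exactly the piece this sketch omits and what the discussion turns to next.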
And in order to prevent that, you can ask the voter to submit a zero knowledge proof together with the vote, that they have voted in a valid way. Right. So it doesn't reveal which candidate they voted for, but it does ensure that they can only submit a valid vote. And at that time that was a process. I mean, if you had a large election with a lot of candidates, then that could take up to two minutes on a processor at that time, right, to create that proof and then submit it. So the bottleneck was really the zero knowledge proof, and especially the time it took to prove that you have submitted a correct vote. So I kind of got into zero knowledge proofs because I wanted to optimize that step for the voter. Anna Rose (00:06:33): The zero knowledge proofs, at the time that you were doing this, were they a key topic generally in cryptography, or was the focus on something else at that time? Jens Groth (00:06:41): I feel it's always been a key topic from an academic perspective. Right. People have been interested in zero knowledge proofs, and I think you also have concrete things people do with zero knowledge proofs. Like Schnorr signatures are essentially a zero knowledge proof, right? So also something that has really, you know, a practical dimension to it. I'm not sure it was the hottest topic at that time, though. At that time it was a few years after Cramer-Shoup encryption came out. Right. Until then you didn't really have efficient, provably chosen-ciphertext secure encryption, and that was a big result at that time. And actually something I wrote my master's thesis on: chosen ciphertext attacks and security. Anna Rose (00:07:28): Was post quantum an issue then? Were people already sort of looking to the future as something to work on or think about? Jens Groth (00:07:38): I think not, not that I recall.
I mean, basically for most of my cryptographic career, I've just been working with number-theoretic primitives, right, and not really been thinking much about post-quantum security. I think there was some awareness. It may have been there in the background; I know that people were doing both quantum cryptography and post-quantum secure cryptography also back then. Right. But it was certainly much less of an issue. Kobi (00:08:11): Maybe going back to the topic of zero knowledge proofs in that era, maybe it's also fair to say that at that time, the zero knowledge proofs that were created were for very specific use cases. So if you look at voting, it would be for something extremely concrete and small, and very different from what we have today for, let's say, generic programs. So, Jens Groth (00:08:38): Right. Yeah. Kobi (00:08:39): At least I think that was even less looked at back then. Is that correct? Jens Groth (00:08:45): So there have been works that also looked at generic programs. Right. So not so much programs, I mean; the model has typically been that you have a circuit, right, either a boolean circuit or maybe an arithmetic circuit if you were really fancy. And people did not think much about how to translate your program execution into a circuit, and the cost involved in that. Right. So that was the model, because circuit satisfiability is NP-complete, right, and, you know, I think at least for arithmetic circuits it also relates somewhat to the cryptographic operations you could do with cyclic groups. Right. It made sense. So there were papers that did general proofs, where, say, you could prove that an arithmetic circuit is satisfied, and that got the cost down to a linear number of group operations. Anna Rose (00:09:39): Going back to your story.
So you kind of talked about some of this initial work in ZK, which was very early. It seems like it was quite early in the field. Were there many ZK researchers at that point? Jens Groth (00:09:52): I don't think there were people that would really identify as purely ZK researchers. Right. Okay. It was more like something you would pick up along with other things that you were doing. Anna Rose (00:10:03): Ah, interesting. But what was your next step after that? So where did you kind of go after that initial project? Jens Groth (00:10:10): So I did quite a lot of work on zero knowledge proofs in voting, right. So there's the paradigm of doing voting based on homomorphic encryption and tallying encrypted ballots. I had several papers that optimized that and also used different techniques, typically based on groups with a hidden order, so you can also do some integer proofs around that. There were some papers by Helger Lipmaa that initiated that kind of direction, using very clever tricks with, you know, the Lagrange four-square theorem and prime divisibility things to prove that you had something that was in a particular set. So that was one area I was exploring. And then there's another voting paradigm based on mixnets. Jens Groth (00:11:05): So that could be used both for voting, but also for anonymous communication. So I also started looking at mixnets and did quite a lot of work in that area. And then I had some strategic thinking, like, well, I'm good at zero knowledge proofs, you know, where else can I use that? So I also did some work on group signatures, because they also rely on zero knowledge proofs to sort of glue everything together. So that was the early days of working with zero knowledge, which was always sort of application driven, right.
There was some direct application I had in mind when I was trying to optimize, and I always liked optimizing things. A lot of this was squeezing out the last piece of performance in these zero knowledge proofs. And then after my PhD, I went to do a postdoc at UCLA. That's where, working with Rafi and Amit, I got into using pairings for zero knowledge proofs. Kobi (00:12:12): If we're talking about mixnets and the techniques that are used in both mixnets and voting, there is this topic of shuffle arguments. Right? Yeah. So maybe it would be nice to describe what it is and what difference you have made there. Jens Groth (00:12:29): So the idea is that you have a bunch of people, they want to, say, publish some message, but they want to be anonymous, right? So you have some servers that will help them. The idea is that everybody encrypts their message and sends it to the first server. The server then takes a random permutation and permutes these ciphertexts, and if you use, for instance, an encryption scheme that's homomorphic, then the server can also re-randomize those ciphertexts, so you cannot see from the input ciphertexts which ones map to the output ciphertexts, right? So now this server sends out a bunch of ciphertexts that contain these encrypted messages, but nobody can see that connection anymore. And you could pass that through multiple servers in sequence, right, to make sure that even if one of the servers is curious about who submitted which message, then it doesn't know either, because some other server has put in a random permutation. Jens Groth (00:13:30): And it's clear that, you know, if the servers can do anything, then they might not re-randomize at all. Right. They could do something else. They could substitute in new ciphertexts.
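An honest mix server's job as described here, permute and re-randomize, is simple to sketch, and every misbehavior Jens lists is a deviation from it. A toy sketch with re-randomizable ElGamal follows; parameters and encodings are illustrative assumptions, not any deployed mixnet.

```python
# One mix-server step: multiplicative ElGamal, where re-randomizing means
# multiplying a ciphertext by a fresh encryption of 1. Toy parameters,
# NOT secure; messages are encoded as elements of the group.
import random

p, q, g = 2039, 1019, 4            # p = 2q + 1, g generates the order-q subgroup
x = random.randrange(1, q)         # receiver's secret key
h = pow(g, x, p)

def encrypt(m):
    r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def rerandomize(c):
    """Multiply by Enc(1): same plaintext, unlinkable new ciphertext."""
    s = random.randrange(1, q)
    return (c[0] * pow(g, s, p) % p, c[1] * pow(h, s, p) % p)

def mix(batch):
    """One server: re-randomize every ciphertext, then permute the batch."""
    out = [rerandomize(c) for c in batch]
    random.shuffle(out)
    return out

def decrypt(c):
    return c[1] * pow(c[0], q - x, p) % p

msgs = [pow(g, i, p) for i in (3, 7, 11, 19)]   # group-encoded messages
mixed = mix(mix([encrypt(m) for m in msgs]))    # two servers in sequence
assert sorted(decrypt(c) for c in mixed) == sorted(msgs)
```

The zero knowledge shuffle argument discussed next is what forces a server to have done exactly this, rather than substituting or marking ciphertexts; this sketch omits it entirely.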
They could also try to somehow mark the message so they can later see who it was that submitted it. Right. So there you want, again, some sort of check that the server is actually acting honestly, and that check can be done with a zero knowledge proof. So the idea is then that each server will not only do the permutation and the re-randomization of the ciphertexts, which is what is called a shuffle, right, but it will also prove that it has done a correct shuffle. Kobi (00:14:07): Yeah. And I think that one of the works that was pretty influential in that respect was with Stephanie Bayer, right. Yeah. That you had a zero knowledge argument for the correctness of a shuffle, and I think it's still quite used today. Right. Like, it's also maybe one of the bases of existing general zero knowledge schemes today, right? Yeah, Jens Groth (00:14:32): Yeah, yeah. So Stephanie was the first PhD student I had. Right. And we were working on mixnets among other things. And indeed we had this paper, and what was nice about that paper was that it optimized several things. It had a reasonable prover complexity, it had reasonable verifier complexity, right, but then it was also sublinear in communication. It was a square root cost. So basically what that means is that, you know, you have to communicate the shuffle anyway, right? You have to say, these are the new output ciphertexts. So if you have n ciphertexts, then when you make a shuffle, you output n new ciphertexts. But then the proof can be much smaller than that. So in terms of communication, you're saying that there's essentially no cost at all for the zero knowledge proof and communicating that, right. The cost is borne mainly by just the shuffle itself, which is unavoidable. Anna Rose (00:15:34): That first PhD student.
And that work, was that when you were at UCL still? Jens Groth (00:15:40): So that was at UCL. As a postdoc I was not, I suppose, even directly permitted to advise PhD students, and you don't have sufficient time to do that either, but I was working with some of Amit's and Rafi's students at UCLA. Anna Rose (00:15:59): But let's talk about that move over to UCL. You became a professor at UCL. What made you choose to do it there? Jens Groth (00:16:06): Oh, so that was for family reasons. We wanted to move to Europe, we wanted to be in an English-speaking country, and I knew some of the people at UCL. So that's how that got set up. Anna Rose (00:16:20): Was that already a center for this kind of work? Like I know that ... is like a center for cryptography and stuff, but like UCL, I now know a few people that are there, but was it already the case at the time? Jens Groth (00:16:34): So there were some good cryptographers at UCL at that time, and it's sort of changed over time who's in the group. Right. And it should be said that at that time the group was also outside of London. The idea was that UCL wanted to do industrial collaboration, collaboration with British Telecom, which had a research lab in Ipswich. So it was also a little remote from the university. And eventually the university found out that that was not the right strategy and called all of us back to London. Anna Rose (00:17:13): Okay. Well, let's talk about the work that you did while you were there. I mean, this is the work that sort of leads to some of the most famous work that a lot of people in our space are very familiar with. What were the topics that you started in on there? I mean, we know that at some point trusted setups became like a big theme, but yeah. Where was your starting point on that sort of line of research? Jens Groth (00:17:36): Right.
I think for all research, right, you have to think that it doesn't really depend that much on the institution. A lot of it takes a long time to, you know, come to fruition, and you have some ideas in the back of your head. Right. So I do think of this as a continuous process: as a PhD student I was thinking about zero knowledge proofs, I came to UCLA and started thinking about some different types of zero knowledge proofs with the people there, and then I came to UCL while some of those ideas were still coming to fruition. So one example of that, I guess: the first pairing-based SNARK, right, which got us sublinear communication complexity, so you could get down to a constant number of group elements to prove a statement, was something that I published in 2010, I think at Asiacrypt, but it was something that I already had some ideas for early on, before UCL, and then the work was published while I was at UCL. Kobi (00:18:52): And where does the work with Amit Sahai come in? That was something that I often see as, you know, a celebrated topic, the proving system for bilinear equations. What was that about? I remember reading that it was created because general NP computations seemed to be too expensive. So what was done here? Jens Groth (00:19:19): Right. So it started with a paper I had with Amit and Rafi, which was essentially doing non-interactive zero knowledge proofs based on pairings; in this case, it was over groups of composite order. Right. And I see that as the starting point, where we realized you could use these pairing-based encryption techniques, and ideas from that paper, to build non-interactive zero knowledge proofs. The pairing sort of gives you a multiplication in the exponent, and that allows you to do general proofs.
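The "multiplication in the exponent" that a pairing provides can be illustrated with a toy stand-in. Real pairings come from elliptic curves; the sketch below fakes the bilinear map by brute-forcing discrete logs, which is only possible because the group is tiny, and all parameters are illustrative assumptions.

```python
# A pairing e: G x G -> GT satisfies e(g^a, g^b) = e(g, g)^(a*b) --
# one multiplication between hidden exponents. We fake it with a
# brute-force discrete log, feasible only at toy size. NOT how real
# pairings are computed, and NOT secure.
p, q, g = 2039, 1019, 4            # p = 2q + 1, g generates the order-q subgroup

def dlog(y):
    """Brute-force log base g (stands in for what nobody can do at real sizes)."""
    acc = 1
    for i in range(q):
        if acc == y:
            return i
        acc = acc * g % p
    raise ValueError("not in subgroup")

def pair(A, B):
    """Toy 'pairing': e(g^a, g^b) = g^(a*b mod q)."""
    return pow(g, dlog(A) * dlog(B) % q, p)

a, b = 123, 456
assert pair(pow(g, a, p), pow(g, b, p)) == pow(g, a * b % q, p)

# Bilinearity lets a verifier check one multiplication between committed
# exponents, e.g. that hidden values a, b, c satisfy a * b = c:
c = a * b % q
assert pair(pow(g, a, p), pow(g, b, p)) == pair(pow(g, c, p), g)
```

The last check is the shape of thing pairing-based proof systems verify: a quadratic relation between exponents that are never revealed.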
At that point, we were really thinking about boolean circuit satisfiability, right, and you get a few group elements per gate in that boolean circuit. And then it's natural, when you have those kinds of results, to think about, okay, how can I extend that? Jens Groth (00:20:13): How can I generalize that? Right. So one direction was, okay, can you do this over prime order groups? And it turned out that you could do that. Right. And that was also driven by a question: so we know that you cannot do non-interactive zero knowledge proofs without any sort of setup, you need something that allows you to simulate, but you can actually do that for non-interactive witness indistinguishability. Right. So if you give a weaker privacy guarantee, then you can do that. And for that, you couldn't really use composite order groups, right, because then somebody needs to know the factorization and things break down. So that was also a motivation for using prime order groups. So that was a next step. Right. And then after that, there was also the question. Jens Groth (00:21:03): Okay, but, you know, if you want to apply this in practice, then maybe you don't want to go through an NP-completeness reduction of whatever you're trying to prove, because that's expensive. Right. So it was natural to think about, okay, what if we have something more concrete? So there were a couple of papers there. I had one paper that is about group signatures again, right, which both gave general results and applied them to group signatures, and that started this question of, okay, instead of looking at, say, arithmetic circuits or boolean circuits, can you define a language of pairings that is natural both for what cryptographers would want to build and also something that you can prove things about?
Jens Groth (00:21:54): Right. So that came out of that. And then that was something that, together with Amit Sahai, I did a lot of improvements on, and really got the performance up, because that was a very heavy construction that had, you know, hundreds of group elements. Right. And then we boiled it down to what is the core and the essence. And there were a couple of related works. I also had some discussions with Brent Waters, and Boyen and Waters at the same time also realized that pairings kind of naturally speak this group language; they also had some nice non-interactive zero knowledge proofs that they used in the context of group signatures. Kobi (00:22:38): Do you still see this used, now that we have these more generic proving systems, or do you think it's already a bit of a split? What's your thought about it? Jens Groth (00:22:51): So I think there are still applications of that. I think people are using it. And I think there's a general question, right? When we talk about zero knowledge proofs, are you talking about small statements or big statements? Because, you know, all the things about getting sublinear complexity and really squeezing out every piece of performance make sense when you have a big, complex statement, right. Then you want to get that. But if you have something which is small, then, you know, it may not be that big a deal that you pay a few extra group elements. Right. And the nice thing about these proofs is that they're in the standard model, right. You don't have to make assumptions about, you know, knowledge extractors and things like that. So I think there are people that appreciate that and think of these earlier proofs
as being, I mean, more reliable or trustworthy, because they just use standard assumptions about pairings. Anna Rose (00:23:56): So I know that we want to speak about what led to one of your most famous works, the Groth16 paper. But was there anything in the lead-up that you think listeners should know about, to understand some context for how that came to be? Jens Groth (00:24:12): Yeah. So I can think of two pieces of context that I think are interesting. One is that there have been sort of two tracks of work. One is non-interactive proofs based on pairings, and another track is interactive proofs based on the discrete logarithm problem. And there's been some interplay in my mind about, you know, porting techniques from one side to the other and so forth. Right. So I think that is perhaps an interesting point. I think another interesting point is, I really like the paper I had at Asiacrypt that introduced these succinct pairing-based arguments to start with, and I thought of that as a big scientific breakthrough. It was actually rejected several times, and people didn't like it quite as much. Jens Groth (00:25:00): So that's a bit of interesting gossip, I guess. But I think, I mean, personally I actually think less of Groth16, right. I mean, it's been picked up a lot in industry, it's had a lot of applications. Right. But from a scientific perspective, you already had succinct arguments; it was sort of like shaving off a group element. And from a scientific perspective, I think the interesting thing was that I also started thinking about lower bounds and, you know, how good is this, is this optimal or not? Right. And actually, the title is, you know, 'On the Size of Pairing-based Non-interactive Arguments'. Right.
And of course, I think there's this distinction between what a scientist finds interesting in a paper, right, and what an engineer finds interesting about a paper, and those are somewhat distinct notions. Right. Anna Rose (00:25:53): What you're sort of saying is a lot of the pieces had been prepared in advance. Would you say that this was just like the formalization, in a way that was easier for an engineer to be able to use, or were there advances in that work as well? Jens Groth (00:26:08): So there was this nice line of work in 2012 and then in 2013, right, where they actually gave concrete pairing-based SNARKs that were succinct. And, you know, I was reading that and I wanted to understand it, right, and I wanted to optimize it and see if I could get something more out of that. So that's the concrete background of why I did that. And they introduced some really nice things. And I think one thing, there was a little back and forth discussion, because I was one of the reviewers of that paper. So I guess it's okay that I reveal my anonymity many years afterwards. Right. And maybe they thought of me as reviewer two, Jens Groth (00:26:59): the one that always gives these horrible comments. Right. Yeah. But some of the discussion was that the first SNARK I had proposed in 2010, and also Helger Lipmaa, who had some ideas for more efficient SNARKs, those were universal in the sense that you could take any circuit, or any arithmetic circuit, right, and then you could have the inputs and you could prove things about that. And what came up in the GGPR paper was the idea that, you know, well, if we fix the circuit, then we can actually do something more efficient. Right. And they introduced the idea of quadratic span programs, and also, in the appendix, they had the idea of quadratic arithmetic programs. Right.
So there was a bit of discussion back and forth, right. Jens Groth (00:27:49): Are you comparing apples to oranges, because this is not universal, right, this is just for a fixed circuit? But of course you can have universal circuits, and then you pay an overhead, right, and then you can give a proof. So there was a little bit of discussion back and forth, and also a bit of confusion about that point. Right. But I think in the end, it's been a super fruitful piece of work they had. I mean, they introduced the idea of quadratic span programs and quadratic arithmetic programs, right, and I think that was a really good idea, because they did get much better efficiency by specializing to a specific circuit. Kobi (00:28:29): I think this discussion around universality is still happening, but maybe on a different level today, with the introduction of, let's say, you have circuits, but you also have circuits that implement zero knowledge virtual machines and all that. So it's still happening, but in a more complex way. Anna Rose (00:28:47): So you were just describing the quadratic arithmetic programs, I guess these are sometimes referred to as QAPs. Jens Groth (00:28:54): Right? Exactly. Yeah. Anna Rose (00:28:55): Was this the predecessor, like, was this the model that you kind of transformed or changed from? Or would you say you borrowed something from this research? Jens Groth (00:29:06): So I think I basically adopted that model. Right. That's what I'm building on as well. I think as we've become better at understanding and building zero knowledge proofs, this paradigm has emerged, right: you can take an arithmetic circuit, take a general computation, you can arithmetize it, and you can describe it as a set of constraints, and then you can build that into a quadratic arithmetic program.
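The polynomial identity checking at the heart of this paradigm can be reduced to a few lines: to check a claimed product identity between polynomials, evaluate both sides at one random point (the Schwartz-Zippel idea). In a SNARK that point is hidden in the exponent in the reference string; in the sketch below it is just a random field element, and the field and polynomials are made-up toy assumptions.

```python
# Checking a claimed polynomial identity A(X)*B(X) = C(X) at one random
# point. A false identity survives only if the point hits a root of the
# difference polynomial, probability <= degree / field size.
import random

P = 2_147_483_647                  # toy prime field (2^31 - 1)

def ev(coeffs, x):
    """Horner evaluation; coefficients are listed lowest degree first."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

A = [1, 2]              # 1 + 2X
B = [3, 0, 1]           # 3 + X^2
C_good = [3, 6, 1, 2]   # their true product: 3 + 6X + X^2 + 2X^3
C_bad = [3, 6, 1, 3]    # wrong in one coefficient

tau = random.randrange(1, P)       # the secret evaluation point
assert ev(A, tau) * ev(B, tau) % P == ev(C_good, tau)
# The bad claim differs by X^3, whose only root is 0, so tau >= 1 catches it:
assert ev(A, tau) * ev(B, tau) % P != ev(C_bad, tau)
```

In a pairing-based SNARK the prover never learns tau: only encodings of its powers appear in the common reference string, which is exactly the structured setup discussed below.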
And then you can prove things about quadratic arithmetic programs, because, well, they are just polynomials, and you can have polynomials in these secret exponents. Right. So that's, I think, on a very high level, the idea. And the idea essentially goes back to Schwartz-Zippel, right: that you can check polynomial identities by just taking a random point and evaluating at it. And then the idea for these SNARKs is that, well, you know, that random coin can be built into the common reference string. And since it's in the exponent, and nobody can compute discrete logs, then nobody knows what the point is that you're evaluating at. Right. So that prevents the prover from cheating. Anna Rose (00:30:20): You just mentioned the common reference string. This was actually going to be my question, because I've always understood the paper as also transformative in terms of the trusted setup. And I'm wondering if this is where that link happens, where it's like, the way you constructed the common reference string allowed for a different trusted setup, or not? Like, this is kind of actually a question to you. Jens Groth (00:30:43): So in the early literature on non-interactive zero knowledge proofs, right, I mean, there was a bit of this discussion. A common reference string sometimes meant a uniformly random string, right, and when you do non-interactive zero knowledge proofs based on trapdoor permutations, you just need that uniformly random string. Right. But for these pairing-based SNARKs, you need some structure to the reference string. You basically need ways to evaluate polynomials, so you need powers of some secret value that you are evaluating these polynomials at. Right. So it's really key to these trusted setups that you have a particular type of structured common reference string, which does not look uniformly random at all. Right.
Because it has embedded some very specific powers. Right. And that also implies that you need somebody who constructs that structured reference string, because if you just pick a random string, you're not going to get something which has these specific powers. Anna Rose (00:31:46): Got it. And I guess then that also just changed the way you did a trusted setup. Right. Like, it altered what you needed to do in order to do it. Right. Yeah. But were there trusted setups before, or was this the first time that a SNARK introduced it? Jens Groth (00:32:02): So I guess there's always been a question of where you get that common reference string, right, and the question of why you should trust that common reference string. So if it's a uniformly random reference string, you may try things like, you know, I will get it from some common physical source, I'll look at variations in sunspots and things like that, and maybe that could be used, right. Or maybe I could just run a hash function, and it produces some output that looks sort of random, and maybe I can use that. Right. And the alternative to that is, okay, if you actually need a structured string, you can think of using multiparty computation to do that. I mean, I also had a work that I did where I was basically trying to get something in between. Jens Groth (00:32:55): Right. So you say there are multiple parties that can contribute a common reference string, or, in this case, it was a uniformly random reference string, right, but you don't have to trust any of them. It was a little simpler model; you don't have to go through a multiparty computation. It's just, every party who wants to contribute, they can say, here's my common reference string. Right.
And then when you want to prove something non-interactively, you can just collect some of these common reference strings and say, here's a proof with respect to this set of common reference strings. And then if the verifier trusts that enough of these contributors are honest, the verifier can believe the proof. Anna Rose (00:33:38): But so at what point was that MPC model of a trusted setup introduced? Jens Groth (00:33:43): That, I don't actually know. For me it's always been folklore, right, that if you need some common reference string, and somebody has to set it up somehow, multiparty computation would be a natural way to do it. Anna Rose (00:33:58): Could do it. Yeah. But actually, this is the question: was that the first time it was formalized? I'm just curious if there had been work before using MPC for a trusted setup, or if it was the Groth16 paper. Jens Groth (00:34:13): So I don't think I want to take any credit for the combination of common reference strings and multiparty computation. It's never something I've really described in depth in my papers. I've just said, well, you can use multiparty computation, and then left it to other people to actually figure out how they would want to do that. Anna Rose (00:34:34): Understood. Kobi (00:34:36): Yeah. And I guess you can see that result in all the SNARK papers; it's very neatly abstracted, the way you have this secret, and then you have completely other papers and huge amounts of code to do the multiparty computation. It becomes quite complex. Anna Rose (00:34:52): This is something Kobi does quite well. He's been our local MPC trusted setup expert. Kobi (00:35:04): I have had to do a bunch of that.
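As a toy of what such a ceremony does algebraically, here is a sketch where each contributor folds its own secret into the powers hidden in the exponent. This is a simplification under stated assumptions: real ceremonies use elliptic-curve groups, keep a second group for verification, and attach proofs that each update was performed correctly; the modulus, generator, and secrets below are illustrative only.

```python
p = 2**127 - 1  # Mersenne prime; the multiplicative group mod p is our toy group
g = 3           # illustrative generator
DEGREE = 4

def contribute(crs, secret):
    """Fold a fresh secret into a powers-of-tau style CRS:
    element j goes from g^(tau^j) to g^((tau*secret)^j)."""
    return [pow(elem, pow(secret, j, p - 1), p) for j, elem in enumerate(crs)]

# Start from tau = 1, i.e. the CRS is [g, g, ..., g].
crs = [g] * (DEGREE + 1)
crs = contribute(crs, 5)   # party 1's secret contribution
crs = contribute(crs, 11)  # party 2's secret contribution

# The combined trapdoor is the product of all secrets; as long as one party
# discards its secret, nobody learns tau, but the structure is still usable.
tau = 5 * 11
assert all(elem == pow(g, pow(tau, j, p - 1), p) for j, elem in enumerate(crs))
```

The point of the sketch is only the update rule: raising the j-th element to `secret**j` turns powers of one trapdoor into powers of the product of trapdoors.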
Jens Groth (00:35:06): I think it speaks to one of the nice things of this era, which is that there are a lot of things you really can compartmentalize into different disciplines. So a lot of the work I've done is in discrete logarithm groups or pairings, but I'm not an expert on the underlying math of pairings, on how you do that computation; other people are experts in that. I wouldn't consider myself an expert in multiparty computation, but again, other people know that and they can build these common reference strings. And then of course there's all the implementation work. Again, I would not consider myself an expert on, you know, how you should take a general piece of code and convert it into a quadratic arithmetic program. But that's also something that people picked up. So I think it's actually really nice that you can have these different fields of expertise, with some nice interfaces between them, so they can all work together. Kobi (00:36:06): Can we touch a bit on the Groth16 paper itself? I think the Groth16 paper has kind of a misleading presentation, in the sense that it's very low key; it's "On the Size of Pairing-Based Non-interactive Arguments". It sounds very humble, right? But it's actually quite a big result, and it became something that most deployments use today. Did you expect it to be that way? Did you expect it to be a big result? Jens Groth (00:36:42): I did not. And the reason is that I was thinking of this as a scientist, not as an engineer. In some sense, when it comes to pairing-based SNARKs, I'm very happy with the paper I had in 2010 that first introduced this idea. And I thought that was also, for me, a big deal, because it was something new.
And then, you know, Pinocchio did a really good job of getting really close to efficient. And I thought of this as an incremental piece of work that took some of the ideas in Pinocchio, cleaned them up, and squeezed things a little more together, and that gave you better efficiency and more compact proofs. But I did not think of it as a big scientific step; I saw it as an incremental scientific step. And then I matched it up with the lower bounds, because I also wanted to understand, are we getting close to the limit of what we can do? And it still haunts me, this question of, you know, is this something one can do in type three pairings with two group elements? I just don't know the answer to that question. Kobi (00:37:53): Can you also say something about the intuition of how the Groth16 construction is made? Because when you look at earlier papers like Pinocchio, it feels more compartmentalized: here I'm proving knowledge, here I'm doing a check. But in Groth16, everything is very much packed together and neatly organized, and in the paper you obviously present the final construction that works really well. So can you give some of the intuition? Jens Groth (00:38:31): Right. I mean, actually, I think you already explained some of the intuition, because part of what gives you the better performance is that Pinocchio is more cleanly structured, saying this is what we are checking here, and that's what we're checking there. And it turns out that a lot of these things you can actually squeeze together. You basically work with polynomials; a lot of this is engineering with polynomials and coming up with clever ways to get everything together.
So it's really about designing some polynomials where you check everything at once. Anna Rose (00:39:09): What was the impact of the paper once it came out? Was Zcash the first team to implement it? Jens Groth (00:39:17): Okay, so this is actually really funny. It's the fastest implementation I've ever seen. So I put Groth16 on ePrint in the evening. I woke up the next morning, and I had an email from Alexandre which said, we have implemented it, and these are the performance estimates that we have. So they literally implemented it in a few hours. And I was like, wow, that's amazing. And of course, the reason they could do that is because they already had an enormous amount of tooling they had built. They basically just needed to change some constraints in the description of those constraints. It just showed that they had a really impressive toolchain for building zero knowledge proofs. So yeah, that was super impressive. Anna Rose (00:40:13): But that implementation would still have been sort of an academic implementation for testing, right? This was not industry grade? Jens Groth (00:40:21): That's the libsnark one. Anna Rose (00:40:22): Okay. Interesting. And so at what point did you start to see it actually get used in production? And was there a lot of work that had to happen before that point, or was it pretty quick for people to start using it? Jens Groth (00:40:41): I think that went pretty quick, and it was probably Zcash that were the first ones to do that. I was not very involved in that process. I was just this lucky academic who, it turns out, was in the field that people were starting to get interested in adopting.
So in some sense I'm really, I think, the opposite of the typical experience. The typical advice is that as an academic you have to do quite a lot of work to get people actually interested in your work and picking it up. And I just put out a paper and could lean back, and then somebody else said, look, this is great, we'll pick it up and use it. Anna Rose (00:41:25): Did this work come out before the first Zcash version? Sprout was the first one, right? Jens Groth (00:41:30): So Zcash already had a different one, and I don't remember whether it was Pinocchio or some other version of similar complexity. It was a later update where they said, now we will switch and use Groth16. Anna Rose (00:41:50): Maybe Sapling. Kobi (00:41:50): Yeah, they ended up using it in Sapling. Jens Groth (00:41:52): Yeah. And of course I was very happy to see that it was getting out there and being used, and that people liked the construction. Kobi (00:42:01): Yeah. By the way, it was also part of the excuse to move to Sapling, to hide the big bug they had. Jens Groth (00:42:09): Right. Yeah. They had, was it a group element too many in the common reference string, that they should not have had? Kobi (00:42:18): One too many. Jens Groth (00:42:19): I see. Anna Rose (00:42:19): Okay. One too many. Jens Groth (00:42:20): One too many. Yeah. Anna Rose (00:42:22): But so that was potentially the first, I mean, we might be wrong here, but at least the first kind of high profile implementation. But after that, it really became the standard for the industry. Even now, I know that work continues to be done on incorporating new things, integrating it into things.
I mean, I think there are languages, and tell me if I'm wrong here, Kobi, but languages like Circom, which are very much built around Groth16 and how to interact with that particular proving system. Kobi (00:42:53): Yeah, exactly. Circom, and other huge deployments like Filecoin. It's very widely used, and it's implemented in pretty much every proving system implementation out there. So it's definitely very popular. Jens Groth (00:43:11): And I think that makes sense. I think there's a lot of cross-pollination, and it helps in settling on a standard and being on the same page, and you can get ideas from each other's implementations. I think that's something you see quite often in that space. Kobi (00:43:32): Yeah. And it also really helps that it has been, for most of the time, extremely efficient compared to many other constructions. So in resource constrained environments like the EVM, it was very useful to use Groth16. You could do it in hundreds of thousands of gas, which is very reasonable. Anna Rose (00:43:55): I wanna talk a little bit about what came after, and some of the work that you continued on. As mentioned, we had Mary on the show just a few weeks ago, talking about the work she had done with you on trusted setups, and how that actually opened up these ideas of universal SNARKs. After you published this paper, what kind of track were you exploring? What kind of direction were you looking to add to incrementally, or were you looking at other problems? Jens Groth (00:44:26): Right. So I had a paper with Mary the year after, which asked, can you add stronger security properties to the zero knowledge proofs? So we had a paper where we had simulation extractable SNARKs.
So again, based on pairings, it was again three group elements, but it gave you some extra security properties in terms of what an attacker can do. So it's not malleable, you cannot just change it, whereas Groth16 is re-randomizable. So that was one piece of work. Under the hood, there were the notions of quadratic span programs and quadratic arithmetic programs, which date back to GGPR13, but we had also, along the way, come up with square span programs and square arithmetic programs. Jens Groth (00:45:25): Square span programs, that was an earlier paper I did with George Danezis, Markulf Kohlweiss, and Cédric Fournet. That was for boolean circuits. And basically the idea is that in quadratic span programs and quadratic arithmetic programs, you boil everything down to a couple of polynomials which, when multiplied, give some other polynomial. And the idea here is that instead of having a multiplication of polynomials, you have a squaring of a polynomial. And that means that the exponents of some of the group elements become the same. You can check that, and it puts an extra restriction on what you can do. So that's what prevents an attacker, in the paper I had with Mary, from mauling the proof: you have that extra constraint. Kobi (00:46:15): But eventually it's actually not that restricting, right? Because you can quite naturally translate, let's say, the arithmetization, or the work that you do in quadratic arithmetic programs, to square arithmetic programs. It's like a two X factor or something of that sort, or four X. Jens Groth (00:46:36): Exactly. Yeah. Kobi (00:46:38): And I think that's exactly what you do in the paper with Mary, right? The Schnorr based signatures. Jens Groth (00:46:46): Yeah.
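The standard math trick being referred to, rewriting one multiplication as squarings, is just the polarization identity over a field of odd characteristic. A minimal sketch (the field is illustrative):

```python
P = 2**61 - 1  # any prime field of characteristic != 2 works
INV4 = pow(4, -1, P)

def mul_via_squares(a, b):
    """a*b = ((a+b)^2 - (a-b)^2) / 4, so one multiplication gate can be
    replaced by two squaring gates (roughly the constant-factor blow-up
    mentioned in the conversation)."""
    return (pow(a + b, 2, P) - pow(a - b, 2, P)) * INV4 % P

assert mul_via_squares(123456789, 987654321) == 123456789 * 987654321 % P
```

Squarings are what square span and square arithmetic programs constrain, and the identity shows why restricting to squarings loses no expressive power, only a small constant factor in circuit size.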
So you change some things in how you do it, but it really is a standard math trick, that you can translate a multiplication into a couple of squares and some factors and so forth. And then it depends a little on the circuit, or whatever initial statement you have, whether that transformation is expensive or comes cheap. Kobi (00:47:14): Maybe one other thing about the paper with Mary. The title for it is Snarky Signatures. So why signatures? That, I think, would be interesting to discuss. Jens Groth (00:47:26): Yeah, that's a good question. So there's this notion of signatures of knowledge, and this is something that dates all the way back to Schnorr signatures: you can think of those as being signatures of knowledge, in some sense non-interactive proofs of knowledge. It's like you're not just signing, but you're also in some sense proving that you know some underlying secret key. That's the core idea. And this was essentially the same kind of thing we were trying to get to. It has the same properties as signatures of knowledge, but now it was pairing based, not based on hash functions and the Fiat-Shamir heuristic. So we thought that was an interesting connection to make, and that's where the title comes from. Anna Rose (00:48:20): What was some of the other work that you were doing around this time? And we should note, Mary Maller was one of your PhD students, and Jonathan Bootle, who's been on the study club, was also working with you at the time. But yeah, what other work were you doing? Jens Groth (00:48:34): So another direction that we had been exploring was making interactive proofs more efficient.
And I think at that time I had an ERC grant that gave me a lot of freedom to do research. Another direction was really trying to say, okay, what is the best proof complexity we can get? Because once you get down to sublinear complexity, the verifier can potentially be super efficient. In principle the verifier has to read the whole statement, but if you have some encoding of the statement, then you can even have sublinear verifier complexity. And the communication complexity can also be really small. So the remaining bottleneck is really on the prover side. I think that's what everybody struggles with: it's expensive, and it's expensive in many places in the pipeline, when you do the arithmetization and also when you do the proof itself. Jens Groth (00:49:37): And whenever you do these kinds of proofs based on cyclic groups, you typically pay some exponentiations or something like that to do the proofs. So one direction of research we looked into was, okay, what is really the best you could imagine from a prover perspective? The best you could hope for, I think, is getting down to something that is more or less the same as if you just compute directly from the witness and the statement you have. So can you get down to something close to that? And we had a paper which used error-correcting codes, and then some very efficient hash functions, which gave a truly constant overhead for the prover. And there's one thing I've always struggled a little with. Jens Groth (00:50:27): In the old days of zero knowledge proofs, people would say polynomial time, right? That's efficient. You don't have to worry about, you know, is it group operations or is it something else?
And I think it really matters what the metric is by which you are measuring the performance. It really makes a difference whether it's bit operations, field operations, exponentiations, or group multiplications; all of that has an impact on the performance. And there are papers that say, well, we can prove things in linear time for the prover, but what they mean is a linear number of field operations, or a linear number of group operations, or something like that. And all of that is expensive. So what we got in this paper was a true constant. Jens Groth (00:51:18): It was independent of the field size and the security parameter. So you can do a proof of arithmetic circuit satisfiability with a prover that does a linear number of field operations. It really is a true constant overhead compared to just having the witness, going through it, and doing the computations. So that's a really nice result. I mean, in that paper the construction is complex, and in practice it would be a very large constant, but from a theoretical perspective, I think it was very satisfying to get it down to a true constant overhead for the prover. We then had a follow-up paper where we looked at, concretely, if you wanted to do computation based on, say, TinyRAM, what the overhead would be. And that was actually a little bit more overhead, because there the field size is bigger than the word size of the elements you're operating on in TinyRAM. But depending on what that gap is, you can also get very close in performance.
Anna Rose (00:52:34): I wanna kind of go back to this work that led to the universal SNARKs. I understand that you were very involved in that as it was being produced, but, and stop me if I'm wrong here, it wasn't really coming out of your lab, it wasn't necessarily your initial work, but rather you were brought in on it, right? And this is the work that led to Sonic, and I guess one can say down the line led to Marlin and to Plonk. What was exciting about that for you? And what was your take and contribution, what did you add to it? Jens Groth (00:53:07): Right. Yeah, it was an interesting way it came about. So Mary was my student, and she was working at Microsoft; I think it was Microsoft that sponsored her PhD. So she came back with this interesting problem: well, maybe you can do something more general about the common reference strings, maybe they can be updatable, and then things would be fine. I don't remember if they did not have a construction, or they had a very cumbersome construction. Anyway, it was interesting because it took me back to the 2010 SNARK, because that SNARK was based on monomials in the exponents, instead of a more structured setup where you actually have polynomials. And it turns out that monomials you can re-randomize. Jens Groth (00:54:01): So I knew exactly the right thing, and I could go back in my memory and say, oh, there are some ideas here I can use to do that. So it was actually something that took me, I think, a long weekend or so to get the main structure up and running. And then trying to optimize again, and getting down to as few group elements as possible.
But when you've worked a long time in a field, there are a lot of ideas floating around in your brain, and sometimes you can go back into that pool, fish something up, and maybe use it in a different context. So I think few people other than me had thought much about that, because at that point you had Groth16 and so forth, and people probably just jumped to that straight away rather than thinking back. But since I had that old paper, I knew about it, and I could pull those ideas back and use them. And they had exactly this advantage: it was universal in terms of the circuit you could use, as opposed to being specific to a circuit. So it also ties that loop back. Kobi (00:55:14): Just to touch on it, I think we talked about this with Mary, that that work specifically was not practical to use, right? Because it was quadratic in the size of the common reference string. Jens Groth (00:55:28): Right. Yeah. So in the first paper, you have the universal common reference string, and you have to construct a specific common reference string from it. There was also a matrix multiplication in the exponents that you had to do. I think that was expensive. Kobi (00:55:43): Right. Yeah. I think it would be interesting to move on to DFINITY, because there's also some interesting work there. Anna Rose (00:55:50): Yeah, let's do it. So you're currently the director of research and formal security at DFINITY. Talk to us a little bit about that move. And do you get to do ZK stuff at DFINITY? That's my real question. Jens Groth (00:56:01): Right. Yeah. So the move was basically, you know, I thought it would be exciting to do something else. I had just wrapped up a big grant, and my students were graduating.
So it was also a point where I could make a natural move without harming anybody in the process. I joined DFINITY. I knew Jan from before, and my PhD student Andrea Cerulli had also just joined DFINITY, so it was also through him that the connection was established. And as with, I think, all the academics that have joined DFINITY, you start out by taking a leave of absence from the university, so you can always go back later if it doesn't work out. But, you know, I think we've all stayed at DFINITY, so in general it works out pretty well. Anna Rose (00:56:53): So what kind of work do you do? Do you get to do ZK stuff? I guess that's the real question. Jens Groth (00:56:58): Yeah. So I have not actually done that much work on zero knowledge at DFINITY. One piece of work I did was designing our non-interactive distributed key generation protocol. We have a sharded blockchain, so there's a bunch of shards, or subnets as we call them, where you have nodes, and they have a secret key, and they do threshold signatures on behalf of the shard. Whenever something is updated about the state, you can just verify that signature and see that you have the right thing. And the question is, okay, what happens when a node is being replaced or something like that? Or also, maybe for proactive security, you want to refresh the shares that the nodes hold once in a while. So there we use a non-interactive distributed key generation protocol. Jens Groth (00:57:53): And it's nice that it's non-interactive; it makes the logic behind the process easier. So the idea is that you have shares that describe the secret for this threshold signature that we're using, and each node has a share of that secret.
They want to contribute a resharing of that secret, so that a new set of nodes, maybe the old set of nodes, or maybe a set where some of the nodes have been replaced, can pick up and get a new, refreshed share of the secret key. And there's a bit of zero knowledge proofs in that, and also a lot of optimization again. It's a multi-receiver encryption scheme that encrypts to multiple parties at the same time, and you still want to have chosen ciphertext attack security. Jens Groth (00:58:46): So there's a lot of thinking going into that as well, and zero knowledge proofs are used to prove that everybody is doing the right thing in that operation. And it is a SNARK, but it's a SNARK hand-tailored to this specific problem, optimized for exactly this use case. Because of my background, I of course had to put a little zero knowledge in there. So I was happy to get that into the protocol. Anna Rose (00:59:17): I have a question for Kobi here, because you just did some DKG work as well. Is it related? Do you see any similarities? Kobi (00:59:25): So I think there are some overlapping motivations, in the sense that you want this non-interactivity, which is really useful, and it simplifies handling a lot of failure cases, and parties can go offline and online. And it's really nice to have this verifiable encryption part. But what I think is really interesting about Jens' work here is that it's done for field elements. You had some prior works around that with others, but this one really pushed the boundaries in optimizing it. The work, for example, that Mary and I had been doing back then was for group elements, so in some sense you are then very strongly deciding what kind of signature schemes you can use.
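The resharing Jens describes, where each old node re-shares its own share and the new committee recombines with Lagrange coefficients, can be sketched as follows. This is a bare-bones illustration under stated assumptions: the field, committee sizes, and threshold are arbitrary, and the real protocol additionally encrypts the sub-shares to their recipients and proves correctness in zero knowledge.

```python
import random

P = 2**61 - 1  # prime field for the secret sharing (illustrative)

def share(secret, threshold, ids):
    """Shamir-share `secret` using a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    return {i: sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P for i in ids}

def lagrange_at_zero(ids):
    """Coefficients that recombine shares at the given ids into f(0)."""
    coeffs = {}
    for i in ids:
        num, den = 1, 1
        for j in ids:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        coeffs[i] = num * pow(den, -1, P) % P
    return coeffs

secret = 424242
old_ids, new_ids, t = [1, 2, 3], [4, 5, 6], 2
old_shares = share(secret, t, old_ids)

# Each old node broadcasts (in the real protocol: encrypts and proves correct)
# a fresh Shamir sharing of its own share, addressed to the new committee.
sub_shares = {i: share(old_shares[i], t, new_ids) for i in old_ids}

# Each new node combines the sub-shares it received with Lagrange coefficients.
lam = lagrange_at_zero(old_ids)
new_shares = {j: sum(lam[i] * sub_shares[i][j] for i in old_ids) % P for j in new_ids}

# Any t of the refreshed shares still reconstruct the original secret,
# while the individual shares themselves are brand new.
lam_new = lagrange_at_zero([4, 5])
recovered = sum(lam_new[j] * new_shares[j] for j in [4, 5]) % P
assert recovered == secret
```

The key property is that the new shares lie on a fresh random polynomial with the same constant term, so old shares become useless after the refresh.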
So with field elements, you can still use ECDSA signatures or Schnorr signatures, but with group elements you are restricted to maybe producing random beacons, or some new signature schemes like the one we had in that paper. So, one of the most recent works you published was around this asynchronous ECDSA signing service. What can you share about it? Jens Groth (01:00:48): Yeah, so there have been some interesting questions there. This is work with Victor Shoup, who's also working at DFINITY. He was the lead on this project, and I have been helping him with it. The motivation is that we want to integrate the Internet Computer with Bitcoin. So basically, you can send Bitcoin to an address on the Bitcoin network, it gets picked up by the Internet Computer, you can do things with it on the Internet Computer, and maybe later on you want to send Bitcoin to somebody else, and the Internet Computer can send Bitcoin directly on the Bitcoin network. So we are very excited about that. DFINITY is the foundation that's building the Internet Computer, and the Internet Computer is a decentralized blockchain. The idea is that people can use the Internet Computer to build smart contracts and applications. That should be a very easy process, it should be very performant, and it is super performant. And then it also has on-chain web serving, so you can serve the web directly from the blockchain and from the smart contract. That's something we are also very excited about. Anna Rose (01:02:01): Got it. And bringing it back to what you just described, you're talking about bringing Bitcoin into this environment. Jens Groth (01:02:08): Exactly. So the idea is that you can send Bitcoin to an address that is accessible by the Internet Computer. You have key derivation with respect to ECDSA, the BIP32 standard, right?
So for every smart contract, you can derive a Bitcoin address on the Internet Computer. And if you send Bitcoin to that one and tell the Internet Computer, hey, I sent some Bitcoin, then the Internet Computer can pick up that Bitcoin, and it may later on send Bitcoin again to different addresses on the Bitcoin network. And on the Internet Computer, you can imagine having wrapped Bitcoin, and basically you could build DEXes and things like that using Bitcoin. So I think it's a very exciting integration, because you can now build smart contracts on the Internet Computer that have all the nice interfaces of the Internet Computer, the web and so forth, and you can use those applications to send and receive Bitcoin. Kobi (01:03:15): This work was one of the first on threshold ECDSA signing that works in an asynchronous model. So what's unique and important about that? Jens Groth (01:03:26): Yeah. So I think there's been a lot of interest in threshold ECDSA signatures, exactly because then you can do Bitcoin transactions and other types of transactions. We specifically opted for the asynchronous model because we think that's the most realistic model. I mean, this is the internet, and things can get lost and not arrive at the destination. So we think that's the right conceptual model. And then, stepping back in the process: we wanted to do an integration with Bitcoin, for that we wanted to be able to threshold sign ECDSA signatures, and then stepping further back, we started looking at the protocols for threshold ECDSA, and there was actually quite a lot of work that had not been done. So one thing we published this year was a security analysis of ECDSA itself. Jens Groth (01:04:17): Right.
To take a closer look at those properties that people have been using. So one property, for instance, that people have used: there's a random element you first compute, and then you do some other steps in the ECDSA signing process. And people have suggested, well, instead of computing that element after you see the message, why don't you pre-compute it and just make it public for everybody, and then you can use that. Okay, but that has not actually been analyzed very carefully in the literature. So people have been using ideas that have not been analyzed that carefully; I mean, there is other work that also analyzes that. And then we had some ideas there as well: you may actually take that public element, but if you re-randomize it before you use it with a message, then you actually get stronger security guarantees. So I think there was some nice analysis, going all the way back to ECDSA itself, that we encountered in that process. Anna Rose (01:05:17): Very cool. I think we're pretty much at the end of the interview, but is there anything that people should look out for, like future work that they should keep their eyes open for? Jens Groth (01:05:27): Well, we have a lot of things going on at DFINITY for the Internet Computer. We are also looking into advanced DAOs that will allow applications to do tokenization and have decentralized governance, where you can vote on chain and decide on upgrades to the protocol and so forth. So there's a lot of really nice functionality we're working on. On the more crypto specific side, some things we're interested in looking into in the future are multiparty computation for confidentiality of data, and post-quantum security. Anna Rose (01:06:05): Cool.
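The pre-signing idea Jens describes, publishing the nonce point R = kG ahead of time and then shifting it per message, can be sketched with toy ECDSA over the textbook curve y² = x³ + 2x + 2 over GF(17), whose group has prime order 19. Everything here is illustrative (tiny curve, fixed secrets, δ given as a constant rather than derived); it only demonstrates the algebra, not the actual protocol.

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17); the point (5, 1) has prime order 19.
P_FIELD, A, N = 17, 2, 19
G = (5, 1)

def add(p, q):
    """Affine point addition (None is the point at infinity)."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_FIELD == 0:
        return None
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_FIELD) % P_FIELD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_FIELD) % P_FIELD
    x3 = (lam * lam - x1 - x2) % P_FIELD
    return (x3, (lam * (x1 - x3) - y1) % P_FIELD)

def mul(k, p):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1: acc = add(acc, p)
        p, k = add(p, p), k >> 1
    return acc

def sign(d, k, e):
    """ECDSA with an explicit nonce k: r = x(kG) mod N, s = k^-1 (e + r*d)."""
    r = mul(k, G)[0] % N
    return r, pow(k, -1, N) * (e + r * d) % N

def verify(Q, e, sig):
    r, s = sig
    w = pow(s, -1, N)
    pt = add(mul(e * w % N, G), mul(r * w % N, Q))
    return pt is not None and pt[0] % N == r

d, Q = 7, mul(7, G)   # long-term key pair
k, R = 5, mul(5, G)   # pre-computed nonce; R is published ahead of time
delta = 3             # per-message re-randomizer (publicly derivable)
sig = sign(d, (k + delta) % N, 10)  # signer shifts its nonce by delta

# Everybody can track the shifted nonce point R' = R + delta*G themselves,
# and the signature verifies as an ordinary ECDSA signature.
assert mul((k + delta) % N, G) == add(R, mul(delta, G))
assert verify(Q, 10, sig)
```

The point of the shift is exactly what Jens notes: the raw pre-published R gives an attacker message-independent information to aim at, while re-randomizing per message restores stronger guarantees at essentially no extra cost.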
Kobi (01:06:06): So we'll look out for Groth23, I guess. Anna Rose (01:06:11): Groth23. Kobi (01:06:13): Yeah, 23. Fine. Kobi (01:06:14): I'm sorry. Anna Rose (01:06:18): Thank you so much for coming on the show and sharing with us your journey, talking about some of these really important works in our space. And yeah, I'm very excited to see what's next. Jens Groth (01:06:29): Yeah. Thank you, Anna, and thank you, Kobi. It's been a real pleasure to be here. Kobi (01:06:33): Thank you. Anna Rose (01:06:34): So I wanna say a big thank you to the Zero Knowledge podcast production team, Henrik, Tanya, and Chris, and to our listeners. Thanks for listening.