Anna Rose (00:00:05): Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in zero knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online. Anna Rose (00:00:28): This week Tarun and I chat with Ye Zhang and Haichen Shen from Scroll. We discuss the founding of the Scroll project, the problems they aim to solve with their native zkEVM layer 2 solution, why scaling Ethereum is important, the philosophy of their zkEVM and how this differs from other proposals, as well as potential use cases or products that could benefit from the system. We wrap up with an exploration of how dApps and products on top of such a zkEVM may interact with the mainchain and with other applications. But before we start in, I wanna let you know that the ZK podcast crew is growing. We are currently taking on a number of new projects, and we're looking to hire an additional content producer to join us. There's a job posting for this content producer over on the ZK jobs board. Anna Rose (00:01:14): Just a reminder, the ZK jobs board is a spot where you can find jobs from all sorts of top ZK teams, not only the podcast. So you might wanna check that out either way. For this role of content producer, the job requires you to have at least two years of experience working on regular content production. Ideally you would be organized, good at project management, and somewhat familiar with the field. There's no need to be an expert on ZK, but some familiarity with our community would be ideal. If you or someone you know fits the bill, please apply. You can find the links in the description. And I hope to hear from you. Now, I wanna invite Tanya to share a little bit about this week's sponsor. Tanya (00:01:52): Today's episode is sponsored by Anoma. Anoma is a set of protocols that enable self-sovereign coordination.
Their unique architecture facilitates the simplest forms of economic coordination, such as two parties transferring an asset to each other, as well as more sophisticated ones, like an asset-agnostic bartering system involving multiple parties without direct coincidence of wants, or even more complex ones, such as n-party collective commitments to solve multipolar traps, where any interaction can be performed with adjustable zero knowledge privacy. Anoma's first fractal instance, Namada, is planned for later in 2022, and it focuses on enabling shielded transfers for any asset with a few-second transaction latency and near-zero fees. Visit anoma.net to learn more. So thanks again Anoma. Now here is Anna and Tarun's interview with Scroll. Anna Rose (00:02:49): So this week Tarun and I are here with two members of the founding team of Scroll. Welcome Ye Zhang & Haichen Shen. Ye Zhang (00:02:57): Hi Anna. Thanks for having us. Haichen Shen (00:03:00): Hi, Anna and Tarun, glad to be here. Tarun (00:03:03): Great. Definitely excited to have you guys on. Anna Rose (00:03:05): Just a quick disclosure, both Tarun and I were investors in Scroll kind of early. We invested a while back, myself through the ZK Validator and Tarun through Robot Ventures, but I know that a lot has changed since we did those early meetings, and it would be really great for us to learn a little bit about what's happened since. Let's start off with a little bit of an intro to each of you, to find out how you arrived at this problem and what you were working on. Ye Zhang (00:03:30): So my name is Ye, I'm a co-founder of Scroll. Most specifically I work on ZK research. So I work on hardware acceleration for zero-knowledge proofs and protocol design behind the ZK algorithms. So before starting Scroll, my background was more in academics.
I started to work on zero knowledge proofs four years ago, back in 2018, when the biggest bottleneck of using zero knowledge proofs in practice was the large proving overhead. I was researching in the area of computer architecture, and combining that with my hardware knowledge, I started my first research on hardware acceleration for zero knowledge proofs. So the idea is just using FPGAs and ASICs to make the computation faster, and having more hands-on experience from the hardware project, I got much deeper into ZK. I was very lucky to work with professor Andrew Miller at UIUC and dive deeper into the theoretical side of ZK. Ye Zhang (00:04:21): It was exactly during that time, the second half of 2019, that a bunch of new ZK protocols came out, including Sonic, Plonk, Marlin and Halo, based on polynomial commitments. And I was following the literature very, very closely. I became really addicted to those polynomials. It's much more interesting than just hardware. The math construction behind the ZK protocols is just very beautiful and elegant. So besides the hardware and theoretical research, I have also done some application-level stuff with my advisor, Mike, and other collaborators at NYU. So that's my experience before starting Scroll. Ye Zhang (00:05:01): So the chance to start Scroll dates back to 2020, when there was an explosion in DeFi, and I met Sandy and Haichen, the two other co-founders, through some mutual friends in the field of competitive math and also the ETH community. We found that Ethereum needs a scaling solution, but all the existing ZK rollups have the problem of being too application specific, and they also generate proofs in a centralized way, which is very hard to scale. So we wanted to solve this problem and build a totally new layer 2, with more general EVM support and a decentralized prover. So it's also exciting to see some of the research outcomes coming to practice. Yeah.
So that's basically the high-level overview of how Scroll started. Anna Rose (00:05:50): Cool. Haichen. What about you? Where did you start? Haichen Shen (00:05:53): So previously, I wasn't working in web3, I worked in web2 for a few years. I worked at Amazon AWS for a few years, focusing on AI compilers to optimize model development. So about one and a half, two years ago, I met with Ye, as you mentioned, through some common friends, and at that time Ye was finishing up his hardware accelerator papers. And that's where he got me into zero knowledge proofs, and he taught me a lot about the math and the protocols behind that. At that time I was very fascinated by zero knowledge proofs. I think it's a very magical construction, where you can just do a very small computation to verify some very large computation. Haichen Shen (00:06:40): So I previously knew a little bit about that, but I didn't know the details. After learning more details, I got more and more attracted to it, and then we were discussing what we could do using zero knowledge proofs. And it turns out that ZK rollups, and building the zkEVM, is one of the ways we could use that powerful tool in math. Mm. And before that, before I joined Amazon, I was getting my PhD at the University of Washington, but my background is more in system optimization, like building distributed systems and compiler optimization stuff. And yeah, that's how I got started. Anna Rose (00:07:21): Nice. It sounds like your background was very well suited to later do this. When you were doing distributed system stuff though, were you touching blockchain or was it like general distributed systems?
Haichen Shen (00:07:31): It's more general distributed systems, like building some distributed serving systems for machine learning workloads. But during the PhD, we also explored some of the blockchain and crypto stuff. At that time, we were mostly looking at Bitcoin and thinking about how we could extend Bitcoin a little bit, and we were even working on some prototypes, trying to see if we could build a more object-oriented language for a new blockchain, but it turns out we didn't work on that full time during the PhD. Anna Rose (00:08:06): Got it. Let's go back to the beginning of Scroll the company. So there's three co-founders I guess. Yes. Okay. And so you'd all met and you decided to take on a project, you wanted to work with zero knowledge proofs. What part of the ecosystem were you in when you decided to sort of start working on a zkEVM? Ye Zhang (00:08:25): Yeah, I think we are mostly part of the Ethereum community. For example, my background is more in the ZK stuff, but I have read a lot of research from the Ethereum community and I know it's a very creative community. We are pretty value-aligned with the Ethereum community, which is open source and decentralized. And also our mutual friends are in the ETH community, and we think Ethereum is the best settlement layer, and especially that it needs scaling. So that's the reason why we chose to build on top of that. Yeah. Anna Rose (00:08:59): Were you connected to the EF, like, did you work closely with them? And the reason I'm asking this is, and we're gonna get to this a little while later, but there were designs and ideas around zkEVM coming out of there. So that's why I'm curious if that's where some of your original thoughts on it come from. Ye Zhang (00:09:16): Yeah, yeah, yeah. That's a good question. So it's very interesting.
Like, you know, it's also a nice coincidence. I was chatting with Barry Whitehat from the Ethereum Foundation, seeking help for the first review of Scroll's first version, and it happens that he was also looking at the general zkEVM problem. Yeah. And so the collaboration between us and the EF team, which used to be called the Applied ZKP team at the Ethereum Foundation, happened very organically, because they wanted to build this and we wanted to build this. At that time, everything we had was from scratch and we were pretty value-aligned. And we also do believe that developing in the open source way, owned by the community, is the best way to really make it. And we are very excited to co-build a solution together to literally scale Ethereum. We are also proud that in this community, nearly half of the contributors are from our team, and we have been contributing to this effort for around one year now. Anna Rose (00:10:17): Wow. This is super interesting though, that you were actually tackling the problem separately, met Barry, and it turns out they were trying to tackle the same problem. What inspired the problem in the first place? Maybe here we can start to break down what zkEVM even means, like why zkEVM? Ye Zhang (00:10:35): I think there are two parts: one is why we chose to build toward this EVM equivalence approach, because there are some other teams building zkEVMs and some other solutions. So firstly, for us, EVM equivalence really matters because it's secure: you can inherit the security properties from the EVM model, which was well defined years ago and has stood the test of time, and we have exactly the same behavior as the EVM, so it's very secure. And secondly, the EVM has a very strong network effect. It has the largest developer community and also numerous dApps built on it. And also there is a lot of infrastructure and tooling around it.
We can support all of them seamlessly without any duplicated labor. And also Ethereum has a very active research community. As I mentioned previously, they are proposing a lot of innovations, like many EIPs to improve Ethereum. Ye Zhang (00:11:30): And if you build something which is EVM equivalent, you can adopt those innovations ahead of time because you have the same environment. And also, comparing against some other solutions that don't go the way of the zkEVM: Ye Zhang (00:11:46): I think in our case, the zkEVM is the best way to really scale Ethereum because, from our side, the highest priority is not looking for a new virtual machine to support more complex computation. Instead, the urgency is migrating existing dApps from layer 1 to layer 2. So we think the best way to do that is to provide the same environment with a seamless migration experience. And I know there are also some complaints about the EVM versus other virtual machines, but we believe that the EVM is still the best practice for smart contract execution, because it has had years of exploration. Ye Zhang (00:12:24): And I think Vitalik also has some articles talking about the design trade-offs behind the EVM and why they would still choose the EVM path, even knowing there are a lot of other virtual machines. And we do believe that some other zkVMs can be useful in some applications, and we have a research team actively looking into this direction as our next step, thinking about how to add more advanced features and extensions to our zkEVM. Also, another technical difference is that building a zkEVM is much harder than a zkVM, but we decided to take that on, using more advanced crypto and hardware optimization, to provide the best developer experience.
Tarun (00:13:08): I think maybe one thing that would be worth talking through is a comparison of the different EVM equivalence methods and sort of what it means to be a fully equivalent zkEVM. I think there's obviously a lot of confusion around what qualifies as being fully EVM compatible versus strictly Solidity or bytecode compatible, versus sort of whether the proof generation process is separate from the execution trace directly, stuff like that. So maybe let's walk through what Scroll means by EVM equivalence, and then how that compares to say some of the other protocols that are talking about EVM equivalence, like say Matter Labs or other places. Yeah. Because I do think that it can be quite confusing when you first hear zkEVM, cuz a lot of people kind of are like, oh, someone else already said they were doing that, but then it turns out to have some limitations. Ye Zhang (00:14:01): So technically speaking, a zkEVM is simulating the behavior of the EVM in circuits. And so in our definition, and also in the zkEVM we are building with the open source community, we are targeting bytecode-level compatibility, which means the zkEVM should follow the definition of the EVM in the yellow paper. They define the opcodes; there are a lot of definitions in there. And we think if you are really building a zkEVM, using the term EVM, you should follow their standard, and it can provide exactly the same environment as the current EVM on Ethereum. As for some comparison, I think in terms of the technology stack and also the compatibility side, we are pretty close to Polygon Hermez. They are an incredible team led by Jordi and are building something also fantastic. Ye Zhang (00:14:51): Both of us are targeting bytecode-level compatibility and have a very similar architecture, but there are some technical differences on the implementation side. So we are closer to the native Ethereum notation.
For example, we are directly using a fork of native geth to produce our layer 2 blocks. geth is short for go-ethereum; it is the most robust and well known Ethereum node implementation and has been used in a lot of places. We also designed subcircuits to prove each opcode in the geth execution trace, and it's easier to verify that the circuit has exactly the same behavior as native Ethereum. But for Polygon Hermez, I think they are rewriting each EVM opcode using a new assembly language and generating proofs for their underlying state machine. I think the difference is more on the implementation side, because both of us are targeting bytecode-level compatibility. Ye Zhang (00:15:45): I think their approach might need more work to build everything from scratch. As for the comparison with StarkWare and zkSync, it's more different, because as I mentioned previously, we are targeting EVM equivalence and aiming to achieve bytecode-level compatibility. But they are building their own virtual machines, which are different from the Ethereum virtual machine, and also building compilers to compile Solidity to those underlying VMs. So their VMs have a totally new set of opcodes, and they build a compiler on top to achieve language-level compatibility. So I think that's the biggest difference. Anna Rose (00:16:22): Last year I actually did an episode with Hermez, at the time, now Polygon Hermez, where we did a kind of deep look into zkEVM in the way that they were building it. I remember that they did highlight that each opcode would be equivalent. How is it working for you if you're not rewriting each opcode? How do you make it actually work without having to compile? Ye Zhang (00:16:45): I think in their approach, for each opcode they will have a relatively small state machine, or rewrite it in their own assembly code.
So what we do is, for each opcode, we directly build a customized circuit for that opcode. So it's a one-to-one mapping: for add, we directly build a circuit for add, instead of running add as a state machine. So that's a technical implementation difference, but I think both of us can achieve opcode-level compatibility. Haichen Shen (00:17:15): So the way we're building the zkEVM is that we take each opcode and write a gadget, a small component inside the ZK circuit, for each opcode, and then simulate its behavior according to the actual implementation from the specs. Take the add opcode as an example: in the EVM, you need to pop two values from the stack, do the addition within the 256-bit range, and then push the result back. So in the circuit, you need to do the exact same things: you need to verify that the two operands to the add opcode are actually popped from the stack, and then you write a constraint to say the result is actually the addition of these two operands, and then push that back onto the stack. Tarun (00:18:07): One thing that's important to also remember is that the proof is actually over the execution trace, versus something that does a transpile step, where you're really proving execution of the transpiled thing, not the actual EVM. That can be quite different, or have a bigger attack surface in some ways. Anna Rose (00:18:27): Why attack surface? Tarun (00:18:28): Well, there are more moving parts, right? So let's say I have another language, right? That's compiling to EVM bytecode, but I only prove what happens in the other language's execution environment before it gets converted. There's this problem of, well, is my translation perfect? Mm. And it's not necessarily that there doesn't exist a perfect translation, right?
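The add-opcode semantics Haichen describes can be sketched in plain Rust (the language Scroll uses for its circuit work). This is a hypothetical illustration of the relation an add gadget must constrain, not Scroll's actual Halo2 code; the `Word` type and function names are invented for the example.

```rust
// Hypothetical sketch of the relation an ADD-opcode gadget must constrain,
// written as plain Rust rather than as circuit constraints.

/// A 256-bit EVM word as four little-endian 64-bit limbs.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Word([u64; 4]);

impl Word {
    fn from_u64(x: u64) -> Self {
        Word([x, 0, 0, 0])
    }

    /// Addition modulo 2^256, exactly as the EVM defines ADD.
    fn wrapping_add(self, other: Word) -> Word {
        let mut out = [0u64; 4];
        let mut carry = 0u128;
        for i in 0..4 {
            let sum = self.0[i] as u128 + other.0[i] as u128 + carry;
            out[i] = sum as u64; // low 64 bits of the limb sum
            carry = sum >> 64;   // carry into the next limb
        }
        Word(out) // the final carry is discarded (mod 2^256)
    }
}

/// What the gadget checks: two operands popped from the stack,
/// their wrapping sum pushed back.
fn execute_add(stack: &mut Vec<Word>) {
    let b = stack.pop().expect("stack underflow");
    let a = stack.pop().expect("stack underflow");
    stack.push(a.wrapping_add(b));
}

fn main() {
    let mut stack = vec![Word::from_u64(7), Word::from_u64(35)];
    execute_add(&mut stack);
    assert_eq!(stack, vec![Word::from_u64(42)]);

    // Overflow wraps around: (2^256 - 1) + 1 == 0.
    let max = Word([u64::MAX; 4]);
    assert_eq!(max.wrapping_add(Word::from_u64(1)), Word([0; 4]));
}
```

A real gadget expresses this same pop/add/push relation as polynomial constraints over witness cells, with the limb carries kept honest by range checks.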
These things are still kind of bounded Turing machines and whatever, but it's more that implementing one that catches every single edge case is actually extremely difficult. You know, an example of this is that in the early days of iPhones and Android phones, there was this huge movement to have people make these transpilers where you could just write an app in Android and compile it to iOS and vice versa. And they had way more security problems than the native code in general. And so, from a security standpoint, I do think it is worth keeping in mind that the fewer moving parts, the more you can be confident in the code that's written. Haichen Shen (00:19:38): Yeah, exactly. I think one importance of the zkEVM is that it doesn't require any additional infrastructure to make your smart contract code compatible with Ethereum. And you also make sure that your smart contract executes in an exactly EVM-equivalent environment, so that all of the code behaves as you expected from the specs of the EVM. Anna Rose (00:20:05): In that case though, why wouldn't everyone do an EVM with EVM opcodes? Like, what are the trade-offs that you have to face? Is it harder to build? Is it longer to process? Yeah, I'm just curious, why doesn't everyone do that? Is there a faster path if you don't? Ye Zhang (00:20:23): I think, as mentioned, the zkEVM is much harder to build, because the EVM is a stack-based virtual machine, but for the underlying proof system it's more natural to support a register-based virtual machine. So just for this step, it will already add a lot of overhead. And also, in the EVM there are a lot of ZK-unfriendly opcodes, like SHA-3, and also inefficient data structures. For example, the EVM word size is not a prime field, it's 256 bits.
So you need range checks everywhere, which just explodes your circuit, a very huge overhead for ZK. And especially for the data storage, you need a Merkle Patricia trie, which is another huge overhead. But for a zkVM, you get more flexibility, because you can have your own defined instruction set. Ye Zhang (00:21:13): And you can build that to be more ZK-friendly and have a much smaller proving overhead. The zkEVM was thought to be impossible in the past. I think it's just with recent breakthroughs, like advanced circuit optimization, hardware acceleration, and some recursive proofs, that we can massively improve the performance of the prover and finally make it feasible. Because even if it has a very large overhead, people still like it, because it's still very developer friendly, but there were some technical challenges in the past. Tarun (00:21:45): Yeah. Actually maybe we could walk through, I don't know, a year or two ago when people thought a zkEVM was not possible, what's sort of the chronology and timeline of the events that changed people's minds? Ye Zhang (00:21:59): I think first is the circuit arithmetization part, because in the past we just had R1CS, which made it hard to build something more customized. And especially for the EVM, you have 256 bits, which needs very specialized gadgets to prove, for example, ranges. And I think there was a breakthrough when Aztec proposed plookup, which is a nice way to prove some ZK-unfriendly primitives: you just need to prove that something is within a table, this belonging relationship, and that's it. So I think that's a very important primitive to deal with those unfriendly opcodes at the circuit level. And another important thing: you can also use this lookup to link different components.
For example, in the state circuit, you prove that you read and write correctly over the same elements. Ye Zhang (00:22:53): And in another circuit, you prove that your execution is correct, and you need to prove that the elements in the different circuits are the same. So you need a lookup to prove this belonging relationship, and plookup provides a very efficient way to do this. And I think both us and Polygon Hermez, and even some other teams building zkEVMs, are using the same approach to link the execution and also the storage. I think that's the biggest improvement. And on the backend side, there is hardware acceleration, which is just drawing more and more people's attention. It just happens simultaneously. We find that it's not only the frontend circuit side; your proving overhead is still very large, and hardware helps, because even if your proving takes a very long time, it's highly parallelizable, since it only contains group operations over elliptic curves and also FFTs. Ye Zhang (00:23:44): So both parts are highly parallelizable. And we have done a lot of previous research around GPU acceleration and also ASIC acceleration, so we know exactly that using such kinds of technology, you can improve the prover performance by maybe two orders of magnitude to really make it feasible. So I think it just happened. Before that, people didn't know this could be leveraged here, because for zero-knowledge proofs it's very hard to outsource proving to a third party: ZK use cases in the past were mostly privacy-preserving applications, so you can't give your secret key and your secret information to a hardware provider. But in a ZK rollup there is no actual zero knowledge; there is just a validity proof needed. So you can easily outsource this proof generation to a hardware provider.
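The lookup idea Ye describes, proving that a value belongs to a fixed table rather than constraining it bit by bit, can be sketched as follows. This is a hypothetical illustration of what a plookup-style byte range check enforces, not a real lookup argument (which proves the same membership relation with polynomials rather than a `HashSet`); all names are invented for the example.

```rust
// Hypothetical sketch of the membership relation behind a plookup-style
// range check: decompose a word into bytes and assert each byte is in
// the fixed table {0, ..., 255}, instead of constraining individual bits.

use std::collections::HashSet;

/// Decompose a u64 into its 8 little-endian bytes (the "witness" cells).
fn to_bytes(x: u64) -> Vec<u64> {
    (0..8).map(|i| (x >> (8 * i)) & 0xff).collect()
}

/// The check the lookup argument enforces: every witness cell is in the table.
fn range_check(cells: &[u64], table: &HashSet<u64>) -> bool {
    cells.iter().all(|c| table.contains(c))
}

fn main() {
    let table: HashSet<u64> = (0..256).collect(); // the fixed byte table
    let cells = to_bytes(0xdead_beef_0123_4567);
    assert!(range_check(&cells, &table));

    // A cell outside the table (e.g. 300) would fail the lookup.
    assert!(!range_check(&[300], &table));
}
```

The same belonging relation is what links subcircuits: two circuits can each look up their shared elements in a common table, proving they operate over the same data.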
Ye Zhang (00:24:31): They can run GPU clusters, data centers, or even produce ASICs. So I think that's a huge opportunity for provers to come in and solve the efficiency problem. And finally, there is the recursive proof: using some of these optimizations, you can generate proofs of proofs efficiently, and especially for some non-native field operations, you can aggregate them. I think that's also important to reduce the overall verification cost, because ZK circuits are composed of many subcircuits, and each subcircuit results in a proof. If you verify all the proofs on chain, it will be a very large overhead, and you need to aggregate them to make your final proof smaller. I think that's also important, but that's also built on top of the arithmetization. So I think those are the biggest things in making this possible. Anna Rose (00:25:22): So you just mentioned that it's circuits, and then there's subcircuits. Would you say this is still in the realm of a SNARK that you're using at the core? Or is this some modified semi-STARK-like SNARK? I mean, I know when I was talking to Jordi from Hermez, there were also STARK-like techniques used with SNARKs, so you didn't really know what the word for this thing was anymore, so yeah. Yeah. Franken, Franken, SNARK STARK or something. Ye Zhang (00:25:50): Yeah, yeah. That's a very good question. I think both SNARKs and STARKs are just ZK protocols, and a better way to describe a ZK protocol is to describe what kind of circuit arithmetization you are using and what polynomial commitment you are using, because you can have different combinations of those parts. For STARKs, they usually use AIR as their circuit arithmetization and FRI as their polynomial commitment. But for us, on both sides:
On the circuit arithmetization side, we are using Plonkish arithmetization, implemented in Halo2 by the Zcash team. And on the backend polynomial commitment side, the initial version of Halo2 used a Bulletproofs-style inner product argument, but we replaced that with KZG, because although the previous one implemented in Halo2 had many nice properties, we want to verify our proof on Ethereum, and the Pasta curves are not supported directly on Ethereum. So that's the reason why we chose to change that to KZG, to make our proof more efficiently verifiable on Ethereum. So basically it's Plonkish arithmetization plus KZG polynomial commitment. And for both the EVM circuits and the aggregation circuit, we are not using any STARK-related technology. Anna Rose (00:27:06): You said Plonky or Plonk 2? Ye Zhang (00:27:09): It's Plonkish arithmetization. It's a new name proposed by, Anna Rose (00:27:14): Yeah. Wait, is this a different one? This is Plonkish? Ye Zhang (00:27:19): Plonkish. I think it's a word proposed by Daira, on Twitter; it's a word to describe what kind of arithmetization you use, called Plonkish. Anna Rose (00:27:29): Oh, it's a kind of arith... Okay. Okay. Okay. Yes, but it's not Plonk specifically, or is Plonk at the heart of what you're doing as well? Ye Zhang (00:27:37): So basically the literature looks like this. There is Plonk, which only supports the addition and multiplication gates. And then there is plookup, which is a separate primitive to prove this lookup relationship. Yes. But how do you combine this plookup with Plonk? The Aztec people proposed something called TurboPlonk to combine the two different primitives in one system.
And then, I think the Zcash team used a more flexible way to represent these different relationships, called UltraPlonk, and then they renamed this UltraPlonk Anna Rose (00:28:09): UltraPlonk Ye Zhang (00:28:10): Yes. To Plonkish arithmetization. So yeah, that's how this is named. Anna Rose (00:28:14): Oh my gosh. I actually wanna do a whole session at some point which just maps all the derivations of the word Plonk and what they mean and how they work together, but Plonkish. Okay. That was one part, but then you also said Halo and I wanted to just double check if you were saying techniques from Halo or Halo 2. Ye Zhang (00:28:30): Yeah. We are using Halo 2. Anna Rose (00:28:32): Okay. Except that you're using KZG instead of the Pasta curves that they're using. Ye Zhang (00:28:38): Yes. Yes. Anna Rose (00:28:39): Okay. And are there any other variations on this? Like, are there any other additions or changes that you've incorporated into your model? Ye Zhang (00:28:48): Yeah, I think those are mostly in the zkEVM. So in the zkEVM, we have multiple subcircuits and the lookup relationships, and in the aggregation circuit, we handle some non-native field operations and generate proofs for them. Haichen Shen (00:29:03): I think for Halo 2, we also extended their API a little bit, to be more flexible for our use case in the zkEVM. During our building, we found some limitations in the API, so we modified it a little bit. And also, since we changed their curve, we also changed the recursion scheme in our version of Halo 2. But we also plan to discuss with the Zcash team to see how to upstream all of the features, and I think they're also interested in our way of hacking Halo 2 and adding those kinds of new changes. Anna Rose (00:29:39): Cool. Isn't there sort of another camp of thought? Like, I'm kind of just curious, what made you choose to work in this particular direction?
Ye Zhang (00:29:48): Yeah. I think when we started, we had this discussion with Barry and also with the Ethereum Foundation team. We definitely needed custom gate support and also lookup argument support, and at that time, Halo 2 was the only implementation that supported both primitives in an open source way and with a nice license. And also the ... built by the Zcash team has a very high security guarantee. So, yes. Anna Rose (00:30:20): What do you actually use to build this stuff? Like what languages are you using? Ye Zhang (00:30:24): So we are using Rust, because Halo 2 is written in Rust. Okay. And we need to follow their APIs in Rust to write circuits. Anna Rose (00:30:31): And I mean, there's a lot of teams, a lot of ZK teams, which are introducing native DSLs. Since you are working with the EVM, I'm assuming the language to build on your zkEVM will be Solidity, but do you also need anyone to interact with the underlying ZK circuits in a special way that maybe Solidity can't? Ye Zhang (00:30:53): I think for now, we will only expose some very high level EVM APIs to the developers. They don't need to touch any low level details. But in the future, if we want to add more advanced features, like supporting something in our zkEVM which the EVM can't support, we might need to hack that a bit: we could add some subcircuits to our current system that interact with some Solidity contract. That's one potential direction, but that's not what we are building for now; what we are building is just the same environment. Yeah.
Anna Rose (00:31:24): Is there a delay, like if you wanted to move from the L2 to the L1 with a zkEVM, is it slower or faster? Is there any sort of change versus something like an optimistic rollup, or the more non-zkEVM ZK rollups? I'm just trying to picture whether these steps of creating, you know, the SNARK in that particular way add any sort of time lag. Ye Zhang (00:31:51): So in general, the ZK rollup has a roughly similar delay for withdrawal transactions only. For those transactions happening within layer 2, you get some instant confirmation, because you have some centralized sequencer, and as soon as your transaction is included, you can confirm. But for withdrawing from layer 2 to layer 1, a ZK rollup usually takes maybe an hour, or minutes, depending on your TPS. If your TPS is higher, like more transactions, you can just upload your proof faster, because one proof can be amortized over a lot of transactions. So it also depends on how many transactions you have in your system. For a zkEVM, because your proving overhead is larger, another interesting design in our current system is that we have a decentralized prover system, which means that we will generate multiple blocks in parallel, and for each block, there will be some prover to take this block and generate a proof. Ye Zhang (00:32:41): So all those blocks can be proven in parallel, and we still get a high throughput, amortized, and proving time is not a big issue on our platform. So the delay will be similar to other ZK rollups. But compared with optimistic rollups, they take around seven days to get finality for a withdrawal transaction. They are based on some game theory and economic assumptions, so they basically need an honest node to react, secure the transactions, and challenge if they find something wrong. So they need to make sure that at least one honest node catches this.
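The amortization point above can be made concrete with a toy calculation. One proof verification on L1 is a roughly fixed cost, so the per-transaction share shrinks as the batch grows; the gas numbers below are made-up placeholders, not Scroll's actual costs:

```rust
/// Per-transaction share of a fixed proof verification cost.
/// All numbers in this sketch are illustrative, not real figures.
fn amortized_gas_per_tx(proof_verify_gas: u64, txs_in_batch: u64) -> u64 {
    assert!(txs_in_batch > 0);
    proof_verify_gas / txs_in_batch
}

fn main() {
    let verify_gas = 500_000; // hypothetical L1 cost to verify one proof
    for n in [10u64, 100, 1_000] {
        println!("{n:>5} txs per batch -> {} gas per tx",
                 amortized_gas_per_tx(verify_gas, n));
    }
}
```

This is also why higher TPS shortens the withdrawal delay: with more transactions, a batch fills faster, so the proof covering your withdrawal can be posted sooner at the same per-transaction cost.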
So they need seven days to withdraw from layer 2 to layer 1. Anna Rose (00:33:19): I wanna go a little deeper on this. You just mentioned the decentralized prover. I actually had a question here about the agent that runs the operator of the ZK rollup. This is like, I always am trying to figure out what that is in every system. So often it'll be called a sequencer; in certain things like the StarkWare setup, it's like they have a committee. I'm just curious, in your case, is the decentralized prover acting as the agent that bridges over to the mainchain? Ye Zhang (00:33:51): Yeah, I think currently we have two of those: one is called the sequencer and the other is called the roller. The sequencer is for collecting transactions, receiving transactions from users and generating the layer 2 block. And then, after you generate this block, the roller in our system is acting as a prover. So the sequencer will send this block, or some witness or execution trace, to the roller, and this roller will generate the proof and send the proof back. Currently the sequencer is centralized, but the prover is decentralized, because we can leverage the computation power from the network; the sequencer then verifies the proofs from the rollers, maybe gets an aggregated proof, and submits the proof on chain. So it's still a centralized sequencer generating blocks, but a decentralized prover generating proofs, for computation power. Yeah. Tarun (00:34:40): Quick question. Does the sequencer have to provide some data availability proof also, or just strictly a proof of execution? Haichen Shen (00:34:51): Yeah, actually, the sequencer will provide the data availability. So I think you can mimic this in a way like proposer and builder, where the sequencer now is the proposer.
It generates the blocks, so it sequences the transactions, generates the block, and then provides the data availability to the layer 1 chain, and then it will send the transaction data, or the trace, to the roller. The roller generates the validity proof and uploads it to layer 1, to seal all of the transactions within that batch. Tarun (00:35:24): Right. Okay. Wait, so the roller actually publishes the proof directly? The sequencer only kind of gives them the transactions, both the data that's sampled as well as the execution trace, but the roller posts the proof on-chain themselves, or am I getting the flow wrong? Haichen Shen (00:35:44): Yeah. Yeah. I think that's the basic idea. So actually there will be some discussion, but in that way, the sequencer just sends the data availability and the roller sends the validity proof, which makes the system much easier to decentralize in both positions, in both roles. I think that's a very natural way to do that. Tarun (00:36:07): Right. So actually one question is, if I'm a user of the rollup, do I send my transactions to both the sequencer and the roller, or do I send it to only the sequencer? Haichen Shen (00:36:19): Only to the sequencer Tarun (00:36:21): Only, okay, cool. That makes sense. Anna Rose (00:36:23): Do you think you will decentralize the sequencer or not? Haichen Shen (00:36:28): Yes, we will. We will decentralize the sequencer. So it makes the whole layer 2 trustless, so you don't need to trust any centralized entity, and it's also more censorship resistant. Anna Rose (00:36:39): On the rollup side, on the L2, is there also any sort of validator of the actual underlying data, or is the prover sort of acting like that?
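The division of labor described above, a sequencer that orders transactions and posts data, and rollers that prove blocks, can be sketched as a toy pipeline. Every type and function name here is a hypothetical illustration, not Scroll's code:

```rust
/// A layer 2 block as ordered by the sequencer.
struct Block { number: u64, txs: Vec<String> }

/// What the sequencer hands a roller: block id plus its execution trace.
struct ProvingTask { block_number: u64, trace: Vec<String> }

/// The validity proof a roller produces (opaque bytes in this sketch).
struct Proof { block_number: u64, bytes: Vec<u8> }

// Sequencer role: order transactions into a block and derive the trace.
fn sequence(number: u64, txs: Vec<String>) -> (Block, ProvingTask) {
    let trace = txs.iter().map(|t| format!("exec:{t}")).collect();
    (Block { number, txs }, ProvingTask { block_number: number, trace })
}

// Roller role: turn a proving task into a (mock) validity proof.
fn prove(task: &ProvingTask) -> Proof {
    Proof { block_number: task.block_number, bytes: vec![task.trace.len() as u8] }
}

fn main() {
    // Different rollers could prove these blocks in parallel;
    // here we just prove two tasks sequentially for illustration.
    let (b1, t1) = sequence(1, vec!["transfer".into()]);
    let (b2, t2) = sequence(2, vec!["swap".into(), "mint".into()]);
    let proofs = [prove(&t1), prove(&t2)];
    assert_eq!(proofs[0].block_number, b1.number);
    assert_eq!(proofs[1].block_number, b2.number);
    println!("sequenced blocks {} and {}, produced {} proofs",
             b1.number, b2.number, proofs.len());
}
```

The key design point the sketch reflects: users only ever talk to the sequencer, while rollers consume tasks and emit proofs, so the two roles can be decentralized independently.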
Haichen Shen (00:36:51): Yeah, I think you can think of it like the prover is acting like a validator, but you don't need every one of the rollers, or we just call them rollers, yeah, to verify; not all of them participate in validating a block. You just have a small committee to verify that, and then someone can aggregate the validity proofs for them. Tarun (00:37:10): Quick question on the point of decentralizing the sequencer: is the long term idea to basically have a pool of machines that can be both rollers and sequencers? Like people can sort of stake in some way, and then they get randomly selected to one of the two batches, such that, you know, yes, there's a single sequencer for one slot, but you have some rotation. Or do you view that as too complicated? I'm just kind of curious, what are the different ways you're thinking of decentralizing the sequencer? Haichen Shen (00:37:44): I think you can't just have people randomly selected to become a sequencer or a prover, or roller, because there are different hardware requirements for the different positions. For the sequencer, you just need to produce the block fast enough to keep up the throughput, but for the roller, sometimes you may need some accelerators, like GPU, FPGA, or ASIC, to really be able to generate the proof fast enough. So there are different requirements for the different roles. So it's more like you can stake to become a sequencer, or stake to become a roller. Tarun (00:38:19): Yeah. I guess the reason I'm asking is more from the perspective that, to some extent, you're likely gonna have quite a huge overlap in the actual entities that are running both of these, right? Like someone who's already running a roller is likely already in a data center and could easily run a sequencer node as well.
So I guess the main question is, how does that sort of change how you think about the allocation to these different parties? And do you think it's something where you're fine with the election of a sequencer being sort of not unknown? Because, you know, at this point we don't really have single secret leader election. I mean, if you wanna implement the FHE needed for it, then I think you can do it, but I think we're still a little far away from that. So how much do you think that matters here, and how are you thinking about this fact that the set of sequencers and the set of rollers will probably have pretty high overlap, at least to start? Haichen Shen (00:39:23): Yeah. We expect there to be more rollers than sequencers, because for the sequencer we just need a handful, a few sequencers, to be censorship resistant. But for the rollers, you need to keep up, depending on how fast they can generate proofs and what TPS you want to support. So, as mentioned before, we have multiple blocks that can be generated in parallel, which means you need more rollers to generate the proofs for those blocks. So we expect there would be a ratio we want to keep and maintain, for example something like 10 to 1, depending on the proof generation cost. So we would keep some ratio, to have more rollers generating proofs in parallel. Tarun (00:40:09): How are you thinking about block size as a function of the number of rollers that are registered? Do you view this as a fixed block size type of thing, or do you view it as elastic as a function of the capacity of the number of rollers? Haichen Shen (00:40:23): I think it's still a fixed size block, so it isn't affected by how many rollers there are.
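The roller-to-sequencer ratio Haichen mentions falls out of simple throughput arithmetic: if proving a block takes much longer than producing one, you need proportionally more rollers working in parallel to keep up. A back-of-the-envelope sketch with invented timings:

```rust
/// Minimum number of rollers needed so parallel proving keeps pace
/// with block production: ceiling division of proving time by block
/// time. Both timings here are invented for illustration.
fn rollers_needed(proof_secs: u64, block_secs: u64) -> u64 {
    assert!(block_secs > 0);
    (proof_secs + block_secs - 1) / block_secs
}

fn main() {
    // e.g. if a block takes 30 s to prove but is produced every 3 s,
    // you need 10 rollers proving in parallel, a 10-to-1 style ratio
    // like the one mentioned above.
    assert_eq!(rollers_needed(30, 3), 10);
    println!("need at least {} rollers", rollers_needed(30, 3));
}
```

Faster provers (GPU, FPGA, ASIC) shrink `proof_secs` directly, which is one way the hardware acceleration discussed later feeds back into how many rollers the network needs.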
So usually the block size is limited by how many steps you can include in the circuit, and you can also translate that into some gas limit you have for the layer 2 block. Ye Zhang (00:40:41): Yeah, I can add slightly more, because we are using an EVM circuit. So the largest block size you can support actually depends on the transactions. For example, if you've got some very complicated, maybe flash loan, transaction, it needs to involve several contract loads and a lot of other opcodes, and this will make your execution trace longer. So it basically depends on how complicated your transactions are and how many steps need to be included in the EVM, and that determines the largest capacity of a block, instead of just a number of transactions, because different transactions differ a lot. Yeah. It depends on how many steps you have for this execution on the EVM. Tarun (00:41:23): Right. So the reason I was asking this is I was reading the StarkWare fee model, which has sort of a thing that, you know, it's not as aggressive as EIP-1559, but it does adjust the block size to the number of steps. And they have a notion of step, which is not as clean as, say, the EVM notion of a transaction, as far as I can tell. But I was just more curious, like, in Ethereum mainnet, right, we do have variable size blocks now, in the sense that 1559 does sort of force that. So I'm kind of curious, does that impact how provers have to interact with these systems? Because I could imagine something where, if the block size goes up, then some provers stop proving because their relative economic share of the network went down.
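Ye's point, that block capacity is bounded by total execution-trace steps fitting in the fixed circuit rather than by transaction count, can be sketched as a packing loop. The step counts and circuit capacity below are invented numbers:

```rust
/// Greedily pack transactions into a block until the fixed circuit
/// capacity (total execution-trace steps) would be exceeded.
/// Returns how many transactions made it in.
fn fill_block(tx_steps: &[u64], circuit_capacity: u64) -> usize {
    let mut used = 0u64;
    let mut included = 0usize;
    for &steps in tx_steps {
        if used + steps > circuit_capacity {
            break; // this transaction's trace no longer fits in the circuit
        }
        used += steps;
        included += 1;
    }
    included
}

fn main() {
    // A complicated flash-loan-style transaction (5_000 steps) crowds
    // out many simple transfers (100 steps each) at the same capacity.
    assert_eq!(fill_block(&[100, 100, 100], 10_000), 3);
    assert_eq!(fill_block(&[5_000, 5_000, 100], 10_000), 2);
    println!("block capacity depends on trace steps, not tx count");
}
```

This mirrors how an L2 gas limit works in practice: gas is the accounting proxy for how many circuit rows a transaction's execution consumes.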
Tarun (00:42:19): If the block size goes down, maybe it encourages more people. The economics of these types of things, I think, is very, you know, as we are seeing with StarkWare, because they've had some interesting free market kind of things happening in maybe the last two weeks. I was just kinda curious, where do you see this evolving? Do you see it as being something where the economics of these things are static, or are the economics changing as a function of usage? Obviously it's far out right now, but I think it will be something that will impact how people think about writing code that they run on Scroll. Haichen Shen (00:42:57): Yeah, actually I think that's a very interesting question. I think that will also involve doing more research on economic models, like how to separate these two roles and then how to keep both roles incentivized. For sure, I think the block size will be dynamic, but note that the circuit size is actually fixed. So you have a fixed circuit size, which determines how many opcodes, how many steps you can fit into the circuit, and that's kind of the limiting factor. But in terms of economic models, I think we're doing more research to see how to keep things incentivized. Like it can keep some base fee, and then separate the base fee and MEV values, for example, to use that to reward the different positions, the different roles. So I think a lot of things are still open today, and there's ongoing research. Tarun (00:43:44): Yeah. I mean, I think it's interesting that, by not being the first rollup, you can learn a lot from what's happening on StarkWare in the free market, which will be kind of interesting. Yeah. Haichen Shen (00:43:58): Yeah, yeah, yeah, totally agree.
We're also learning a lot of things from the existing ones, seeing how to avoid some of the mistakes they made, and trying to make things work smoothly here at Scroll. Tarun (00:44:13): I guess one other thing, you know, we've talked a little bit about how hardware acceleration was sort of a key facet to getting the zkEVM to work, and hardware can mean commodity hardware, like FPGAs or GPUs, or it could mean specialized hardware, like ASICs. How do you view that landscape evolving over, say, the next one to two years? And do you think that when Scroll launches you'll have FPGAs ready or GPUs ready? Which kind of provers do you think will be there on launch and when you're live? Ye Zhang (00:44:47): Yeah, that's a very good question. So basically we are internally developing a very fast GPU solution, because we are a software company: we hire GPU programmers, and we started this research on GPU acceleration around one year ago. So when we go online, we will definitely have our GPU prover ready to provide very strong computation power and support very high TPS. In terms of the relationship between GPUs, ASICs, and FPGAs, I think GPUs will be the first to go to market and the first to be practical to use. And then FPGA can beat GPU, not in terms of speed, but in terms of energy consumption: at the same speedup, an FPGA can be more energy efficient. But I think it takes a lot of dedicated work to make an FPGA really better than a GPU, because we have previous experience working with FPGAs, ASICs, and GPUs. Ye Zhang (00:45:42): So we are pretty clear that, although you can build some customized units using FPGA and ASIC, GPU is still strong, especially our version: we can be ten times faster than the best open source GPU implementation.
So it's significantly faster, and it's very hard to build a new version which can beat our current GPU version. But I know there are some efforts to build FPGA data centers, and they can be more energy efficient. And I think FPGA is also a very important milestone before you have ASICs. Because there are some specialized operations, for example multi-scalar multiplication over a curve; those primitives are highly parallelizable and can be more suitable for an ASIC. So I think maybe within two or three years, we can have an ASIC which can beat both FPGA and GPU. Another concern behind this is that zero knowledge proof algorithms evolve: within one year, you might find something which is more efficient. So I think ASICs basically come at a later stage, when the algorithms become more and more stable. Yeah. Tarun (00:46:48): Yeah. I mean, I guess that's sort of always been the main question. So, having worked in building ASICs, it takes you like two years to tape out; by the time you've verified your design, maybe a new processor spec has come out. So I guess my question is, in a space where we can't even agree on the names of the precise architecture, where we instead keep adding new suffixes to Plonk every two months or whatever, how much do you think about the inflexibility of hardware versus the flexibility of design? Like, how do you look at that trade off? Ye Zhang (00:47:27): Yeah, that's a very, very good question, actually. So basically, currently most ZK algorithms, whether you are Plonk or even Groth16, or any other algorithm, build on top of similar primitives. For example, the most commonly adopted ones: multi-scalar multiplication over any curve, and also FFTs over a large field. And I think those two are the most important primitives.
If you get those components working really fast, then you can accelerate most ZK algorithms, but different algorithms still differ a lot. It also depends on the circuit arithmetization you are using, because currently for the zkEVM, although we already have the kernels for MSM and FFT implemented, the workflow needs to be tuned a lot. For example, we need like 1000 FFTs for our current zkEVM, and the cooperation between the CPU and the FPGA became a new bottleneck. Ye Zhang (00:48:26): So I think, as your algorithm evolves, the basic primitives won't change, but it depends on your circuit shape, like how many custom gates you are using, and it will influence the workflow, and it might influence your overall efficiency, because some of the other stuff can become the new bottleneck. And I think those two primitives are commonly used in SNARKs. In FRI there might be some small differences, because they can avoid group operations. That's also the reason why STARKs are considered to be faster than SNARKs: they don't need any curve, they don't need any group operation, they only need FFTs over a smaller field, and hashes. But, you know, as the scale increases, it's also very hard to accelerate FFTs at very large scales. So it's still a huge tradeoff, but the underlying primitives don't change too much as long as your bit width is fixed. And, yeah, Tarun (00:49:20): I will say one funny thing about StarkWare, right, is that the programming language still makes you have to think about field elements, and there's no strings and like ..., right? There's actually some funny things where you went to all this trouble to avoid group operations, but then you actually still force a developer, who may not totally understand what that means, to have to reason about some of those things more directly.
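The MSM primitive that keeps coming up computes a sum of scalar multiples, roughly sum over i of k_i times G_i. Real provers do this over elliptic-curve points with bucket algorithms like Pippenger's on GPUs; in the sketch below, a toy additive group of integers mod a prime stands in for the curve, purely so the double-and-add pattern is visible in self-contained code:

```rust
// Toy "group": integers mod P under addition, standing in for curve
// points. P is an arbitrary illustrative prime (2^31 - 1).
const P: u64 = 2_147_483_647;

/// Scalar multiplication k * point by double-and-add, the same
/// bit-by-bit pattern used for elliptic-curve points.
fn scalar_mul(mut point: u64, mut k: u64) -> u64 {
    let mut acc = 0u64;
    while k > 0 {
        if k & 1 == 1 {
            acc = (acc + point) % P; // "add" when the bit is set
        }
        point = (point * 2) % P;     // "double" each step
        k >>= 1;
    }
    acc
}

/// Multi-scalar multiplication: sum of k_i * G_i. Each term is
/// independent, which is why MSM parallelizes so well on GPUs/ASICs.
fn msm(scalars: &[u64], points: &[u64]) -> u64 {
    scalars
        .iter()
        .zip(points)
        .fold(0u64, |acc, (&k, &g)| (acc + scalar_mul(g, k)) % P)
}

fn main() {
    // 2*5 + 3*7 = 31 in the toy group.
    assert_eq!(msm(&[2, 3], &[5, 7]), 31);
    println!("msm([2,3],[5,7]) = {}", msm(&[2, 3], &[5, 7]));
}
```

The independence of the terms is the whole acceleration story: each scalar multiplication can run on its own core, with only the final accumulation needing coordination.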
And I think that's something that the zkEVM hopefully avoids, in the sense that hopefully no one has to actually understand KZG to write, like, Uniswap. Ye Zhang (00:49:56): Yeah. Yeah, definitely. That's our priority, to be the most developer friendly ZK rollup Anna Rose (00:50:02): I have one more question about what you were just saying with the hardware acceleration. You're talking about optimizations, and this is actually about EVM speed and gas fees on the rollup. I'm just curious, when you talk about optimizing those circuits, are you talking about making anything in the actual use of the L2 faster? Or is it literally how quickly one could prove, how cheaply one could prove, how small the proof is? I'm just curious if it has any impact on the actual running of the network. Ye Zhang (00:50:37): Yeah, I think that's a good question. So basically, when I'm talking about hardware acceleration: firstly, if you don't have good hardware support, you literally can't generate the proof in time. For example, the zkEVM might take you five hours to generate the proof. So that's the first thing; it's important to make that practical, and we have already made it practical enough. And second, talking about the cost: I think, with the ETH price still very high, the data availability cost is still larger than the proving cost. The cost of storing your raw transaction data on chain is still higher than the proving cost. So it's more about the energy consumption for generating the proof, and whether you generate it in a more efficient way and with less cost; but from the fee perspective, the fee is not dominated by this proving cost.
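The cost comparison Ye describes, data availability dominating proving, can be made concrete with calldata arithmetic. Ethereum charges 16 gas per non-zero calldata byte; every other number below is a made-up placeholder, including the per-transaction record size and the gas-equivalent proving cost:

```rust
/// Per-transaction data availability cost: calldata priced at
/// 16 gas per byte (Ethereum's non-zero-byte calldata price).
fn da_gas_per_tx(tx_bytes: u64) -> u64 {
    tx_bytes * 16
}

fn main() {
    let tx_bytes = 112;          // illustrative size of a rollup tx record
    let proving_gas_equiv = 500; // hypothetical per-tx proving cost,
                                 // expressed in gas-equivalent terms
    let da = da_gas_per_tx(tx_bytes);
    // With these placeholder numbers, data availability dominates the
    // per-transaction fee, matching the description above.
    assert!(da > proving_gas_equiv);
    println!("DA {da} gas/tx vs proving ~{proving_gas_equiv} gas-equivalent/tx");
}
```

This is also why the next remark about danksharding matters: cheaper data availability on L1 would shift which component dominates the rollup fee.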
And it might change after, you know, the danksharding and EIP implementations on Ethereum, but currently, yeah. Anna Rose (00:51:35): Mm. I wanna actually bring it to a higher level. You had mentioned that the ZK in zkEVM is not about privacy; it's almost like compressing, it's the rollup. But do you have any ideas for types of projects that would do best on something like a zkEVM versus something else? Either the regular EVM base chain, or another EVM compatible chain that's been bridged to Ethereum? Haichen Shen (00:52:03): Yeah. So first of all, all of the existing applications should be compatible with the zkEVM. And then I think, if you compare with some other layer 1 that's EVM compatible, the bridge becomes kind of one of the loopholes in your security guarantee; sometimes it can be easily attacked. But with the zkEVM, the bridge can be guaranteed with the validity proof. So that's a lot more secure than some other layer 1s. And then I think those applications that have high stakes and require that security, and also want high frequency, like low gas fees with very frequent transactions and a lot of users, those applications will be very suitable for the zkEVM. Ye Zhang (00:52:52): I think another direction we are also actively researching and looking into is that, as I mentioned, we definitely want to add some features to the zkEVM. For example, we can enable some precompiles or some new primitives, especially on the zkEVM. For example, we can build some specialized circuit for some hash function, so that you can do that kind of crypto optimization on our zkEVM in a cheap way. So that's something we can definitely enable. And also, as you mentioned, if you care about privacy.
So, but, you know, from our perspective, layer 2 is built for scalability, for reducing the congestion problem on Ethereum layer 1, because you can't get both. Especially under the account based model, it's very hard for you to get privacy; even some other companies doing privacy are building under a UTXO model. So it's very hard to build privacy under the account based model. So I think privacy projects, like Aztec or some other techniques, may be specialized apps, maybe some smaller rollups, and we can verify their proofs in the zkEVM. So they can be some specialized rollups instead of a general purpose one. And by adding more crypto primitive support to the zkEVM, we can support that proof verification much more cheaply. And, yeah, things like that. Anna Rose (00:54:11): Do you ever imagine being used where part of a Dapp is on the main chain and the other part is on the zkEVM? Something like, I've heard sort of the example of maybe the governance module is on the rollup, and something that happens rarely could still be on mainnet, but they're somehow connected. Do you imagine something like that? Ye Zhang (00:54:30): I think for now we haven't thought in that direction. It basically depends: if you are doing something which is more computationally heavy, you can move that off chain and use a cheaper proof instead. But the extra cost, if you are running a zkVM or zkEVM model, might be higher verification costs. So it depends on how the cost of your original computation compares with your proof verification cost. So it's a trade off, and yeah, it's an interesting interaction, but we haven't looked into that yet. Anna Rose (00:55:00): Yeah. This actually just brings up another question. So I've been doing a series on things like bridges and interoperability zones and all that.
And one of the things is this idea of general message passing. So I always think of a rollup very focused on, you know, data availability, and then it moves tokens. But I don't know, actually, do you have messages also going back and forth between the L2 and, like, mainnet? Like, are you changing state, like an account? Can you basically send a message through your rollup bridge? Haichen Shen (00:55:33): Yeah, I think that's a good question. So I think you can still send a message through some specialized contracts, and then that can be forwarded to layer 1. I think that's totally possible to do. Anna Rose (00:55:43): Can it be executed there? Haichen Shen (00:55:44): Yeah, I think, as long as you provide extra functionality in the bridge, like not only sending some tokens along, you can also say, I want to invoke some smart contract there on layer 1. Or vice versa: on layer 1, you can send some message and tokens along with invoking a certain contract on layer 2. So you can have this interoperability, I think, between layer 1 and layer 2. Anna Rose (00:56:08): Yeah. That's just something I feel like at least I haven't explored enough. Tarun (00:56:12): I mean, I do think one thing that gets changed with message passing, if you have an ability to standardize the messages and to fix sized proofs, is that people could generate the messages on, say, Scroll for some computation they wanna do, and then send it elsewhere as a fixed size packet. Right? So maybe you do a computation on Scroll, you do some weird flash loan, you have some kind of complicated thing, you generate the proof, and then you relay the proof to all the other layer 2s via some kind of generic message passing layer. That's actually a fuller model than something like Wormhole or LayerZero or Nomad, right?
Because they can't actually give you any guarantees on the calculation, unlike a ZK bridge. Right? They can only give you, right now at least, simple transactions, and you do rely on the relayer's economic incentives Anna Rose (00:57:15): In that, though, there's a new agent or new operator you just mentioned, like a between-L2 message passing system, right? Not going through the L1? Tarun (00:57:27): I think what you could do is you have the between-L2 passing that falls back to the L1; I guess that's closest to Nomad in design land right now. Okay. But it doesn't require staking on all the chains, right? The problem with something like Wormhole is your capital efficiency is kind of low: you have to stake on every chain, and the amount of stake at every place determines sort of the security. But I think the idea is, if you have a single ZK chain that does the computation and generates the proof, then you only have to send that proof, and the economic cost of that proof is a lot lower than, hey, I have to move coins on the other place. Right? So there will be a lot of weird capital efficiency trade offs. Tarun (00:58:11): I think, when you start thinking about ZK L2 stuff versus optimistic, or versus, like, Wormhole-style things where you have synthetic assets on both sides. Because the synthetic asset is not free: it does require a bunch of capital to be backing it implicitly. Not that there's not a place for all of those, right? They can get to production faster. But I think it will be interesting in the long run to see how ZKPs change how much capital you actually need to have on every chain. Right now it's like quadratic in the number of chains, right? You basically need to have capital on every chain, and then the minimum amount depends on every pair of chains.
And in some ways, hopefully a ZK thing lets that be more efficient for message passing. Sorry, that was just my little rant about that. That wasn't anything that anyone is doing right now. Haichen Shen (00:59:02): No, I think that's a very interesting topic to discuss. I think in the future there will be some interoperability even between two ZK rollups, like two ZK layer 2s. So I can directly allocate the funds in different places and then execute at a different place, I guess. Tarun (00:59:18): So I guess one thing that I think is kind of cool about ZKPs is that you get almost like an interoperability standard built in, right? You only have to trust that the verifier is written correctly, which is a lot lower of a workload than having to trust that the VM translation is written correctly. Right? You can argue that the Optimism bug that was found, and then the Wormhole bug, both have this problem that there's two VMs that don't exactly agree, right? They're not bitwise identical. And the translation layer was where the bug of the synthetic thing happened. But with the ZK thing, as long as the verifier contract is correct, you have basically perfect interoperability. And so I think that's just a lower surface for bugs and errors. You know, I think once the ZK rollups are live, it'll be way easier to do this type of stuff, because you'll have all these networks of provers who are already validating this chain, and you can basically be like, hey, can you generate this proof that I can then relay somewhere else? And it doesn't matter who relays it. Anna Rose (01:00:20): That's really interesting.
I guess, going back to Scroll though, do you have ideas of, I mean, you're still building, we should find out actually where you're at in this build, but I think this kind of brings us to the question of: at what stage is the project? What's your timeline for actually having us be able to play with a zkEVM? Haichen Shen (01:00:39): So we have designed two phases for the testnet, and we're quite close to our phase one testnet; we are almost 70% done. Right now we can already support some ERC-20 transfers on the zkEVM, and everything mostly works as expected. So we'll see our first testnet launch in one to two months. In phase one, we'll probably support some limited opcodes and some of the transactions, and then phase two will be the fully compatible zkEVM testnet, so that every smart contract that's able to run on Ethereum should be able to run on the Scroll testnet. Anna Rose (01:01:23): Cool. Tarun (01:01:24): What applications, you know, obviously it's early in this space, and who knows what the real applications will be; in 2016 Ethereum, you couldn't have predicted most of the things that exist now. What are the kind of applications you're most excited about being enabled by Scroll, that you've kind of heard of or thought through? Haichen Shen (01:01:45): I think consumer facing applications, and some social applications, closer to more users. Not only the DeFi applications that are targeted at financial use cases, but more the consumer facing applications, those social applications.
There are very interesting use cases there, and they also enlarge the whole user space for the blockchain, for cryptocurrencies, bringing more new customers to the blockchain and the crypto world. Anna Rose (01:02:17): Mm, cool. Ye Zhang (01:02:18): Yeah. I think from my side, because I'm a ZK guy, I want to see more ZK applications, especially on our platform. With cheaper proof verification, maybe we can support more stuff, to support those interesting ZK applications. Yeah. Anna Rose (01:02:33): Awesome. Cool. So I wanna say thank you to both of you for coming on the show and sharing with us the journey to Scroll and your thoughts about the zkEVM, what it could enable, and how it's built. Yeah, thanks a lot. Ye Zhang (01:02:46): Yeah. Thanks for hosting. Haichen Shen (01:02:48): Yeah, thank you Anna and Tarun for hosting us and having us here; it's a pleasure to talk to you guys on the ZK podcast. Tarun (01:02:55): Looking forward to the next one, where we learn more about how Scroll is working live. Anna Rose (01:03:01): Yeah. I wanna say a big thank you to the podcast editor Henrik, podcast producer Tanya, thanks to Chris for research, and to our listeners: thanks for listening.