Anna (00:05): Welcome to Zero Knowledge. I'm your host Anna Rose. In this podcast, we will be exploring the latest in zero knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online. Anna (00:27): This week, I chat with Alex Gluchowski, CEO at Matter Labs and co-creator of zkSync. We chat about the history and evolution of the zkSync project, the upcoming zkSync 2.0, which includes zkEVM, an EVM-based programming model and composability architecture, as well as a new L2 scaling technique called zkPorter. Now, before we start in, I want to let you know about the ZK Jobs Fair, which is happening on April 22nd. So this is an event that I'm actually running. It's part of the zkSessions series. It's the third monthly event that I'm putting together, but unlike the previous events, this is not a conference. It really is a job fair, all online. So I've learned that many of the top projects and companies working in the zk community are looking to hire talented people onto their teams. And at the same time, I've seen that the zero knowledge community keeps growing: there's more students or folks just coming out of school, as well as employees in other areas of blockchain tech, or other traditional tech companies, who are looking to get their feet wet in the zk space. Anna (01:29): So I decided to put together a social event that brings these two groups together, and that is the ZK Jobs Fair. So this is happening, as mentioned, on April 22nd. It follows the third day of the ZK Proofs Workshop, which is a full-fledged conference put on by our friends at zkproof.org. You can expect the job fair to be a social gathering on an interactive online platform. I'm still deciding on which platform to use, but have in mind something like Gather.town or Topia or Sophya, or something in that direction. But whatever the choice, I'll be sure to throw in some fun games or activities to try out while you're there. So if you are looking for a new opportunity in the space, or you're just curious to meet some of the folks behind the projects you know and love, do apply to be whitelisted for this event. I've added the application form in the show notes, and I'll shortly be announcing the sponsoring projects who will be present at these events, ready to meet you. On another note, I want to say a big thank you to this week's sponsor, Mina Protocol. Mina is the world's lightest blockchain. It's an L1 crypto protocol that replaces the traditional blockchain with a zero knowledge proof, ensuring a super light and constant-sized chain that allows participants to quickly sync and verify the network. Mina's mainnet has just gone live, offering users a platform to build a private gateway between the real world and crypto. Mina also has an active demo in partnership with Teller Finance for "End-to-End Data Privacy," showing how you could use Mina to access your credit score and prove that you meet credit threshold requirements for on-chain services without ever disclosing your actual score. Very interesting use case right there. So if you're interested, visit Minaprotocol.com to find out more about how you can get involved and join the community. So thank you again, Mina Protocol. Now here is my conversation with Alex Gluchowski, all about zkSync. Anna (03:28): So today I'd like to welcome Alex, the co-founder of Matter Labs and one of the co-creators of zkSync, back to the show. Welcome back, Alex. Alex (03:37): Hi, Anna. Very excited to be here again.
Anna (03:38): Yeah. So this is actually your third time, I think, on the Zero Knowledge podcast. What's cool with Matter Labs and the work that you've been doing with your team, I feel like I was there at the beginning of that journey, like I've seen it now for maybe almost two-three years. How long have you been around? Alex (03:58): So we started building zkSync, I think, two and a half years ago. It was December. Anna (04:05): I'm trying to remember, were you doing hackathon projects before that point? Alex (04:11): Not really. My co-founder was involved in a number of hackathons. Anna (04:14): That's what it was. Alex (04:14): I was doing different things, but then we converged on the same idea from different angles. And actually the first presentation of zkSync was at a hackathon in Singapore. Anna (04:25): Exactly. This is what I'm remembering. I think I was there and I was a judge there. And this is why I feel like I really saw zkSync's history from the very beginning to now, in a way that I maybe haven't seen other projects develop so much. So anyway, it's a project that's dear to my heart. Also, as the Zero Knowledge Validator, we've actually started to do a few small investments, and we have done an investment into zkSync as well. And so this is also an exciting thing. It's a very new endeavor for us and it's one of the first investments we've made. I'm very excited to talk to you today about this project and where it's at. So the first time you came on the show, I think we actually did a general overview of what a zkRollup is. And the second time you came on the show, you introduced zkSync and Redshift. But this time around, what's neat is since that second interview, and since you've introduced zkSync, I feel like a lot of us have actually gotten a chance to properly interact with it because of its inclusion in Gitcoin. So the Gitcoin Grants CLR matching round 9 just ended, but zkSync is something that people are probably more familiar with today. So why don't you tell us a little bit about the history of this project and maybe how it's been going this past year, now that people can actually play with it? Alex (05:52): Sure. So we started working on this because, it was 2.5 years ago, we had just had the CryptoKitties crisis and it was quite obvious that scalability was going to be a big problem in the future, in the context of the mass adoption of crypto, when we expect a lot of people to come and start using Ethereum and crypto in general. We did not anticipate this to happen so fast and to such a great extent as it turned out to. So we realized that zero knowledge proofs offer a tremendous potential to solve this problem; it was the only way to break out of the blockchain trilemma, to actually scale blockchains without giving up decentralization and security to the highest degree. So we started back then with basic R&D on zkRollups, because the technology, the fundamental research, was not mature enough to support generic use cases, zero knowledge proofs were still on the verge of being usable in production. Back then, the most advanced proof system that was available to us was Groth16. And that required a trusted setup ceremony for each circuit, in other words, for each application and for each update of this application, similar to how Zcash did this. And it was clear that that's not very sustainable, and we did not have recursion. So we could only do some very simple applications, very limited in scope, with fixed functionality.
So we went for payments, because we saw payments growing and people in a lot of countries embracing crypto not only as a store of value, but actually as a means of payment already back then. And this is what brought us to the first version of zkSync, which we launched last summer, where we just had payments with a heavy focus on usability. We made a rollup which was very convenient to use. You did not have to pre-register, you could just use your existing Ethereum addresses, you could transfer funds to any Ethereum address, no matter whether it's a normal account or a smart contract. We built a UX-friendly wallet with a lot of nice things and it paid off. We got a lot of integrations recently, starting with Gitcoin. In the first Gitcoin round we did, I think, less than half of contributions went through zkSync. In the latest round, which just ended, 82% of all contributions were through zkSync. Anna (08:39): Yeah. I mean, I think the way they displayed it too, is to say: "if you use zkSync, this is your savings", and the savings are so dramatic now that it makes a lot of sense. It's so funny, cause it is the need that's pushing people into this, it's really solving a problem and not just a cool nice-to-have. I mean, the first time around, I was just like: "Yay! I get to use a zkRollup!", but also save some money. But yeah, this time around, there's no question. Alex (09:11): This is true. And it's very clear to everyone that the need is burning, and not only for payments: it turned out that you need to scale all smart contracts on Ethereum now. And this was clear already a year ago. And, coincidentally, a year ago something happened in zero knowledge proof research which made it possible to dramatically accelerate the adoption of this technology, namely the Plonk proof system was introduced and recursion was invented for Plonk. Recursion is what allows us to build this generic smart contract functionality, which we're very excited to talk about today. We just published an update about the zkSync development roadmap, and we will heavily rely on it to bring EVM-compatible or EVM-portable smart contracts to zkRollup, which is huge news. It's really hard to overstate the importance of this. Anna (10:06): So let's talk about that update and the things that are now planned. And then I want to revisit how Plonk and recursion actually allow for this. But first, why don't we introduce, what are these updates, what's coming up for zkSync? Alex (10:20): So the main news is that we are going to open the public testnet for zkSync 2.0 in May. And zkSync 2.0 is going to be the platform where we support smart contracts. And not just smart contracts, it's going to be EVM-compatible smart contracts. It's not EVM-compatible in the strict scientific sense, because we will recompile your Solidity code into a different virtual machine, which is SNARK-friendly, but it's essentially using the same code. And this is what most people mean when they say "EVM-compatible". So you can take your existing Solidity code, your existing tooling, and make very minimal modifications. And for a lot of contracts, you won't need to do any modifications at all, as long as you don't rely on some specifics of the EVM. It will just work out of the box in the new version, and it will also be composable with other contracts.
So the entire programming model remains exactly as it is now on Ethereum: you've got your contracts, those contracts can have some state, you can update the state, and they can call each other and have atomic, composable transactions across a number of different contracts. Exactly the same way you have now on Ethereum. Anna (11:39): And what is this called exactly? Is it still just called zkSync or is there a term for this EVM-compatible component? Alex (11:49): Technically there is no name yet. We will think of naming, maybe we'll come up with something nice, but let's call it for now zkEVM, because that's going to be easy for people to grasp. It's a virtual machine which is Turing complete. This is something we've been working on for many months now and it's coming to completion. And this virtual machine has a compiler from both Zinc, our existing smart contract language, which is based on Rust, and from Solidity. And we use LLVM to compile and use a lot of optimizations and security features from LLVM. And we use recursion to make it efficient, because with zero knowledge proofs, you can only create efficient circuits if they are specialized, but different smart contracts require different types of operations, to different degrees. So for example, one contract call may have a lot of storage accesses and a different smart contract call might require a lot of hashing. It would be really hard to fit them into a single type of circuit. That's why we're using different circuits for these different operations. And we combine them nicely through the recursion. Anna (13:09): So you kind of do another SNARK of those different SNARKs, or you're just doing one? You're just doing one level of recursion here, you're not doing like multiple recursions or..? Alex (13:21): We're going to do multiple levels of recursion. So we already have recursion live in production for payments. This is why zkSync is currently by far the most affordable rollup for payments. We have the lowest cost across all the rollups. There are some Validium solutions with lower costs, but they don't offer the same security guarantees as rollups. But across all the rollups we're the cheapest, because we use recursion. Anna (13:48): I see. I recently did an episode on languages with Alex Ozdemir. We actually surveyed all the zk languages and Zinc was one of them. But are you moving a bit away from Zinc as your core language and focusing more on the Solidity use, or are you going to really be promoting both of those things? Alex (14:07): I think, in the short term, Solidity will be very exciting to most people, because they can immediately port their existing applications. And this is why we are going to put Solidity in the foreground for now, but we are still continuing development of Zinc, and Zinc will be available from the beginning, from the testnet in May. We think that in the long term, Rust-based languages will win. And we might even proceed from Zinc to native Rust at some point in the future, which is relatively easy for us, because we use LLVM and we can use any language which compiles with an LLVM frontend, such as C++ or Rust or Golang or Python, or something like Haskell. But we think that Rust seems like the best candidate, long-term, as a base for smart contracts. We see this with other platforms embracing Rust, such as Aleo and Polkadot, and NEAR, and Facebook's Diem, previously Libra. So long-term, I think, Rust will win. But short-term, of course, we want to support the existing ecosystem and let everyone migrate seamlessly to zkSync. And this is why Solidity.
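To make the earlier point about specialized circuits combined through recursion concrete, here is a minimal sketch. Everything in it is invented for illustration, none of these names come from the zkSync codebase, and the "proofs" are plain structs standing in for real SNARKs:

```rust
// A hypothetical sketch of "specialized circuits plus recursive
// aggregation". Nothing here is taken from zkSync; the structs stand
// in for real proofs and provers.

#[derive(Debug, Clone, Copy, PartialEq)]
enum CircuitKind {
    StorageAccess, // circuit optimized for state-tree reads and writes
    Hashing,       // circuit optimized for hash-heavy contract calls
}

#[derive(Debug)]
struct Proof {
    covers_ops: usize, // how many operations this proof attests to
}

// Stand-in for running a specialized SNARK prover over a batch of
// operations that all need the same kind of circuit.
fn prove_batch(_kind: CircuitKind, ops_in_batch: usize) -> Proof {
    Proof { covers_ops: ops_in_batch }
}

// Stand-in for the aggregation circuit: it "verifies" each child proof
// and emits one proof covering everything they covered.
fn aggregate(children: &[Proof]) -> Proof {
    Proof { covers_ops: children.iter().map(|p| p.covers_ops).sum() }
}

fn main() {
    // One block mixes calls with very different operation profiles, so
    // each profile goes to the circuit that proves it cheaply...
    let storage_proof = prove_batch(CircuitKind::StorageAccess, 120);
    let hashing_proof = prove_batch(CircuitKind::Hashing, 80);

    // ...and recursion folds them into the single proof that is
    // ultimately verified on L1.
    let block_proof = aggregate(&[storage_proof, hashing_proof]);
    println!("block proof covers {} operations", block_proof.covers_ops);
}
```

Because `aggregate` produces the same shape it consumes, it can be applied again to already-aggregated proofs, which is what the multiple levels of recursion mentioned above would mean in practice.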
Anna (15:19): Got it. Even on day one, when you can choose between using Zinc or Solidity, does Zinc offer some benefit? Is there something that makes the programming better, easier? Is it closer to the zk proofs or something? Alex (15:37): Well, it offers exactly the same functionality. Just the language itself is better suited for building secure applications. You have immutability out of the box, by default. You have a pure functional programming style with no side effects and so on. The syntax is a lot nicer. You need to write less code, and this code is going to be more secure than if you approach it with something like Solidity, which is coming more from the direction of imperative programming. But from the functional standpoint, it's going to offer the same scope of capabilities. Anna (16:18): Got it. So I'm curious. There are some new solutions that are coming out, also using Plonk, such as Aztec. I mean, they're the developers of that concept. Their focus seems to be very much on privacy and I'm wondering, how are they different? And does zkSync have the ability to also provide privacy? Or is this a secondary benefit that you're not as focused on? Alex (16:41): This is something we're not directly focused on, because our main concern is now scalability and compatibility of the existing applications. But you can easily build privacy on top of zkSync, because it comes naturally out of the box with the design we're offering, because we will have an opcode for recursive proof verification. And those proofs do not need to go into Ethereum calldata. And they are very cheap to verify; with all the rollups, the cost of calldata by far outweighs the computational cost of producing zero knowledge proofs. So if you want to build something like Tornado Cash or any other privacy-preserving application on top of zkSync, you will be able to do it the same way you do it now on Ethereum, just a lot, a lot cheaper. For a transaction in Tornado Cash, you have to pay something like 100 times the cost of a normal L1 transaction, or maybe 20 times. On zkSync it's going to be roughly as expensive as a normal zkSync transaction. So it's not coming in the base layer of the protocol, but as, like, a layer 3, you can easily build anything there. Anna (17:58): Okay. In our last episode, we actually did talk about this idea of this rollup existing on its own with validators and having somewhat of its own ecosystem. And I know that in that conversation, we did have quite a bit of conversation about data availability. And I'm curious here, what you've described with, let's call it zkEVM in zkSync, if actions are happening in there and they're not reporting back to Ethereum, is that then fully a standalone blockchain of its own with validators writing to a chain? Are they acting as the connector to the main chain, or are they actually doing something else in this case? Alex (18:43): I think it's similar to all other rollups. It's fair to say that all rollups are standalone blockchains that are rooted in Ethereum and that use the Ethereum network to broadcast, to propagate and make data available. So in this sense, yes, it's a separate chain with a separate state, but it completely relies on Ethereum for its security. So as with any zkRollup, the protocol offers exactly the same security guarantees as layer 1, as mainnet, because you can access the data. But you have to go and fetch the data from archive nodes, download the transaction inputs, and then reconstruct the state from there, if you don't trust the validators.
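As a rough illustration of the exit path Alex describes, here is a hypothetical sketch. The operation types and data layout are invented for this example and do not reflect zkSync's actual calldata format; the point is only that every input needed to rebuild the L2 state is published on L1, so anyone can replay it locally without trusting a validator:

```rust
use std::collections::HashMap;

// Hypothetical sketch of trustless state reconstruction for a rollup.
// All names are invented for illustration.

type Address = &'static str;

enum L2Op {
    Deposit { to: Address, amount: u64 },
    Transfer { from: Address, to: Address, amount: u64 },
}

// One rollup batch, as published to L1 calldata.
struct Batch {
    ops: Vec<L2Op>,
}

// Replay every published batch from genesis to recover all balances.
// A real implementation would also check each step against the state
// roots committed on L1 and reject invalid transitions.
fn reconstruct_state(batches: &[Batch]) -> HashMap<Address, u64> {
    let mut balances: HashMap<Address, u64> = HashMap::new();
    for batch in batches {
        for op in &batch.ops {
            match op {
                L2Op::Deposit { to, amount } => {
                    *balances.entry(*to).or_insert(0) += *amount;
                }
                L2Op::Transfer { from, to, amount } => {
                    *balances.entry(*from).or_insert(0) -= *amount;
                    *balances.entry(*to).or_insert(0) += *amount;
                }
            }
        }
    }
    balances
}

fn main() {
    let batches = vec![Batch {
        ops: vec![
            L2Op::Deposit { to: "alice", amount: 100 },
            L2Op::Transfer { from: "alice", to: "bob", amount: 40 },
        ],
    }];
    // Prints balances recovered purely from the published data.
    println!("{:?}", reconstruct_state(&batches));
}
```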
Anna (19:29): Do you have, in a way, your own consensus mechanism happening there anyway? Or are the validators doing something more passive than what we see in normal POS systems? Alex (19:39): So with zkRollups, and with rollups in general, you strictly do not require a consensus mechanism, but you can build one. You can have one, if you want. Anna (19:50): This is interesting to me. This is exactly the question I have. I always understood zkRollups as not needing the consensus mechanism. That was the thing that the L1 provided, but now I'm hearing something else. And this is what I want to explore a little bit. This sounds like an evolution of thinking around these architectures. Alex (20:09): There was this idea of progressive decentralization, that you should start with the minimum decentralization required for the application, then proceed gradually, because it will give you faster time to market. And this is exactly what we're doing. So we're starting with a single sequencer or a single validator. And then we're going to introduce a decentralized consensus to avoid any potential for censorship and to have the system fully controlled and owned by the community, and not by any single party. We don't want to be a single party that provides the proofs and has all the responsibility and is subject to some political pressure. We want it to be owned by users. So this is coming for zkSync. Anna (20:52): Got it. And right now, the zkSync that we've used with Gitcoin, currently there's a single validator, and that's you guys? Alex (20:58): This is true. Anna (20:58): Okay. I mean, I know that other constructions like Hermez, they also have a way to decentralize, in a way, the connection point between the L2 and the L1. But how would you compare them? How does the validator community or consensus community of zkSync compare to something like Hermez? Alex (21:18): As far as I'm aware of their design, it's still a centralized sequencer, but you can decide on layer 1 who is going to be the centralized sequencer for the next epoch; throughout the epoch, it's the same one. What we mean by the consensus that we are building is that every block is decided upon by the consensus of validators, so that you don't have a potential for denial of service by any single validator. So in the Hermez design, if the validator who gets the next slot, who is to produce blocks throughout the next epoch, just goes offline, then you've lost that slot. The entire system will stop and wait for the next slot to arrive. But with the design that we're building, if some of the validators are faulty, it's going to be Byzantine consensus anyway, so it will still continue. As long as you have the majority of the validators live and functional, you will just move on. And this is important. If we're talking about a network which secures operations for millions of users and handles thousands of transactions per second, you can't really afford a downtime of even some minutes. Anna (22:36): Also something we talked about in the last episode was about data availability on the main chain. Is that still the same setup that you have today with this evolution, or has that changed?
Alex (22:48): This is a very, very exciting part, which I'm really happy to talk about. We're publishing a post; by the time the episode is live, the details about this new construct will be out. And indeed, for zkSync 2.0, we will start with the architecture called zkPorter. So with the zkPorter approach, users will have an option to have their accounts on the zkRollup side or the zkPorter side. And the zkRollup users will have all the benefits, all the upsides and downsides of zkRollups. So they will pay normal zkRollup transaction fees, and they will enjoy the full security guarantees of L1. On the zkPorter side, if you choose that account type, you will have slightly degraded security guarantees, though still better than optimistic rollups, and we can talk about that, but you will pay minuscule prices. These will not be coupled to Ethereum gas prices, because the data availability is going to be stored off-chain. But off-chain it is secured by a consensus of many validators, broadly decentralized. And the interesting thing is that both account types will remain fully transparent to each other and seamlessly interoperable. So for example, imagine that you have a Uniswap on a zkRollup account, so it's fully secured, just like the main one, and all the big lending protocols have their accounts also on the zkRollup side, but then you have millions of users on the Porter side, who can interact with this Uniswap just the way they would normally, just making a simple click in MetaMask, and they will only have to pay these very small transaction fees. So this is what zkPorter gives you. Anna (24:47): So there's these two versions, and you can opt in or you can choose which one you're going to do, but the zkRollup, does it still have the data availability setup that we talked about in that last episode? And by the way, everyone should check that out, potentially, if they want to, I keep referring to it. But there you had explained how it was written onto L1. Is that still the case for the zkRollup version? Alex (25:13): Exactly. So for each update of a zkRollup account, we will have to publish the final state of the account on the Ethereum network. So anybody can check that. And if something happens to the validators, you have full guarantees that you can retrieve this money: you can reconstruct your state and then provide the proof of ownership and fetch the funds from this account. Anna (25:37): Got it. And with the zkPorter though, it's off-chain consensus. And I guess that consensus actually determines state and all of the data lives there. Alex (25:46): Well, it's independent. So we'll have a separate consensus for the state, for the blocks that will go to mainnet, or for all our blocks in general. But this data consensus is only securing data. And this is why the threshold of entry is going to be very low. You will be able to participate in this consensus with a simple laptop. I mean, your participation will be meaningful, you will be making some money from providing this data availability to the network. And we will be able to collect signatures from 10,000 validators for each block. And it doesn't have to be synchronous. We don't have to collect the signatures within three seconds. It can take time, it can be 15 seconds, half a minute, because the blocks that go on mainnet are asynchronous anyway, they still have to wait for the zero knowledge proofs to be generated, which is still a few minutes, probably. So we can wait for all the signatures to be collected. Then we can set the threshold very high. So we can have a quorum of a super majority of the staked token holders for every block to go on mainnet.
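To make that quorum rule concrete, here is a minimal sketch of the stake-weighted check such a block might have to pass before going to mainnet. The names and structure are invented for illustration, not taken from the actual zkPorter design, and real signature verification (e.g., BLS aggregation) is elided:

```rust
// Hypothetical sketch of a stake-weighted data-availability quorum
// check. All names are invented; cryptographic verification is elided.

struct Validator {
    id: u32,
    stake: u64,
}

// A validator's attestation that it has received and stored the data
// behind a block.
struct DataSignature {
    validator_id: u32,
}

// A block may go to mainnet only if the signers' combined stake clears
// the quorum fraction num/den, e.g. a 2/3 supermajority.
fn has_quorum(validators: &[Validator], sigs: &[DataSignature], num: u64, den: u64) -> bool {
    let total: u64 = validators.iter().map(|v| v.stake).sum();
    let signed: u64 = validators
        .iter()
        .filter(|v| sigs.iter().any(|s| s.validator_id == v.id))
        .map(|v| v.stake)
        .sum();
    // signed / total >= num / den, compared without floating point.
    signed * den >= total * num
}

fn main() {
    let validators = vec![
        Validator { id: 1, stake: 60 },
        Validator { id: 2, stake: 30 },
        Validator { id: 3, stake: 10 },
    ];
    // Collection is asynchronous, so the threshold can be set very
    // high; here validators 1 and 2 (90% of stake) have signed.
    let sigs = vec![
        DataSignature { validator_id: 1 },
        DataSignature { validator_id: 2 },
    ];
    assert!(has_quorum(&validators, &sigs, 2, 3));
    println!("quorum reached, the block may go to mainnet");
}
```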
It's important to know that all transactions concerning zkPorter accounts are secured by the same zero knowledge proof state transition function that we use for zkRollup. In other words, the validators can never steal funds and they can never corrupt the state. So the only bad thing that can happen to zkPorter accounts is that the majority or super majority of the token holders sign some valid state transition, but withhold the data and don't inform everybody about what this new state is. In which case all the accounts in zkPorter will be frozen, including the stake of these token holders. So the only thing they can do is, kind of, take everyone hostage together with their own money, freeze them and destroy their stake. So it would be suicidal. For this system to remain secure and the funds to remain accessible, we don't need an honest majority, not even an honest minority, just a rational minority. So we need a minority of token holders who are sane and don't want their funds to be lost. If you have that, it's going to be fine. Anna (28:18): But when you mentioned Validium, and I know it's not like Validium, but it's a bit like Validium... In there, I know that there's a committee, and here, I guess, it's this validator group. And in both cases, there's this assumption that it would be suicidal, you'd have to wreck yourself for no reason. And many people would have to do that at the same time in order for this to be corrupted. Is that the similar part, except that instead of it being a committee, it's more of a validator community? Alex (28:48): That's exactly the difference. And the problem with the committee is that you have a few validators there, because if you have a lot of them, then how do you manage, how do you govern the participation: for whom are you supposed to wait? For all of them? For half of them? It must be a small group of permissioned participants. And if it's a small group of permissioned participants, all they have at stake is their reputation, which can be a lot, but it's hard to quantify. And they might have some different incentives, they might be subject to some political pressure. Imagine regulators coming to this known group of small validators and telling them: "Now you have to introduce KYC and AML. And people on this list, we just want to confiscate their money." And they will have to comply. They won't be able to confiscate the money immediately, but they can freeze those accounts and then do an update at some point and take this money. This is a very plausible situation. We know that this happens to exchanges. We know that this happens to a lot of centralized projects. Because they are known, they have a clear set of owners, and there are a few of them, and it's really easy to come after them. Now with decentralized data availability, we are in the same situation as we are with Ethereum: there are a lot of nodes. It's really important to have the threshold low and to allow people to participate with minimum hardware requirements. And also the decentralization of the token is going to be very important, because you want the super majority or even simple majority of the token holders to be as broad as possible, not to be controlled by 20 nodes, but maybe by 2000 nodes.
So in this case, it would be very hard to come after all of them. It's a truly decentralized system. And this fact is what makes zkPorter actually more secure than an optimistic rollup. So this is a very interesting and nuanced comparison. And some people might disagree with me, but let me explain. Let's just examine how the security assumptions work in an optimistic rollup and how they work in the zkRollup. So with an optimistic rollup, there is a broad notion that you need 1-of-N honest participants for the rollup to be secure. But this is not the full story. The full story is that you rely on two different assumptions. One is that you have at least one honest validator, who will catch the fraud and then submit the fraud proof to Ethereum. And the second one is that this fraud proof will actually be mined on Ethereum, by miners, that you will be able to overcome any censorship attempts on layer 1. So let's consider both parts separately. With the 1-of-N honest validator assumption, it will practically mean that you just need some small number of nodes to be honest. You won't require all the validators to be honest, but you will realistically have not so many validators, because the requirements of running an optimistic rollup node, which is running at full capacity and providing 200 TPS on Ethereum, will already be very high. You won't be able to participate and validate the optimistic rollup blocks with a simple laptop. You will actually need a well-secured, very powerful machine running somewhere in the cloud or on your premises, with a lot of memory, at least, because right now the Ethereum throughput is limited by the capacity of database access: you have to do a lot of random access reads in the database to reconstruct the Merkle proofs and verify all storage accesses. And you will have to do the same thing. So to accelerate this and go beyond the limits of Ethereum, you need, at the least, a lot of memory. Anna (32:59): I mean, this to me is something I'm hopefully going to get a chance to talk about with a group who's building an optimistic rollup soon. So I can actually explore this even further there. But just from your understanding, are they running just a full Ethereum node and then having an archival node? Or are they running multiple nodes? I'm confused as to why there would be such high requirements there. I always got the impression that zk proof generation computation would be the heavier thing than a fraud proof. Alex (33:30): The zk proof generation requires a lot of resources, but it can just be run in parallel. Whereas to run a sequencer, no matter whether for an optimistic rollup or for a zk rollup, you have to run what essentially is a normal Ethereum node. So you have to execute all the transactions sequentially. And if you go higher than the threshold which is currently set for Ethereum, let's say 20 transactions per second, if you go towards 200, 2000 transactions per second, you need fundamentally faster hardware to process those transactions. And the bottleneck there is storage access. You just need really fast storage. Probably, you need to store all of the blockchain state, in this case the optimistic rollup state, which is going to exceed the Ethereum state by far, since it's a scaling solution, otherwise it doesn't make sense. You have to keep all of that, all these gigabytes of data, terabytes of data, in memory, not on the SSD. Anna (34:35): In the zk rollup context, you're just writing... Alex (34:39): It's the same thing.
The validator is required to have a very high, very powerful hardware profile, but not the validators of zkPorter: the validators of zkPorter only need to store blocks of data. This is a lot easier. Anna (34:58): So the optimistic rollup memory requirements of a sequencer are comparable to the verifier of a zk rollup as well? But the zkPorter, using a sort of Validium approach, I know it's not, but like in that direction, would require a lot less. Is that correct? Alex (35:16): Yes. This is correct. Anna (35:19): Would you say though... The optimistic rollup versus vanilla zk rollup, is it actually the same amount of memory and computational power needed to run those nodes? Alex (35:30): So the sequencers are almost identical. Then on the zk rollup side, we will need to produce the proofs for the blocks that have been produced by the sequencer. And this can be done in a completely untrusted cloud, where you can provision a lot of instances on demand. So anybody can do this. You just need an account with different cloud providers. And eventually it's also going to be decentralized, but for now it's a solved problem. You just go and create a hundred nodes, they produce the proofs, you collect them together, and then you submit this proof. But the sequencers are going to be essentially the same. But to come back to the story, if your sequencer is highly expensive, very powerful, has a lot of requirements, then only really motivated validators will run it. So you as a user just won't be able to run it. So it will boil down essentially to just a number of validators running those nodes. And it's going to be a small number. And all of them would have to be malicious, or faulty, to do anything wrong with the system. But with zkPorter, just the pure number is going to be much higher, even though the assumption is not 1-of-N honest, but a rational minority, less than a super minority of operational validators. But then, with optimistic rollups, comes the second part, where you actually have to submit the fraud proof to Ethereum, and this is far from solved. So we recently described an attack on optimistic rollups that would cost, at the current price, something around a hundred million US dollars, as the nominal cost of this attack. And the actual cost of the attack is going to be a lot less. We will publish a link to this in the show notes. And it cannot be mitigated as long as Ethereum remains on proof of work. So it's a threat which hangs over optimistic rollups, and the assumption is that it's a lot of money with a lot of uncertainty, so nobody will attempt this attack. Well, if you accept that argument, then you're just much better off with zkPorter, where you know exactly how much money someone would have to burn. And the stake might be up to very, very high amounts, like 70% of all the stake in this network. But then you will have super low transaction costs. Anna (38:07): You just hinted at the direction I want to take this interview next. Currently Eth1 is proof of work, and in the last week or so there have been these proposals for how to move it to proof of stake. What does the Eth2 world look like with proof of stake and these zk rollups? And I know it's been discussed quite a lot and people have given various perspectives and ideas of what this could look like, but I'm really curious what it looks like currently to you. What does that future look like? What does even the transition look like?
Alex (38:46): I don't think that a lot of things will change soon. So first of all, we'll have to wait for the merge to happen, which is realistically still a significant time away. Because in Ethereum you have a lot at stake and these things cannot move as fast as some new solutions. And once this happens, we will still have to wait for sharding to arrive, which is probably still some years away, after the transition to proof of stake. And before sharding arrives, essentially nothing will change. We'll have exactly the same system as we have now. Just more eco-friendly, because we don't have to burn all this electricity. But from the point of view of protocol and data availability, nothing's going to change. Once sharding arrives, we'll have to see how exactly it's going to be implemented. And then potentially we'll have just a higher capacity for rollups. Anna (39:40): Okay. But a rollup wouldn't become a shard, would it? Alex (39:51): They are exactly like shards today. Yes. Anna (39:53): Yeah. That's so interesting. Alex (39:56): And they show the limitations of shards. So if you have multiple shards, they increase the overall capacity of the blockchain, but at the cost of making interoperability between different shards harder and less efficient and less convenient. And this is what we would have with rollups, if we have multiple rollups on Ethereum. Well, not exactly, because they still share the same block capacity, but if we had multiple L2 solutions, like multiple zkPorters, for example, then sending funds from one of them to the other would have to go through L1 or through some complicated mechanisms with liquidity providers, which is going to be an impediment, it's not going to be seamlessly composable. So you will only have seamless composability within the single shard. And I think it will remain the same in Eth2, but we'll have to see the final designs. Anna (40:54): Got it. Let's go back a little bit to what you were saying about the fraud proofs on proof of work from Optimistic. If there is proof of stake, does that change a little bit? The problem there, does that make it easier, or maybe easier is the wrong word, but does it make it more effective? Are they able to actually, I don't think it's rollback, but are they able to actually call the fraud proofs, if it's proof of stake? Alex (41:21): They will be able to coordinate the community and say: "Okay, we know exactly who of the validators participated in this attack and we're going to punish them." But you have to agree on these things beforehand. It's really hard to change the rules of the game once the game is on, and we see it with EIP 1559. So let's first get 1559 pushed through and then talk about some collective punishment for validators. Because we see that there's a lot of contention and it's not that easy. We also saw the situation with the Parity Multisig, where it was absolutely unambiguous. It was a mistake that cost a lot of people a lot of funds. And it would have been easy to make an upgrade which solves the problem, if the community really wanted to cooperate and was able to coordinate such things, but we did not see this happen. Potentially, theoretically, from the protocol perspective, what proof of stake gives you is the ability to coordinate, but the coordination still remains with the community, and we're far from seeing this coordination become really actionable immediately. So I'm still skeptical there. And I'm also skeptical that such a coordination should take place.
And I'm very strongly against relying on human coordination for something that should have been built more reliably into the incentive structure of your system. With Ethereum, miners are essentially free to do whatever they want. From the protocol perspective, they can do whatever they want. They did not sign any contracts with society that they're going to do this and that. They just have some incentives to behave in a selfish way, which will benefit everyone, because it's in their own interest to include transactions and collect the fees from those transactions. So you don't want to change that. You want to build systems that are resilient at the fundamental level, bringing everyone's incentives into alignment. And I think we can build these systems. I think we can build something a lot more secure against censorship and front running, using, of course, zero knowledge proofs. So we have these interesting developments, we have these ideas, such as time-lock encryption, where you could encrypt your transaction and submit it to validators in an encrypted envelope. The validators will include it in the block without knowing what the transaction actually contains. And it will only be revealed after the block is finalized and supported by the stake of the validators. And I think this is a much better approach to solve both censorship and front running than to say: "Oh, please guys, don't do front running and don't do censorship, otherwise we're going to punish you, maybe, somehow, with some collective action." Anna (44:24): But is that something that you've actually built, what you just described there? Is that something you've actually built into your validator rules or not? Or is that just an idea for more like an L1 or an Eth2 construction? Alex (44:38): Well, it's an idea that we definitely want to build. So for now we're focused on the basic scalability, but we definitely want to have this built. Anna (44:50): But which part does that live in? What you just described here... Alex (44:52): It has to remain at the consensus part of the protocol. So in our case, it will be at the consensus level, among zkSync validators and users. Eventually L1s will probably embrace this as well, because it's just a great technology. It's the only way we can ensure that, long-term, we don't have any censorship or any front running. Anna (45:12): So one topic I want to touch on is the partnerships. So we mentioned Gitcoin. That's the one that I've actually used, but what's coming up for zkSync in terms of projects that are potentially going to be working with it? Because, I mean, if you look at the list of investors that you recently brought on, there's a lot of DeFi projects there. And I'm curious, how far are you in talks with those folks? Are they just supporting, are they potentially going to be using it? Maybe share what you can on that front. Alex (45:44): Sure. So all the partners who invested in zkSync actually are interested in using zkSync for scaling their projects. We have three types of investors in this strategic round, which we just had. And by the way, we only had strategic investors in this round. The first type is wallets, who want to scale the operations, the transfers and swaps, for their massive user bases. So we are going to do an integration with Argent, Mykey, imToken, with other wallets following. We have a number of new wallets who directly build on zkSync.
And the second category of projects is exchanges and on-ramp/off-ramp platforms, where we want to integrate and allow the on-ramp from fiat and the off-ramp to exchanges to happen directly in L2, bypassing L1, so that users can just purchase some ETH or tokens, go to L2 directly and then start trading and transferring, because it's just not sustainable to let all of them go through deposit and withdrawal mechanisms. It's very expensive on L1. Anna (46:57): How would that actually work? I want you to continue, but I want to just explore that for one second. You would buy ETH, but if you're buying ETH, you're buying ETH on L1 anyway, aren't you? Even if you're buying it on an exchange? Alex (47:08): If you buy it on an exchange, maybe you buy it from someone who already has it in zkSync, or maybe you buy it from the exchange. And then the exchange can batch multiple transactions together. Or maybe the exchange already has some liquidity in zkSync and you just get this liquidity directly from there. And then the exchange can provide more liquidity by doing one L1 transaction from time to time, which is not going to be very expensive. It's not like doing thousands of transactions for thousands of users. Anna (47:34): That makes sense to me. So you don't individually have to do that lockup on L1. Okay, cool. And continue. Sorry I cut you off there. Alex (47:43): Yeah. The third type of projects who invested in zkSync are DeFi projects, and they are simply waiting for our upcoming version 2, where they will have smart contracts. Anna (47:55): Cool. So you're saying they're all looking at it. Are there some official ones, is anyone already working on it? Do they have something? Alex (48:04): Sure. Well, we're building integrations and test kits. And the first testnet application we built was with Curve, it's live on zksync.curve.fi, you can try it out. It's been built with Zinc, with the previous iteration of the Zinc language, but it compiles just as well to the new virtual machine. Anna (48:27): Got it. Could you actually use that though? As an end user, already? Could I go and do a stable coin swap on that? Alex (48:36): Yeah, you can do this, but it's going to happen on testnet. Anna (48:39): Okay. So you can swap testnet stable coins. Okay. Got it. So you can't quite do it with the real thing yet. Alex (48:48): You'll have to wait for our mainnet, which we're targeting for August this year. Anna (48:53): Oh, very cool. It's coming up fast. So I think one thing we didn't get a chance to talk about yet is Matter Labs and the team that's actually building this, because I imagine over the last two and a half years, this has evolved a lot as well. So what is the Matter Labs team? Do you divide yourselves up into specific project teams? Is there a zkPorter team and a zkSync team, or is it all under one umbrella? Alex (49:18): Well, yes, we have different sub-teams. Matter Labs is organized on the principles of ownership and responsibility. And we have a lot of smaller sub-teams working on different things. Right now we are over 20 people and almost all of us are engineers or have an engineering background, and we're actively expanding the team. We are hiring engineers who have experience in Rust. We are hiring junior engineers who are brilliant and bright, and ideally have experience in programming competitions, in Olympiads in informatics, ACM ICPC contests, and so on. And we're also hiring cryptographers, applied cryptographers, researchers.
And we recently also started hiring for non-developer positions. What we need most is someone on the communications side and the business development side. So if you feel interested in what we're doing, and if you're aligned with our values and mission, please talk to us and shoot us a message. Anna (50:28): Cool. How did you build the team? How did you originally find the co-founders and teammates? This is just actually a question because the topic of onboarding folks into zk tech is something that I'm focused on right now. And yeah, I'm just curious how you first did it. Alex (50:45): Well, the first hires are of course very tough. So I found my co-founder at a conference. So I think the greatest environment for finding co-founders is conferences and things like this, where a lot of like-minded people come together and you have people who are interested in something and who are passionate and active enough to do something, and who actually have a lot of affection for it. Anna (51:12): Man, I hope we get to have conferences in person again really soon, because what you're saying rings true. And the online versions don't quite cut it. I think it's not just like: "Oh, it will be fun to travel again." It's necessary for this exact thing. Alex (51:28): Absolutely, you're missing out on serendipity. Being online is very different, because you cannot just randomly run into people, but it should not be too random. It has to have some context, so you need events like this. But that's for co-founders. The early teammates we hired were exclusively not from the blockchain space, we were hiring very smart developers. So what we're doing, we're hiring people who have a very strong math background and developer background, who can program very fluently and are able to handle very high complexity, but they did not have any blockchain domain-specific knowledge or any zero knowledge specific knowledge before us. We had to teach them. Anna (52:17): This sounds like it would be really intense. You'd have to train people. But I wonder if by doing so you also encourage new perspectives. Did you notice that? Did they bring actually a very different, maybe a very refreshing perspective compared to what the general community is doing? Alex (52:36): So I would say that to have a new perspective, you first have to come up to speed, to absorb all the knowledge. If we are talking about the product ideas, then you actually have to understand the market really well. So it takes time to get there. If you're talking about the technology, then yes, those people who we hired were coming and pointing out directions which we would not originally think of. So I think it has different levels. We were hiring in the scarcest niche, which is very smart technological people, because that's the hard part: to make things execute, to actually deliver. And this is why it took us longer initially to get up to speed. But now we're really fast, because we have this big team which is very cohesive and very knowledgeable now. Anna (53:28): Nice. And I think, like, 20 people, that's a nice number for a team, I feel. Alex (53:36): Yep. I agree. I would not want to work in a company where I don't know everyone personally and don't have a personal relationship with everyone. Anna (53:45): Cool. Well, Alex, thank you so much for coming back on the show and sharing with all of us the update to the work that Matter Labs is doing: zkSync, zkPorter.
One thing I didn't mention throughout the episode is, you actually recorded a video at the zkSessions last month, all about zkPorter. So I'll add a link to that in the show notes, if anyone wants to catch up on that. And you did mention you're going to have a blog post published by the time this airs. And we'll also add that to the show notes, if people want to find out more. Alex (54:17): Thank you, Anna. It was really exciting, as always. Anna (54:20): Cool. And something I want to do differently for the first time ever: I want to say a big thank you to Andrey, the producer of this podcast, as well as Henrik, the editor of this podcast. For the last few weeks Andrey has been with us, and this has been really helpful. And Henrik has been editing this podcast for the last year. And I feel like from now on, I need to do a little shout out at the end of the episodes. So thanks again. And to our listeners, thanks for listening.