Anna Rose (00:00:05): Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in zero knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online. Anna Rose (00:00:27): This week Tarun and I chat with Jill Gunter and Ben Fisch from Espresso Systems. Espresso has been in stealth mode for a couple of months, but recently went public with their proof of stake L1, developed with flexible private applications and ZK rollups in mind. We explore the founding and development of the project, the Espresso base chain and the CAPE wrapped privacy smart contract application that they've built. We walk through the programming model, the evolution of privacy technology, what configurable or flexible privacy means and could unlock, the Espresso roadmap, and more. But before we start in, I wanna highlight that Gitcoin grant round 13 is on now. This means donations made on their platform during this period are actually matched from matching pools, so your donations go a lot further if you donate now. I wanna highlight the Zero Knowledge Podcast grant. It's a long-standing grant on Gitcoin, as well as the ZK Hack grant; both are in the main pool. Anna Rose (00:01:26): So this is a great way to support the show and our events, if you want to. I've added the link in the show notes. I also wanna point people to the ZK tech side round. This is organized by ZKValidator and 0xPARC, and we've brought together a great group of matching partners from the ecosystem, so there's a 275K matching pool. This goes towards ZK projects. 
And if you're interested in supporting these kinds of projects or just learning about them, head over to Gitcoin Grants and check out the ZK tech side round. Something I don't usually do on the show, but wanted to ask: if you like the show, be sure to give us a review or a like on any platform where you're listening, and help us share the show with groups, communities, or teams you think might get something out of being in the community. And, yeah, thanks generally for listening. I'm meeting lots of new joiners right now through ZK Hack on our Telegram channels, and getting to learn about all sorts of new ZK projects coming online. So now Tanya, the podcast producer, will tell you a little bit about this week's sponsor. Tanya (00:02:28): Today's episode is sponsored by Anoma. Anoma is a suite of protocols that enables self-sovereign coordination. Their unique architecture efficiently facilitates the simplest forms of economic coordination, such as two parties transferring an asset to each other, as well as more sophisticated ones, like an asset-agnostic bartering system involving multiple parties without direct coincidence of wants, and even more complex ones, such as n-party collective commitments to solve multipolar traps, where an interaction can be performed with adjustable zero knowledge privacy. To learn more about Anoma, visit their website at anoma.network. Anoma is also hiring: visit heliax.com/jobs to learn more about their openings. So thank you again, Anoma. Now here is Anna and Tarun's interview with Espresso Systems. Anna Rose (00:03:17): So I wanna welcome Ben Fisch and Jill Gunter back to the show. This is the first time I have you both on here together, but yeah, welcome back. Ben Fisch (00:03:24): Thank you, Anna. Jill Gunter (00:03:25): Yeah, it's great to be back. Great to see you. Anna Rose (00:03:27): So I'm very much looking forward to digging into this project. It has been very much in stealth. For some of the people listening. 
This might be the first time they hear about a project that you're doing together. It's also one that Tarun, who's also on the call, and I have made early investments into, him through Robot, me through ZKV, but I know that a lot has changed. And so for me, this interview is going to be a revisit, and I'm very excited to find out how it's evolved. The name has also evolved a few times since we first spoke, so why don't we start with that? What is the name of this project and how did you get here? Ben Fisch (00:04:03): The name is Espresso, or Espresso Systems. And we had a bunch of different names that we were thinking about. One of the wonderful things about being in stealth is you can, Anna Rose (00:04:13): You can change Ben Fisch (00:04:14): Have a long time to work through your branding and perfect it. But Espresso is just a really fun name that we all liked, and it evokes kind of speed and compression, and it has a lot of fun memes or puns that you can do with it related to building blockchains that are scalable. Jill Gunter (00:04:36): There have been a lot of coffee-related dad jokes, basically, is I think what that is getting at, which we're looking forward to sharing with the general public over the coming weeks as well. Anna Rose (00:04:48): Very cool. So Espresso, what did you say? Espresso Labs? Ben Fisch (00:04:52): No, Espresso. Espresso Systems. Anna Rose (00:04:53): Okay. So Espresso Systems. Tell us a little bit about what this project is all about. What are you focused on? Ben Fisch (00:05:01): So Espresso is a single-shot scaling and privacy solution for the blockchain ecosystem. Anna Rose (00:05:07): Oh my God. Is there a pun in there? Jill Gunter (00:05:09): Oh, yeah. Gosh, great. Here we go. All right. Tarun (00:05:14): Every generation of programming language needs a coffee reference from the... Ben Fisch (00:05:17): Well it's a virtual Espresso machine. That's what we're building here, right? 
Not the EVM, but the VEM. But that aside, Espresso is focused on both scalability and privacy, which obviously we're not the only project focused on. Those are two of the biggest problems in the blockchain space right now. And we can get more into the details about concretely our approach to these two different problems and what we're doing. But I think that the important thing to emphasize is that Espresso is a solution for existing ecosystems. So we are intending it not as a standalone ecosystem. It is an independent layer 1, but through bridges to the existing EVM ecosystem, it's intended as a scaling and privacy solution for other blockchains, including Ethereum. Anna Rose (00:06:04): Very cool. Let's find out a little bit about the company, the team. So who makes up this team at Espresso Systems? Who's in it? Jill Gunter (00:06:13): So Ben is our illustrious CEO. I head up strategy, which means I'm doing a lot of things on figuring out go-to-market, figuring out who the user base is, what they care about. I'm also kind of de facto doing a lot of things on marketing and other things that I have no business doing, probably, but that's the nature of early stage startups. And then we have a team of about 25 people, about 16 of whom are on the engineering side. Those numbers might be out of date by the time this airs, because we actually have a few people joining, oh, over the coming few days. Ben Fisch (00:06:49): We are growing quickly. Jill Gunter (00:06:50): Yeah, but we've been very lucky to be able to amass a really amazing team on the technical side, both in cryptography and in systems engineering, and then also to find a few really brilliant people on the product side who are working with us as well. So yeah, it's quite the crew that we've put together, even while operating in stealth, and we're excited to continue to grow the team here over the next few weeks. Anna Rose (00:07:18): Oh, very cool. 
Ben, I think the first time we spoke, you were part of Dan Boneh's group. Have you grabbed anyone from there for this team? Ben Fisch (00:07:31): Yes. Well, I've worked with Charles and Benedikt for a long time. Charles Lu and Benedikt Bünz, we all met during the PhD at Stanford, and we've collaborated on projects both in academia and in industry over the years. You know, some of those went well, some of those candidly didn't go well, but we've certainly bonded and gained a lot of experience together over the years. And so we started this together, and Jill joined very shortly thereafter. Jill and I actually met in 2017 when she was talking about fat protocols at MIT. Jill Gunter (00:08:08): That's a throwback for you all. Yeah. Ben Fisch (00:08:11): And we've always dreamed of working together. So the timing worked out well. Jill Gunter (00:08:15): Yeah. And it also turned out that we were down similar rabbit holes in terms of the problems that we were thinking about, where I guess I spent a lot of last year and the year prior thinking, well, first thinking about what I really have conviction on in the crypto space, which I think is an important thing for all of us to always be revisiting. And one of the things that I really started paying attention to was just why the payments use case hadn't taken off yet. And I was looking at a lot of the stuff that was happening with Circle and USDC's massive growth over the course of 2020, but looking at the utility that stablecoins were offering, it still was not being applied in the payments space, which was kind of the original Satoshi dream, right, of peer-to-peer digital cash. And I started thinking a lot about both scalability and privacy, and I knew that Ben was one of the sort of research luminaries of those worlds, and hit him up, and was lucky enough to be able to land on the team early on. 
Anna Rose (00:09:18): Cool. When we first started talking about this project, I felt like it was very, very centered on privacy. And since then, I know it's evolved. Maybe the underlying tech hasn't evolved that much, but the positioning or the way that you're describing it has. So can you tell me a little bit about that journey, like where you started, what you saw yourselves as at the beginning, and where you are now? Ben Fisch (00:09:44): Absolutely. And the tech has evolved as well since then. We're a very research and development heavy company, and we're always trying to solve the cutting edge problems in the space. But when we started out, we were positioning ourselves as focusing on solving privacy problems, and we knew that building a scalable infrastructure is an important feature. And I think that the shift in positioning is that we actually are building a scalability solution where privacy is an important feature. And that's important when we look at what users we're trying to reach; we want to be relevant to users who may not care about privacy. Right. But one of the differentiating features of our infrastructure is that it also supports privacy in web3 apps. Anna Rose (00:10:40): How did you get to that point? Jill Gunter (00:10:41): I was just gonna go there actually. And so that's perfect. 
I wanted to talk a little bit about our process, and our product process specifically, and also just the way that we're trying to approach building all of what we're describing here. If you look kind of historically, and it's funny to say historically, this is only like the last five years, but at the layer 1 projects of the 2016, 2017, 2018 vintage, a lot of the playbook revolved around coming up with a really neat, theoretically interesting solution to a problem that existed with Ethereum, writing a white paper about it, and then going away for a period of like one to five years and building that solution, and then rolling it out with developer tooling around it, in a sort of Field of Dreams approach. Jill Gunter (00:11:36): If you build it, they will come. And they will hopefully build interesting applications that attract users. And I think that that's worked out really, really well for those nascent, sort of bleeding-edge-at-the-time projects. But I think if I look around at the industry today, we've actually moved on from that as an industry, which is really great. You know, we are at a point of maturity where we do have products in crypto, in web3; web3 wasn't even a term in 2017, not pervasively used anyway, but we do have these products that are finding real traction, real product market fit. And so it's something that we've been really intentional about as we are building infrastructure and a layer 1, that we don't want to just go away and bury our heads in the sand and think, okay, this idea that we've come up with is really cool, and we're just gonna drop it on the market when it's ready in a year or so's time. But rather, we wanted to have a much more engaged and ongoing product process with users, ranging from developers to DeFi degens, to NFT creators, to DAO admins, all the way up to big institutions. 
And so we've been very intentional over the last several months about conducting user tests and really being engaged with all of those folks to understand user needs. And that's where a lot of this feedback has come from, and that's where a lot of the positioning has evolved over time. And I would anticipate that it will continue to, which is actually the default thing if you look at most Silicon Valley tech companies: they end up in very different places, in terms of what the product is and in terms of positioning, from where they started. And as Ben mentioned, we're still very much at the starting line here. And so I think to us, it doesn't perhaps feel like it's evolved that much, but it surely will continue to, and you're right to point out that certainly our positioning has been informed by all of those user studies, even just over these past few months. Anna Rose (00:13:45): Going back to what you had said, Ben, about sort of privacy-first; now it's more like a scaling solution with privacy optional. Is there a trend towards just focusing on scaling that you were kind of picking up on? Or, I almost wonder if the industry had shifted so far to scaling, and I almost feel like it's walking back to privacy. I don't know if you picked up on that too. And so that's why I'm curious about how you see yourselves developing from here. Are you gonna be doing most of the work in the privacy direction, or do you see it more in the scaling? Or would you see it more in the adoption, maybe, and let that define it? Ben Fisch (00:14:24): I think one of the main reasons to focus on scaling first and then privacy as a feature is that scaling is something that everyone really needs. Everyone needs low cost transactions. And by being a scaling solution, you can attract apps and liquidity and activity onto the system. 
And that also helps when considering privacy as a feature: if there's no activity on the system, if there's no apps being developed, if there's no liquidity, then it's hard for anyone who wants privacy to really get any use out of the system. And so we can talk in more detail on what exactly Espresso is. It's going to be a high throughput blockchain that has token bridges to Ethereum and other EVM blockchains, and it runs a high throughput proof of stake consensus protocol that integrates with rollups. But all of this is designed to attract web3 apps to be deployed on Espresso. And then it seamlessly integrates with some of the privacy protocols that we've developed to enable those app developers and users to have optionality around privacy. Jill Gunter (00:15:34): One thing that I just wanted to add there as well, Anna, is that you really need both to unlock these kind of holy grail use cases in web3 and crypto; you need both scaling and privacy. One thing that really came through as we've been talking to potential users over the last few months is that the scaling issue is a pain point today for today's users of these applications. And so I think that it's important that we are meeting that pain point in any solution that we design and bring to market. But equally then privacy: it's not actually so much of a pain point for today's users as, for example, scaling is, because today's users all came into this, for the most part, all-transparent-all-the-time world, and were clearly kind of comfortable enough to adopt these products despite that. But privacy is a huge barrier to future users, future use cases, and innovation. And so that's one framing that's kind of informed our priorities and positioning, 
I would say. Anna Rose (00:16:41): I think it would be a great moment to actually jump into what the components of the system look like and how they work together. And I sort of wanna revisit how users could then interact with privacy. But Espresso Systems is the company, and Espresso is also the platform. Maybe let's walk through the components of this system. Ben Fisch (00:17:05): Great. Yeah, Espresso is the platform, is the blockchain that will have bridges to the rest of the EVM ecosystem, but it runs its own high throughput proof of stake consensus protocol. And one of the things that we've been working on really hard is how to integrate that with rollups to achieve better scalability without compromising on decentralization, and specifically decentralized data availability. Rollups are a very popular scaling solution and technique these days, and really the only way to scale past thousands of transactions per second, because consensus protocols on their own just hit fundamental information-theoretic bottlenecks in terms of how fast information can propagate through the network. And so rollups are a way of compressing many transactions into a smaller-size transaction, which is ultimately necessary if we want to keep fees low, even in the cents range, as demand scales beyond what you're seeing on Ethereum today. Ben Fisch (00:18:08): But the challenge, and the pitfall of really all existing rollup solutions, is that they either are posting all the raw transaction data to the consensus protocol, achieving a fairly limited amount of compression, reducing certain things required for verification but not significantly reducing the overall amount of information that needs to be propagated through the system. 
Or they're resorting to something called a data availability committee, which is essentially a centralized solution to making sure that data is available to everyone in the system, so that they can use that data to build transactions. And so what we've been focused on is how to carefully integrate rollups with a proof of stake consensus protocol in order to get around that issue. And then, this is a lot at once, so maybe we can break this up, but I can also talk about our privacy protocols and how those work. Anna Rose (00:19:08): Well, actually, my next question was the CAPE protocol. Let's talk about some of these different pieces and then talk about how they work together. But yeah, what is CAPE? And this is actually where I got a bit confused. So is Espresso the L1, the blockchain itself? Ben Fisch (00:19:24): Espresso is the L1. And CAPE, which stands for Configurable Asset Privacy for Ethereum, is a protocol that can run on any EVM platform. Anna Rose (00:19:33): So it's not only attached to Espresso. It's a protocol, something that could be deployed in another place. Ben Fisch (00:19:41): That's right. It is a protocol that could be deployed in a different place, although there will be certain advantages of CAPE running on Espresso. One of the caveats of running CAPE, or really any privacy protocol, on Ethereum is that transaction fees on Ethereum are not private. So, for example, Tornado Cash needs a relayer to basically aggregate the transactions and submit them, and that's because in order to submit an Ethereum transaction, you need to have an Ethereum account and pay fees from a non-private account. So there are certain advantages like that that running CAPE directly on Espresso will have, but it can be deployed on any EVM blockchain. 
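The relayer pattern Ben mentions can be sketched in a few lines. The model below is a hypothetical illustration, not Tornado Cash's or Espresso's actual code: the user's shielded transaction carries a fee for the relayer inside the shielded pool, and the relayer pays the public gas cost from its own account, so the user never needs a public, fee-paying account.

```python
from dataclasses import dataclass

@dataclass
class ShieldedTx:
    """An opaque shielded transfer; amounts and parties are hidden on chain."""
    proof: bytes       # validity proof checked by the on-chain contract
    relayer_fee: int   # paid to the relayer *inside* the shielded pool

class Relayer:
    """Submits users' shielded transactions from its own public account,
    so users never expose a fee-paying account of their own."""
    def __init__(self, gas_price: int):
        self.gas_price = gas_price
        self.earned = 0

    def submit(self, tx: ShieldedTx, gas_used: int) -> bool:
        gas_cost = gas_used * self.gas_price
        if tx.relayer_fee < gas_cost:   # refuse unprofitable transactions
            return False
        self.earned += tx.relayer_fee - gas_cost
        return True

relayer = Relayer(gas_price=2)
ok = relayer.submit(ShieldedTx(proof=b"..", relayer_fee=250), gas_used=100)
```

The relayer's only incentive here is the in-pool fee, which is why the fee must cover the public gas it fronts; any real design would also need to prevent the relayer from censoring or reordering transactions.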
And in fact, right now we've deployed it as a demo on Ethereum's testnet, Rinkeby. Anna Rose (00:20:29): Okay, cool. And is this what's being deployed? Has it been deployed, or will it be deployed? Ben Fisch (00:20:35): It's being deployed this week. Anna Rose (00:20:37): Literally as we speak. Ben Fisch (00:20:38): As we speak, so by the time the podcast goes out, it will be deployed. Anna Rose (00:20:42): It will be out there. Jill Gunter (00:20:42): Yeah. It's a little weird that we're trying to time travel into the future here, or the past. Yeah. Tarun (00:20:50): Quick question regarding this. So I deploy a contract on an EVM chain that sort of wraps the local asset, but then when I do certain function calls, it goes across the bridge to Espresso? How does that mechanically work, I guess, in terms of, like, a user submits a transaction to the CAPE contract and... yeah, maybe walking through that would be, Ben Fisch (00:21:16): Yeah. So CAPE is a protocol that can run within an EVM blockchain. So CAPE is currently demoed on Ethereum's testnet, but we will migrate CAPE the protocol to Espresso when we launch the Espresso blockchain. Anna Rose (00:21:33): That's where it will live primarily. Ben Fisch (00:21:35): Yeah, after that, that's where it will live. Assets can separately be bridged over from Ethereum to Espresso in the same way that they're bridged over to other blockchains that are interoperable with Ethereum. The way that CAPE works is that if you have, say, an ERC-20 within the EVM on a given blockchain, then you can wrap that ERC-20 into a CAPE token that will have configurable privacy features, and it's up to the creator of that wrapper functionality to determine those privacy features. 
Currently there are viewing policies and even freezing policies that the token wrapper can configure, but in the future this will expand to customized policies using some of the other protocols that we've developed but haven't released yet. Anna Rose (00:22:29): So would you imagine it sort of being that, say, you have a token on Ethereum and you'd bridge it over to Espresso, you'd wrap it in CAPE, and then use it in Espresso for certain things? Okay. I think because you're deploying on Rinkeby, there's this idea that, when you talk about bridging, would you actually ever think of deploying on Ethereum as well, and then bridging through CAPE? Can you bridge through the dapp, basically? Ben Fisch (00:22:58): I mean, we actually thought initially about running CAPE directly on Ethereum. The problem with running it on Ethereum is twofold. One, high fees: you really need some kind of scaling solution in order to keep those fees down and make it more efficient. And number two, we would anyway need to run it through a relayer, similar to Tornado Cash, because of the issue of the privacy of paying transaction fees on Ethereum. So given our agenda, it made more sense to run CAPE on Espresso, and also we're going to be focusing on the seamless interaction between CAPE assets and smart contracts or web3 apps that run on Espresso. We have a lot more control over that than we do on Ethereum. Tarun (00:23:45): Actually, one quick question. So this is a big debate right now in sort of the multi chain world, which is you can have a single asset that has many synthetics represented on another chain. Like Weth exists in five forms on Solana, which in a lot of ways led to the Wormhole hack, due to the kind of technical fungibility between the different synthetics within DeFi protocols. But then when one gets hacked, you really probably don't want them all to be viewed as equal. 
So when you're thinking about privacy and sort of side channel attacks, how do you think about a single ERC-20 being minted in multiple synthetics on Espresso? Or do you enforce that being done in a single fashion? Or how are you thinking about that? Because I think this proliferation of multiple representations of the same asset has already been a security issue, and I'm just kind of curious if there are extra privacy sort of side channel attacks from that, because I haven't really thought that much about it until you just described your design. So I'm kind of curious if there's anything that you're thinking of with regard to that? Ben Fisch (00:25:00): Yeah, it's a good question. And it's a little hard to address in very broad terms. It would help, I think, to think through an example. So CAPE is particularly suitable for, and when developing CAPE we were really targeting, creators of assets like stablecoins, which are sort of a bridge between the traditional financial ecosystem and the blockchain ecosystem, because stablecoins, largely speaking, are created and administered by money service businesses; they are centrally backed. So if you think about a stablecoin issuer, they can be minting their stablecoin directly on CAPE, on Espresso. They might have units of that stablecoin that are already minted in an ERC-20 on Ethereum. And if you think about the interaction where something gets locked in a contract on Ethereum and then minted on Espresso and then moved back, it's not a whole lot different from moving between different representations of dollars between banks. Tarun (00:26:01): Right, right. But I guess there's still sort of this question of, let's suppose there are two synthetics, so I have one ERC-20, and I mint two versions of it on Espresso. 
And then certain people, when they bridge back to Ethereum, tend to bridge in high quantities. Like, actually, are the quantities revealed on Ethereum upon coming back? I'm just curious if there's some type of statistical thing of, like, I go across on one type of asset, I trade in, like, a Curve pool to the other type of asset, and I come back, and in that loop I sort of leak information about my identity based on the quantities I sent. Does that make sense? Ben Fisch (00:26:42): Yeah, to some degree. It's not really that there exist two different types of synthetics on another blockchain. I would think about it like this: if you have USDC, which is really an asset type, right, and you have units of that on Ethereum, or, I think it's similar to think about, units of that within an ERC-20 on Espresso, then we can think about how the interaction between that and CAPE works, because moving between Ethereum and Espresso, and then from Espresso to kind of a CAPE wrapped asset, are similar. So having multiple versions of CAPE wrapped USDC doesn't really mean that you have multiple assets; it's really that you have different ways of encapsulating the same asset. When you lock a unit of USDC inside its ERC-20 contract, and then create a unit of it within a certain form inside CAPE that has certain viewing properties, move it around, burn it, and move it back, Ben Fisch (00:27:55): each of those units don't simultaneously exist within CAPE and also within the original ERC-20 in a completely independently movable way; there's an atomic dependence between them. Anna Rose (00:28:09): Mm. So you're getting exactly the same token back when you get out of CAPE, basically. Ben Fisch (00:28:13): You are, yes. 
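The atomic dependence Ben describes, where a unit is never independently movable as both an ERC-20 unit and a wrapped record at the same time, can be sketched as a toy conservation invariant. All names here are illustrative assumptions, not Espresso's actual API:

```python
class WrappedAssetBridge:
    """Toy model of CAPE-style wrapping: ERC-20 units locked in the
    wrapper contract back the wrapped records one-to-one."""
    def __init__(self, erc20_balance: int):
        self.erc20_free = erc20_balance   # freely movable ERC-20 units
        self.locked = 0                   # units held by the wrapper contract
        self.wrapped = 0                  # wrapped (shielded) records in circulation

    def wrap(self, amount: int):
        assert amount <= self.erc20_free
        self.erc20_free -= amount
        self.locked += amount
        self.wrapped += amount            # records are minted only as units lock

    def unwrap(self, amount: int):
        assert amount <= self.wrapped
        self.wrapped -= amount            # records are burned only as units unlock
        self.locked -= amount
        self.erc20_free += amount

    def invariant(self) -> bool:
        # a unit is never movable in both representations at once
        return self.locked == self.wrapped

bridge = WrappedAssetBridge(erc20_balance=100)
bridge.wrap(40)
```

The point of the sketch is that `wrap` and `unwrap` each update both sides in one step, so no state is reachable where the same unit circulates in both forms.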
Anna Rose (00:28:14): The sort of take I got here was the idea that you'd have multiple types of USDC that had been bridged over, or wrapped ETH or something. I don't know, Tarun, if I'm correct here, but it's like, what if you moved USDCs from other EVM compatible chains all towards this one? Is that what you're thinking? Tarun (00:28:36): Yeah, yeah. So one problem we've seen a lot on Solana and Avalanche is that there are multiple bridges, right, between ETH and Solana and ETH and Avalanche, and each bridge has its own representation of the ERC-20. And then basically people make Curve pools, or some type of concentrated liquidity pool, that says, hey, these two things should basically be equal, and there's an arbitrage between them. Now, when the Solana Wormhole attack happened, a thing that maybe was somewhat less reported was that Solend, which is the biggest lender on Solana, had an insane amount of utilization of loans, because the Solend contracts effectively said, hey, look, the oracle price between Wormhole Weth and Synapse Weth, or any bridge Weth, I forget which ones were all there, but there's like a bunch of them, is basically one to one. But one of them actually just had this hack, had this mint event; in theory, it should not be one to one anymore. Tarun (00:29:40): So people just started adding that as collateral, almost instantly. It was actually amazing how fast people were doing this. They added that as collateral and then started borrowing the other types of ETH as it was crashing. So until Jump was like, we're gonna put the 300 million in, basically people were putting in something that's worth 10 cents on the dollar and borrowing a dollar against it, and this multiple synthetic representation thing has become a big point of contention. 
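Tarun's Solend scenario comes down to simple arithmetic: if a lending market's oracle hard-codes two bridge representations as one to one, a depegged synthetic gets counted at face value as collateral. A hypothetical sketch with made-up numbers (the 10-cents-on-the-dollar figure is from Tarun's description; the collateral factor is invented for illustration):

```python
def max_borrow(collateral_units: int, oracle_price: float, collateral_factor: float) -> float:
    """What a lending market lets you borrow against collateral,
    valued at the oracle's price."""
    return collateral_units * oracle_price * collateral_factor

# The oracle still treats hacked-bridge Weth as 1:1 with real ETH-denominated value...
borrow_at_oracle = max_borrow(1_000, oracle_price=1.0, collateral_factor=0.8)

# ...but the market actually values the depegged synthetic at 10 cents on the dollar.
true_value = 1_000 * 0.10

# The gap is bad debt absorbed by the protocol (or, here, by Jump's backstop).
bad_debt = borrow_at_oracle - true_value
```

The attack needs no exploit of the lender itself; the only flaw is the oracle's assumption that the two representations stay fungible.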
There was a Twitter space with Prestwich and Sunny, and they had a big argument about it a few days ago. Yeah, so it can cause security issues, but I don't think anyone's ever talked about the privacy issues of having these multiple synthetics, because you sort of are leaking some information about yourself if you go between multiple representations of the same source asset on a destination chain. Ben Fisch (00:30:35): I really don't think it's fundamental to having multiple representations. I mean, we have a zillion representations of the dollar. You have Bank of America dollars, you have Chase bank dollars, right? The issue that you're talking about is really more due to independent price oracles that are specific to a particular representation of an asset. I mean, you're talking about forms of arbitrage. I think it's more to do with that than the fact that you have multiple representations; it is the interaction between these different representations and those price oracles. Jill Gunter (00:31:09): Are you asking, Tarun, if there are issues around the fungibility of these things that get complicated by the privacy element of it? Tarun (00:31:19): Yeah, to some extent. Like on the Espresso side, how do you think about that? Because, like I said, it's very clear it has these security issues, if the synthetic can be rehypothecated, and due to either a bug like the Wormhole attack, or something where a mechanism that's supposed to be a price oracle or an arbitrage game goes wrong, 
Things happen. But I'm sort of more curious how you think about that on the privacy side, cuz I don't think anyone's ever actually... Ben Fisch (00:31:54): Really, right. Well, it's a really good question. And frankly, side channel issues can be so complicated that you really don't know until you think deeply about it, and sometimes things get discovered later and then you have to figure out a fix. My first reaction to this is that privacy is more localized. Let's look very concretely at what that means: if you have a certain wrapped version of USDC and you're using it within CAPE, then you're transacting it anonymously, or privately, within the pool of assets being transferred within this one shielded pool. And so it's more of a localized property of those transactions, and how those transactions are indistinguishable from one another, than a relationship between what's happening there and other representations of USDC on completely different blockchains. But that said, side channels can creep up in all kinds of unexpected ways. So it's hard to say no, there aren't side channels that could be exploited, and you always have to be vigilant about that. Jill Gunter (00:33:03): You're giving us something to think about, Tarun. And this is great. Tarun (00:33:06): I mean, I just think app developers on Espresso will be having to think about this a little bit too. Jill Gunter (00:33:11): Well, yeah. I mean, I think it goes to a great point, though, of just how devilishly tricky it is, when you introduce privacy into the mix, to design applications and sound, secure systems. There's just so much more at play, even this question of fees, and what you're using for fees.
And I think one thing I'm super appreciative of Anna for, and the work that she's done in education, and so many others are now picking up the mantle of this in the ZK universe within our little crypto sphere, is just getting developers, getting entrepreneurs, and everybody else up the curve on these types of questions. Even for us, thinking about it all day long, there are always gonna be things that sneak up like this, just because these playbooks have not been fully written yet. I was gonna say, it might be useful to explain a bit more about how CAPE works and its functionality, to put a lot of what Ben was just saying in context. Anna Rose (00:34:16): Jill, you and I are so synced. That was literally what I was about to say. We both know how to do a podcast. You can tell. It's like, what's the next one? Tarun (00:34:25): Yeah, sorry. I veered us onto this tangent. Jill Gunter (00:34:28): No, no, it's great. That was awesome. Anna Rose (00:34:32): So yeah, actually, I wanted to understand a little more deeply what is happening inside CAPE. Because I just had this thought: as we talked about locking it up, it has these principles, but I actually don't know what it does after that. Like, do you just transfer it to other people? Do you trade it in CAPE? How much can you actually do? Ben Fisch (00:34:52): It's a transfer protocol. So it doesn't yet have trading mechanisms inside of it. Of course you can have off chain bartering mechanisms, which then support direct trades, but it is a transfer protocol. It doesn't run DeFi or Uniswap-style protocols within it, yet. So first of all, you can create assets directly within CAPE. You can also wrap assets that have already been created as an ERC-20 within the EVM.
And once you have a CAPE asset, you can transfer it to other addresses or other users. And the unique thing about CAPE is that every asset has certain configurable properties, or what we call policies. We currently support flexible viewing policies and a freezing policy that the asset creator can configure. So for example, a stablecoin issuer could decide that the general public should not be able to see any of the details of transactions, which should be completely anonymous and private to the general public, but the stablecoin provider or a designated auditor should be able to see details of transactions over certain amounts, or certain credential attributes of the sender and receiver addresses. Ben Fisch (00:36:09): And that can be configured. Jill Gunter (00:36:11): And just to put a little bit of that in context, we really designed this with stablecoin issuers specifically, but really asset creators of all types, in mind. And so the flow is that you go into CAPE, we'll have a user interface for it that we'll stand up here in a few weeks, and you can either, as Ben said, create or wrap an asset. And then the thing that you do, if you are the asset creator and not just an end user of the system, is designate exactly what Ben just described in terms of who can see what about the transactions that are happening within this asset that you're creating, and also some of these other policies that are standard for stablecoin issuers, like freezing and so forth. So those were really the design principles underpinning it. Anna Rose (00:37:03): Are there select assets that you already have in mind? Are you able to actually limit what kinds of assets can come in, or is it sort of a free for all? People can bring any ERC-20 and wrap it. It's almost like, could you whitelist any particular kinds? Ben Fisch (00:37:19): We do not. We do not have control, since it is a protocol.
We could have control over what's supported in our user interface, but we do not really control what is happening inside the protocol itself. Jill Gunter (00:37:34): I think this actually comes back a little bit, Tarun, kind of obliquely, to what you were talking about a few minutes ago, just in terms of these questions of fungibility. This is nothing to do with side channels or hacks or privacy, but as you were talking, it did make me think of this challenge that we're thinking through with CAPE, where you could theoretically have many, many people wrapping the same type of asset over with all different configurations of privacy and these different policies, right? And then you get to this state where you need to, as an ecosystem and a marketplace, get back to having Schelling points around, okay, this is the canonical, most widely used, most liquid version of XYZ stablecoin within this system. But that's kind of gonna be an emergent problem, and property, I think, of the system. Tarun (00:38:32): Don't worry, everyone's having the same problem. This is why Prestwich and Sunny, who have very different philosophies on how to tackle this, had this very large argument the other day. Anna Rose (00:38:41): Now I wanna know, was it recorded? Tarun (00:38:43): Yeah, I hope it was. Anna Rose (00:38:44): Recorded. It's Spaces, huh? Tarun (00:38:46): It was on Spaces, so I'm not sure. I think you can record them. I don't know if they did. Jill Gunter (00:38:52): I can, like, hear the two voices in my head already. I feel like I'm... Anna Rose (00:38:57): Bummed I missed it. Jill Gunter (00:38:59): Yeah. Anna Rose (00:38:59): Well, we'll try to dig it up. If we can find it, we'll add it in the show notes for our listeners to have a chance to catch it. I kind of wanna understand a little bit, like, a use case.
Like, maybe we can walk through who is using this and how. Can you paint a picture of someone who wraps a token in CAPE, and then they transfer it, and then they potentially take it out at some point? Just maybe walk us through one of these: who is this person, and what are they doing? Why are they doing this? Jill Gunter (00:39:33): Yeah. Ben Fisch (00:39:34): Yeah. Anna Rose (00:39:34): So I know there are probably many, but maybe you can just help us with, like... Jill Gunter (00:39:37): All of the user interviews and all of the scenarios we had to come up with for those are just coming to mind for me here. But Ben, why don't you kick off? Ben Fisch (00:39:47): I think we can continue with the stablecoin example, since it's an example of a type of asset that is typically issued by a money service business that has risk management responsibilities. And the users are twofold, right? One user of CAPE would be the asset creators, who are either creating these stablecoins or creating versions of existing stablecoins. You could have existing stablecoin issuers use CAPE to basically add privacy as a feature, sort of like an incognito mode for their existing assets, so that their users can enjoy that privacy while the issuer still retains the level of visibility and control that they have on Ethereum, which is essential for their risk management. In fact, one of the things that's unfortunate, or perhaps fortunate, about privacy protocols on blockchains is that when something goes through a privacy protocol and then comes out of it, it's always labeled as having gone through that. So it's very easy for the money service businesses who run some of these DeFi protocols to decide, oh, we'll ban anything that has come from Tornado Cash.
And so what CAPE provides is a middle ground solution, where those protocols can endorse a CAPE wrapped version of these assets so that users can transact it privately. So it's not publicly revealed to everyone, but it is visible at least to certain auditors, and therefore won't be banned from the other DeFi protocols sponsored by those auditors. Anna Rose (00:41:27): So would you picture the issuer of the stablecoin as building somehow with this in mind? Like, are they connected? Are they deploying something on top of it? Are they providing the user interface, like a sort of Dapp, where their users could actually use this if they want to? Jill Gunter (00:41:50): Just to take an example, let's say one of the big centralized money service business stablecoin issuers, right? Let's say they are looking at the state of the world and the blockchain industry and where it's going, and they say, you know what's gonna be a really important feature for users? Privacy. And maybe not Monero-level privacy, or Tornado Cash oriented positioning around it, but just standard, traditional financial services, TradFi-level privacy. Right? So... Anna Rose (00:42:26): Like not needing a centralized system, not needing the database. Still on a blockchain, still immutable, still cryptographically proven. Jill Gunter (00:42:35): Let's say they're seeing demand from big businesses, like, hey, we wanna explore using your stablecoin to do payroll for contractors in other countries, or even to pay suppliers in other countries. That stablecoin issuer might say, okay, hey, we wanna attract these big next generation users to our stablecoin, we need a privacy solution.
That's gonna balance our need for risk management and reporting with our potential users' need for privacy from their competitors and other actors. That stablecoin issuer could then come use the CAPE platform as it exists today. I mean, we'd need it to not be on testnet, but as the user interface exists today, they could either create a brand new version of their existing stablecoin within CAPE that has the parameters they want, or they could create what we call a wrapped asset type on CAPE, which is basically a template for any holder of that existing stablecoin to be able to wrap that ERC-20 coin into. And that wrapped asset type, or indeed that new asset that they're minting on CAPE, would again be able to have the right parameters that they need in order to meet the needs of those users. And so they could do that all within the user interface that we're providing, and those end users, these theoretical businesses that wanna pay suppliers overseas, could use the interface that we're providing as their wallet, et cetera. But as Ben has highlighted a few times, CAPE is fundamentally a protocol. And so if they wanted to stand up their own interfaces around it for their own users, they could do that. Ben Fisch (00:44:32): Typically, stablecoin issuers or providers are not providing a user interface to their users. They're creating assets that then exist on a blockchain. And we are creating a wallet that users can use to interact with CAPE assets, similar to MetaMask or MyEtherWallet. Jill Gunter (00:44:55): And again, I'm just gonna plug: we're gonna be putting up the UI here towards the end of the month.
And again, it's just gonna be on testnet. It's a demo. It's gonna be on Ethereum's testnet, not even on the Espresso testnet, which will come with all kinds of benefits, as Ben has flagged. But we're really, really looking forward to having that be out in the world and getting asset creators and also end users to go in and play with that wallet, and play with the mint functionality and the configurations and so forth, to get more feedback on it. Anna Rose (00:45:28): Cool. Ben Fisch (00:45:29): In terms of how it functions as a consumer app, and how asset creators and consumers interact with it, it's not that different from a product like zk.money. It just gives more flexibility and control to the asset creators, so that it's not just a totally anonymous asset within the shielded pools on CAPE. They could have certain visibility or freezing properties configured by the asset creators. Anna Rose (00:46:02): Cool. Tarun, you had something you wanted to jump in on. Tarun (00:46:07): I guess I'm more curious from the development slash product side: which features, like when I take my ERC-20, wrap it in CAPE, and go across, what are the basic view and permissioning functions you're starting with? It sounds like there's this universe of different types of things, but from a token standard perspective, what do you view as the initial ones, or the more fundamental ones to start with? And how are you thinking about the features of that from the perspective of, let's say, a prospective developer? Ben Fisch (00:46:42): So right now, in the current version of CAPE, it's somewhat limited, and this is also for technical reasons.
We wanted to have one universal zero knowledge circuit that captures all of these, so that you can't distinguish transactions by the type of asset that's being transferred and its policy. In the future, we can give more flexibility to the creators of these assets to program their own arbitrary policies. But right now, it's really just any subset of the amount, the receiver and sender addresses, and a subset of the attributes of digital credentials associated with the sender and receivers. Jill Gunter (00:47:22): Yeah, and in the future, one concrete use case that we're excited about is thinking about how we might be able to support things like, for example, an implementation of the travel rule, right? Where below a certain threshold, reporting requirements are more lax, and above a certain threshold of transfer amount, reporting requirements kick in. Ben Fisch (00:47:49): We actually have that as a feature now. So you can configure that you only see information about those fields that I mentioned if the amount is over a certain threshold value. Jill Gunter (00:47:59): Yeah, that's implemented in the protocol. It won't be on the UI at the outset. But that's, again, I think, another example of how we've built this with stablecoin issuers, and all manner of asset issuers, in mind. And we're looking forward to continuing to expand what those use cases and applications look like as we go and think about it more deeply. Anna Rose (00:48:25): What are the actual zero knowledge proofs? What is the CAPE protocol using? Ben Fisch (00:48:31): We're using PLONK, actually, well, extensions of PLONK. In fact, we have an open source library called Jellyfish that is already open on GitHub.
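The threshold viewing policy Ben and Jill describe, where an auditor learns designated fields only above a transfer amount, might look something like the sketch below. All of the names and fields here are illustrative assumptions, not CAPE's actual API; in the real protocol the disclosure is enforced inside the zero knowledge proof rather than by plaintext filtering.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of a CAPE-style viewing policy: the asset creator
# picks which transaction fields a designated viewer can see, optionally
# only above a transfer-amount threshold (travel-rule style). These are
# hypothetical names, not the real CAPE data structures.
@dataclass
class ViewingPolicy:
    viewer_pubkey: str                 # designated auditor's key
    reveal_amount: bool = False
    reveal_sender: bool = False
    reveal_receiver: bool = False
    threshold: Optional[int] = None    # reveal only if amount > threshold

    def visible_fields(self, tx: dict) -> dict:
        """Fields the viewer learns about one transfer under this policy."""
        if self.threshold is not None and tx["amount"] <= self.threshold:
            return {}  # below threshold: viewer learns nothing
        out = {}
        if self.reveal_amount:
            out["amount"] = tx["amount"]
        if self.reveal_sender:
            out["sender"] = tx["sender"]
        if self.reveal_receiver:
            out["receiver"] = tx["receiver"]
        return out

policy = ViewingPolicy("auditor-key", reveal_amount=True,
                       reveal_sender=True, threshold=10_000)
print(policy.visible_fields({"amount": 500, "sender": "a", "receiver": "b"}))
print(policy.visible_fields({"amount": 50_000, "sender": "a", "receiver": "b"}))
```

The first transfer is below the threshold, so the auditor sees nothing; the second reveals only the amount and sender, matching the "subset of fields" configuration Ben describes.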
And it's based on PLONK, but it has certain extensions of TurboPLONK and other small optimizations that enabled us to make our circuits, or really our constraint systems, really efficient. Anna Rose (00:48:56): Given that you have these different types, so you can select the privacy settings, does that inform what size of circuit you need? Or does it actually affect the zero knowledge proof? Or is it just that there's a standard ZKP and underneath there's some other stuff that you can configure? Ben Fisch (00:49:17): Yeah, there's one constraint system, and that's important so that every transaction looks indistinguishable from another transaction. The viewing policy is an input to the proof. If you have different viewing policies, it doesn't change the proof that's being computed, it just changes the inputs. If you want to enable custom policies that go beyond this, then it's hard to give just one universal zero knowledge constraint system to capture everything. And so if you want users to be able to program their own constraints, then, this is getting very technical, but you need two levels of recursion in order to be able to hide things. And so we have an extension of the Zexe system that is based on using PLONK for the inner proofs and PLONK also for the outer proofs. And we'll be releasing that soon as well. And we have a paper on it that we're gonna put out, which is basically making a lot of optimizations to Zexe based on PLONK. And it has a significant improvement in performance over the state of the art there. Tarun (00:50:37): Are there any other constraints that come with merging those two? Cuz they both have sort of different representations of things.
So I was just curious, from the perspective of a programmer, do they have to think in terms of writing Zexe-style code, or do they think in terms of, oh, I need to actually think about the lookups and stuff like that? I'm just kinda curious how that works from the end user standpoint. Ben Fisch (00:51:04): Frankly, CAPE and the way the CAPE protocol works and the way that Zexe works are actually very similar in terms of their programming model. It's all UTXO based. And Zexe is really an extension of the UTXO system, where everything is basically encapsulated as a record, and every transaction basically deletes records, or nullifies records, and creates new records. That's how CAPE works as well. The additional thing that Zexe does within the Zexe protocol is that it's proving that a transaction satisfies the custom creation and destruction predicates of any records that it's creating or destroying, and it's hiding the predicates of the records. So all of this is abstracted away from the user. We haven't integrated Zexe with CAPE into an end consumer product yet, so it's a little hard to say exactly what it's going to look like to the user in terms of how they write a custom policy and express that using a constraint system, or program in a higher level language that then gets compiled down to a PLONK constraint system. That entire flow, we haven't really completely integrated into a consumer product, but it's not that different a programming model from the way that CAPE works itself. Anna Rose (00:52:28): But Zexe actually has this programmability. Like, you can actually program within it. At least, that's what Aleo's working on.
And so, as I understand it, on Espresso's L1, that's where you would have smart contract type things, that's where the EVM compatibility lives, and CAPE lives on top of that as a DApp, as a protocol. But would you then have programmability within... I'm guessing CAPE is pre-programmed, and then users can interface, they can choose their settings. But do you actually also expect people to build on top of CAPE? Like, would you want them to be able to program on that sort of third level up? Ben Fisch (00:53:11): Yeah, that's a really good question. So the way that Zexe enables programmability is that it allows users to basically create these rules for how records can get created or destroyed, and that can determine a certain type of application. But Zexe is still quite limited in terms of how it can achieve programmable privacy in web3 apps. It's really only handling off chain computation, if you want to retain the privacy features that it enables. Zexe would not allow you to build a privacy preserving version of any type of smart contract that requires public data. Also, there can be fundamental limitations there, because the data needs to be public, right? So Zexe doesn't allow you to build a private Uniswap. And a private Uniswap is not really possible unless you change the definition of how it works. Ben Fisch (00:54:09): So there are limitations, but what it is really good for is user defined assets that have customized rules in terms of how they work. So you can imagine that every asset has a different zero knowledge proof constraint that needs to be satisfied by the transaction, and you want the transaction to hide what the asset is and which circuit you're proving is satisfied, so that you can't distinguish transactions by their types. That's what Zexe is very good for.
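The UTXO record model Ben describes, state as a set of records, with each transaction nullifying inputs and creating outputs, can be sketched minimally as follows. The names and the hash-based nullifier are illustrative stand-ins, not the actual CAPE/Zexe data structures (real systems derive nullifiers with a keyed PRF so spends are unlinkable, and keep record commitments in a Merkle tree).

```python
import hashlib

# Minimal sketch of the record/nullifier model: illustrative names only,
# not the actual CAPE or Zexe implementation.
def nullifier(record: bytes) -> str:
    # Real systems derive this under the owner's secret key so the spent
    # record can't be linked to its creation; a plain hash stands in here.
    return hashlib.sha256(b"nf:" + record).hexdigest()

class Ledger:
    def __init__(self):
        self.records: set[bytes] = set()   # commitments to created records
        self.nullifiers: set[str] = set()  # markers for spent records

    def apply_transfer(self, inputs: list[bytes], outputs: list[bytes]):
        """Nullify the input records and create the output records."""
        for r in inputs:
            nf = nullifier(r)
            if nf in self.nullifiers:
                raise ValueError("double spend")
            self.nullifiers.add(nf)        # consume (nullify) the input
        self.records.update(outputs)       # create the new records

ledger = Ledger()
coin = b"record: 10 USDC to alice"
ledger.records.add(coin)
ledger.apply_transfer([coin], [b"record: 10 USDC to bob"])
```

Zexe's addition, per Ben, is that the transaction also proves, in zero knowledge, that hidden per-record creation and destruction predicates are satisfied; here that check is simply absent.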
Obviously it's more flexible than that, but we're still trying to figure out how it will integrate into a developer platform and how users will want to use it. Cuz it's not really clear what developers can do with Zexe in a meaningful way around programmable privacy. That's a very high level concept that I know is tossed around in the industry, but when you get down to the gory details of how things work, it's not obvious what programmable privacy means. Anna Rose (00:55:09): Hmm. Okay. So I wanna go back to the Espresso layer. I think we've done a cool dive into CAPE, which sits on top of Espresso. But for Espresso's L1, do you have other protocols also planned that are separate from CAPE? Do you picture multiple privacy protocols interacting with that L1, or do you picture CAPE as the primary? Like, are you delivering both together, and that makes up the entire system? Ben Fisch (00:55:37): Yeah. The two privacy protocols that we worked on were, one, CAPE, and then two, this newer version of Zexe, which we call VeriZexe, for verifiable Zexe, cuz it uses universal... Anna Rose (00:55:49): You have an acronym. Tarun (00:55:53): That was the sound of 500 sighs. Jill Gunter (00:55:57): No, that was just me. Ben Fisch (00:56:05): And those are the two privacy protocols that we have, and we do not have any other protocols planned. It's a lot already on our roadmap. Anna Rose (00:56:18): I'm like, more! Ben Fisch (00:56:21): Yeah. Ben Fisch (00:56:23): But yeah, a lot of our focus right now is on the Espresso layer one and how it works in terms of its integration of rollup and consensus, and how it achieves higher throughput without compromising on data availability. Anna Rose (00:56:36): Okay. Cool.
And I think that's what I wanna explore now. Let's go back to that point, now that we understand a little more about what's living on top of it. First of all, it's EVM compatible, right? That's the part that's EVM compatible. How are you doing that? Is it a fork of geth? Are you doing something that compiles down into something else? But, like, you can deploy Solidity code on it? Ben Fisch (00:57:00): So we're actually developing a ZK EVM. We are building a rollup directly for the EVM. We're translating EVM opcodes to constraints. And so that's one exciting thing we've been working on: an efficient implementation of a ZK EVM. Anna Rose (00:57:21): Which ZK EVM, by the way? Are you following a particular camp? Cuz there are a few different projects working on that. I know Hermez has one, the EF is working on something like this. Do you have your own unique one? Tarun (00:57:33): Scroll. Anna Rose (00:57:34): Scroll. Ben Fisch (00:57:34): Yeah, so I would say that the different approaches roughly break down into two categories. One is: build your own custom VM, and then have a compilation from the EVM to that VM. And the other approach is to try to directly build a constraint system that captures the EVM as closely as possible. I don't wanna butcher what other projects are doing specifically, so those are just the two broad categories, and we are in the latter category. We are building a constraint system that directly captures EVM state transitions. And we're working on using our TurboPLONK constraint system, and the various optimizations that we've made, to try to make that efficient. Although I would say that while it's good to make small efficiency optimizations in the representation of the EVM...
Ben Fisch (00:58:29): I think that with the trend in the industry towards really high performance provers, I'm sure you're aware of ZPrize, I think you're participating in that, right? So there's an industry initiative to have high performance provers, and there are also a lot of startups already working on that. It's less important to have these small differences in how the constraint system works when you can run rollup servers on a very high performance platform. Which is different for rollup than for privacy, where privacy requires consumers to produce these zero knowledge proofs on a phone or on a laptop. So that's just something to note: we're betting on really high performance provers, and on the fact that the computational cost of producing rollup proofs per transaction is still so much smaller than the costs currently due to gas fees, due to congestion on blockchains, that it's not as significant to care about these small differences in the efficiency of the prover. Which is why we're focusing a lot on how rollup integrates with consensus, to achieve higher throughput while preserving the decentralization of the system, which is so core. Anna Rose (00:59:47): Interesting. So Espresso being the L1, that is actually where the ZK EVM works? That's where that lives? Ben Fisch (00:59:54): The ZK EVM, yes, is a component of the Espresso consensus protocol and the protocol itself. I mean, the provers are a separate logical component. So the servers that are computing these rollup proofs do not need to be consensus nodes themselves, but would be sending proofs and data to consensus nodes.
But the way that this integrates with consensus is exactly who gets to verify proofs, and who actually has to receive data and basically sign to say that they've received data and then can serve data to other nodes or users. Anna Rose (01:00:32): So I wanna kind of see: CAPE is deployed on the ZK EVM? Are those proofs being generated within CAPE? Ben Fisch (01:00:43): CAPE is a higher level application. Anna Rose (01:00:44): Okay. But, okay, I keep thinking of rollup models where you have smart contracts that do the verification of the proofs. So in the case of CAPE, is CAPE a smart contract deployed on Espresso that's actually doing the verification of the proofs that underlie the CAPE protocol? Or... Ben Fisch (01:01:06): Yeah, I think let's take a step back. I think it gets confusing with all these terms being thrown around in the industry, like rollups or consensus or this or that or the other. CAPE is at the application layer, right? Transaction processing itself, that's at the layer one level. And so when you have rollups, you should really think about rollups and consensus as combined, defining a new consensus protocol. A consensus protocol is just a transaction processing system that has some way of ordering transactions. And different consensus protocols, ranging from a centralized system to decentralized systems of different types, have different security properties in terms of what guarantees you have around consistency, liveness, and data availability. So when we talk about the rollup and the consensus in Espresso, this is all just this abstract transaction processing system. It has nothing to do with the application layer. At the application layer, the point is that you can run protocols like CAPE, and you can run EVM transactions.
The EVM is a virtual machine. It's a way of describing what logical transactions can be processed by the system. Anna Rose (01:02:24): Sorry, and I'll tell you where my mix up comes in. Because I often am talking to ZK rollup projects, and recently one project mentioned that the ZK rollup, what it looks like, is just, it looks like a Dapp. And that has kind of thrown me for a loop, that its verification smart contracts just look like... Ben Fisch (01:02:46): Because you can have rollups run just through a smart contract, which gets kind of trippy, right? So you can actually have a rollup bootstrapped on top of the consensus, a rollup that's just run by a smart contract. And that's actually the core of the problem: because rollups are somewhat of an afterthought, as a way to scale throughput, and there isn't this careful integration of rollup with the underlying consensus protocol, you end up in a situation where either you're still posting way too much data on chain, and therefore you're limited in terms of how much you can scale, or you have a half-hearted solution to data availability. And so we want to make sure that we're looking at it as an integrated design with certain security goals. We want to maintain that if two-thirds of the stake is honest, or however you think about that, then you maintain these properties of consistency, liveness, and data availability. And that's a security property we want to prove of the overall system, including how rollups interact with consensus as part of the protocol. Anna Rose (01:03:56): On the data availability side, though, is the underlying model similar to these other data availability solutions?
Like, are you thinking about it in the context of, say, Celestia, or is it very different because what's built on top has different needs? Ben Fisch (01:04:14): Based on my understanding of Celestia, it's very different, since Celestia is based on basically proofs of retrievability, though in retrospect it could look somewhat similar to different ways of having data availability committees. But to give you a peek into how it works: we have a proof of stake consensus protocol, which is based on randomized committee elections, which is how some of these proof of stake consensus protocols work. Concretely, we have integrated the HotStuff consensus protocol with a randomized-sortition-based proof of stake protocol, kind of similar to Algorand. So you can think of it as Algorand, but with HotStuff as the internal BFT, as opposed to the BFT that Algorand uses. And the way that we integrate rollups is in terms of how committees get randomly elected to actually receive and then serve the raw transaction data, versus how committees get elected to just participate in voting on transaction ordering. At a very high level, the reason why it is possible to scale throughput and still achieve decentralized data availability is that you do not need to elect as large a committee to function as the data availability committee as you do to participate in voting. Ben Fisch (01:05:37): And the reason is that when you sample a random committee, you need two-thirds plus epsilon of that committee to be honest in order to maintain safety. But as long as one random committee member is honest, then you preserve data availability, right? And there are other subtleties as well.
Such as the fact that, due to the possibility of an adversary adaptively corrupting a committee after it becomes known, you actually need to broadcast the blocks being voted on, through gossip, to everyone in the network before the committee becomes known, because otherwise an adversary could basically bribe or corrupt those committee members, get control of their keys, and mess with the safety of the system. Whereas that's not the same concern with data availability. That's getting a little bit into the weeds, and we do have a white paper that we're writing on this, which we'll be publishing soon. But in a nutshell, it comes down to a careful look at how rollups interact with a particular consensus protocol, and a careful analysis of the desired security properties that you want in order to be able to achieve this. Tarun (01:06:51): I will say one thing, which is that there is some similarity with Celestia here, in that data availability sampling is still sort of coupling consensus with a kind of proof of retrievability. And the idea that you randomly select some subset of validators to provide the set of transactions in some non-corruptible way, making sure they don't get revealed until after, is actually quite similar. So I'm excited for the paper, because there's definitely some overlap. And now that there are three main data availability efforts going on right now, Eth2, Celestia, and this one, it'll be interesting to compare. Anna Rose (01:07:32): Polygon has one too, by the way. Tarun (01:07:34): Oh yeah, sorry, forgot. But I think Polygon is very similar to Celestia. Eth2 is trying to do one thing that's quite different in how they do the sampling. Anna Rose (01:07:44): They told me it was somewhat different, but there's an episode on that which came out a few weeks before this one.
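Ben's committee-sampling argument above (two-thirds plus epsilon honest for safety, versus a single honest member for data availability) can be illustrated with a short probability sketch. This is purely illustrative and not Espresso's actual parameterization; the function names and the 75%-honest-stake figure are assumptions for the example, and members are sampled independently for simplicity.

```python
import math

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def p_safe(n: int, honest: float) -> float:
    """Probability a sampled committee of size n has strictly more than 2/3 honest members."""
    return binom_tail(n, math.floor(2 * n / 3) + 1, honest)

def p_available(n: int, honest: float) -> float:
    """Probability at least one sampled committee member is honest."""
    return 1 - (1 - honest) ** n

HONEST = 0.75  # illustrative assumption: 75% of the stake is honest

# A voting committee must be large before ">2/3 honest" holds with high probability,
# while even a small committee almost surely contains at least one honest member.
for n in (10, 100, 1000):
    print(f"n={n:4d}  safety={p_safe(n, HONEST):.6f}  availability={p_available(n, HONEST):.6f}")
```

With these numbers, a 10-member committee already gives data availability with near certainty, while the probability that it is more than two-thirds honest is only around 0.78, which is why the voting committee has to be much larger than the data availability committee.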
Ben Fisch (01:07:52): Well, all protocols are somewhat similar at a high level. Yeah. And quite different when you look at the underlying details. Tarun (01:08:00): Yeah. The unfortunate thing is there's only one actual, fully fleshed-out data availability paper that exists, which is the Celestia one. Everything else, the Eth2 one in particular, is very hazy; a lot of the details are not there. So you can't really compare them, because they haven't written anything up. Ben Fisch (01:08:23): Yeah. And we'll have a paper out soon that explains what we're doing and how it differs from what Celestia is doing. Anna Rose (01:08:29): Cool. Do you imagine other projects deploying on Espresso itself, or do you see it as a fully fleshed-out system with both sides, the underlying L1 and CAPE, where users are supposed to deploy on both of those and work within that system? Jill Gunter (01:08:49): I think fundamentally Espresso is a layer one, right? And so we view it very much as that open platform, and we hope that people will port apps that exist on other chains onto Espresso, that people will use it as an open platform for innovation in all kinds of new ways that haven't been possible to date. But at the same time, we're also aware, to go back to the state of the industry and how much it's matured and how far it's come, that it's not good enough anymore to just put that out there and say, yes, come one, come all, and build something here. We wanted to make sure that we were seeding it with functionality and designing it with intentionality around what those types of things would be. And so that's a bit of context as to how we came up with this architecture, and with CAPE as both a protocol and something of an end-user product, at the outset.
But certainly we hope that it goes well past even our wildest dreams. Anna Rose (01:09:57): Well, on that note, I wanna say thank you to both of you for coming on the show. Ben Fisch (01:10:01): Thank you so much. It's a pleasure to be here. Anna Rose (01:10:04): And thanks for sharing Espresso with us as it sees the light of day. I'm very excited to be able to talk a little bit more openly about it, and now share this episode where we got to explore it. Jill Gunter (01:10:16): Thanks so much. Tarun (01:10:17): Yeah, thanks. It was great chatting about this and actually getting things out in the open. Anna Rose (01:10:23): I wanna say thank you to the podcast producer, Tanya, the podcast editor Henrik, and to our listeners. Thanks for listening.