Anna Rose (00:00:05): Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in zero knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online.

Anna Rose (00:00:27): This week, I chat with Rahul Magani, who's leading the applied ZK effort at Jump Crypto, and Hendrik Hofstadt, who's the project lead at Wormhole. We look at the work of Jump Crypto, how Wormhole first came to be through Hendrik's work with the validator Certus One, which has since been purchased by Jump. We explore the challenges of interoperability, the design decisions they made in balancing security, speed and functionality, the risks facing these types of solutions, including the famous Wormhole hack, what the future holds, how they aim to explore using ZK in bridging, and more. But before we kick off, I wanna encourage you to check out the upcoming ZK tech Gitcoin side round. This CLR matching round runs from June 8th through 23rd. It's a great way for small, early stage teams or contributors building ZK tech to get their first bit of funding, or if you're an existing open source public good project focused on ZK tech.

Anna Rose (00:01:20): You can also submit a grant at Gitcoin during the CLR matching round. Funds that are donated to the ZK tech side round are matched from our matching pool. This initiative is led by 0xPARC and zkValidator, as well as our fantastic matching partners from the ecosystem. We will have at least $100k in matching for this round. So be sure to get your grant in and choose the tag ZK tech to be eligible for our matching pool. You get the donations anyway, the matching is sort of a bonus, but this is a really great way to kickstart your project. Just remember, the grant round starts on June 8th, which I think is today. So be sure to get your grant in as soon as possible. Now here's the podcast producer, Tanya, who will share a little bit about this week's sponsor.

Tanya (00:02:00): Today's episode is sponsored by Anoma. Anoma is a set of protocols that enables self-sovereign coordination. Their unique architecture facilitates everything from the simplest forms of economic coordination, such as two parties transferring an asset to each other, to more sophisticated ones, like an asset-agnostic bartering system involving multiple parties without direct coincidence of wants, to even more complex ones, such as n-party collective commitments to solve multipolar traps, where any interaction can be performed with adjustable zero knowledge privacy. Anoma's first fractal instance, Namada, is planned for later in 2022, and it focuses on enabling shielded transfers for any assets with a few second transaction latency and near zero fees. Visit anoma.net for more information. That's anoma.net. So thanks again, Anoma. Now here is Anna's interview with Wormhole.

Anna Rose (00:02:57): Today, I'm chatting with Hendrik from the Wormhole project, he's the project lead there, as well as Rahul from Jump, who's leading the applied ZK efforts. Welcome to the show, both of you.

Hendrik Hofstadt (00:03:07): Hey Anna. Thanks for having us.

Rahul Magani (00:03:08): Yeah. Thanks for having us.

Anna Rose (00:03:10): So you're working on seemingly two different projects. Let's first crack into what the connection is between Wormhole and Jump. Is Wormhole an independent company, or is it a project?

Hendrik Hofstadt (00:03:23): Wormhole is a fully, fully independent project.
It was kind of incubated at Jump, and most of the core project contributors work at Jump. The reason for that lies in the origin story of the Wormhole project, which we kicked off back at Certus One, the company I co-founded that later got acquired by Jump. We built the original version of Wormhole there, and then after the acquisition built it into its own full thing, spun it out, and have been supercharging the efforts there.

Anna Rose (00:03:58): Very cool. And actually, I wanna come back to your story with Certus One, a validator. I also have a validator, so we can talk a little bit about that. But first I wanna hear about Rahul, your role at Jump, and what this applied ZK group is.

Rahul Magani (00:04:14): Sure, sure. Yeah. This is something that we've kicked off very recently. But I think our interest in ZK really stems from being hardcore builders in the space. We really want to tackle some of the hardest problems in the space, and obviously zero knowledge cryptography is something that's very interesting from a technical point of view. So we've looked at ZK both from an investment standpoint, but now we're really looking into leveraging these proof systems for different sorts of internal projects that we have, including Wormhole and a few others that we're really trying to push.

Anna Rose (00:04:49): It sounds like both of you became part of Jump in the last two years or so, but the org is older, am I correct there?

Rahul Magani (00:04:59): I'm fairly recent, actually. I think it's been about seven months at this point.

Hendrik Hofstadt (00:05:04): About one year for me.

Anna Rose (00:05:05): Okay, cool. So then, having been there less than that, maybe you're not the perfect people to speak to it, but what is the backstory of Jump? It's an older org. What is Jump actually?

Hendrik Hofstadt (00:05:15): Jump actually started in the pits in Chicago, trading. It was, I think, about six, seven years ago that Jump really got active in the crypto space. First fully focused on trading, which is the roots of the company: electronic trading, high frequency trading, quantitative trading strategies. But they really quickly realized that crypto is one of the few spaces, and a very new space, where this is not a zero sum game, where there can be positive sum games in collaborating and contributing to projects. So very quickly at Jump, and particularly now Jump Crypto, which is way more public than the Jump Trading Group was before, we built out these three pillars. One of the pillars being the traditional trading, market making in all kinds of crypto markets, from centralized to decentralized exchanges. The other two pillars being the actual building.

Hendrik Hofstadt (00:06:16): So contributing to projects like Wormhole, to projects like Pyth, and to portfolio companies in the venture book; doing research, publishing research, working on really tough problems like zero knowledge proof systems, specialized cryptography, and other cutting edge cryptographic research, and applying some of the deep technological knowledge in the company to that, so building hardware, building ASICs and working with FPGAs. And then the third pillar, investment, actually supporting companies through investment.
But I think pretty much every investment Jump Crypto makes uses all of these pillars, trying to support the projects as best as possible from a technological, building, and markets perspective.

Anna Rose (00:07:04): But is it still like a high frequency trading org? Is that where most of the business activity is actually happening?

Hendrik Hofstadt (00:07:13): You'd be surprised, in that I think more than 50% of Jump Crypto at this point is on the building, research, and investing side of things. There's been a tremendous amount of growth, particularly over the last one to two years.

Rahul Magani (00:07:31): Yeah, probably in the last year the team pretty much expanded a hundred percent. And most of us are very focused on building. I think everybody, even on the non-technical side, has some background and love for crypto and truly wants to push the space forward from a product point of view. And we have the trading as well, because it's sort of our bread and butter, but we're always focused on building the new tech, building the new products.

Anna Rose (00:07:59): Quants in high frequency trading are well known, but is there a version of crypto quants that's different, or is it sort of the same role?

Hendrik Hofstadt (00:08:09): There are certainly differences between the way markets are structured, and in the maturity of the markets in particular, but I think there's also quite a bit of overlap. It really depends on what kind of product you're trading and what you're looking into. The interesting aspect is that quite a bunch of the quantitative researchers and developers that we're hiring now also get to learn how to interact with blockchains. A bunch of new hires actually went through a Solana bootcamp and are now going through different on-chain development bootcamps. And we try to apply this type of knowledge to the building side as well, particularly from an economics and quantitative research perspective. This is tremendously valuable knowledge that can also be applied to designing protocols, designing tokens and economics. So the amount of synergy is incredible.

Anna Rose (00:08:59): Cool. So Rahul, what's your background that led you to be part of this applied ZK effort? Are you coming from ZK research? What were you doing before?

Rahul Magani (00:09:10): Yeah, I had a sort of non-traditional path into ZK, I guess. In college I studied electrical engineering and finance as a dual degree program, and I was very interested in the quantitative side of modeling for finance. Then I got into AI research, and then I found my way into crypto, as one of my friends was working at Pantera at the time doing quantitative research there. And I said, you know, maybe this is something that I would be interested in. I don't know much about crypto, but I'd like to learn. I eventually interned at Pantera for a few months and really got into crypto from there. I ended up then going to Arbitrum, where I spent some time doing a bunch of different things on the scaling side.

Rahul Magani (00:10:00): And that's how I developed my interest in L2s and scaling and all the things associated with that.
And then I eventually went back to do my master's, and that's where I got interested in the intersection of cryptography and machine learning. So I was working on applying zero knowledge proofs to machine learning and figuring out how to create proof systems for neural networks. And while I was there, I sort of randomly got introduced to Jump. I had an offer within one week, and I left my master's and joined Jump. But I still had this passion for zero knowledge proofs, and I wanted to explore it as much as possible. So I've been digging into them, digging into all the math and the very interesting cryptography associated with it, and really into how to leverage these proof systems to create production ready systems and products built on this amazing tech.

Anna Rose (00:10:50): Nice. Hendrik, I wanna go into your backstory a little bit, and Certus One, the validator. So years and years ago, 2019 New York Blockchain Week maybe, I hosted a panel, and I think there was someone from Certus One on that panel, along with Bison Trails. This was right when proof of stake networks were all still very much testnets. I mean, maybe there were a few that were live, Tezos I think was live, but there wasn't much that was out. Cosmos, I think, at that point wasn't out. So that was my introduction to the validator crew. Tell me a little bit about what got you into the validator thing, what Certus One became, and what you did with it.

Hendrik Hofstadt (00:11:35): Certus One, it was in 2018. The crypto markets had just crashed. I'd finished high school, and one of my high school teachers actually got me really excited about Ethereum smart contracts, about actually building applications beyond just pure transfer of value on a blockchain. So I was diving into the space, looking around, when I suddenly saw Tendermint and Cosmos, building your own chain and the kind of internet of blockchains. And that really caught my eye. And then, I think right on the homepage of Cosmos, it said: we're looking for validators. My background at the time was mostly outside of blockchain, a lot of security research and a bunch of infrastructure things before running blockchain nodes. And this looked like a very interesting way in.

Anna Rose (00:12:31): This was all during high school?

Hendrik Hofstadt (00:12:32): That was during high school, and this was just post high school. I was backpacking in Canada, and after the backpacking trip I was, you could say, computer-starved. I was looking around and saw this post on the Cosmos site, and thought, wow, this could be an interesting way to apply the infrastructure knowledge and the security knowledge. Because cybersecurity is always kind of seen as this extra thing, particularly at the time; it's not the key requirement to have the most secure system possible. Suddenly, with slashing and validators, key management was the key and most important property. And that looked like an extremely exciting challenge.

Hendrik Hofstadt (00:13:20): So Leopold, my co-founder at the time,
and I set out to try to build the most secure validator possible. And that was the start of Certus One. I think we really approached it from a deep technical angle, because both of us have very strong technical backgrounds. We built a knowledge base, and we always tried to take a very collaborative approach of sharing a lot of our knowledge and a lot of our tooling. We broke Cosmos a bunch of times and participated in the bug bounty programs. When we did a chain, the most important thing for us was to really know the software we're running. So we read the code, we did security research, and we did that for pretty much all of the chains. We then ended up launching a bunch of Cosmos zones. We were early on Solana, doing a lot of security research there, working with the team, and expanding the validator business, building out more of the infrastructure, which then obviously got pretty wild over time as the space was exploding.

Hendrik Hofstadt (00:14:16): We would've never expected this to be such an attractive business case as well; for us it was primarily about exciting technology. And then through that work, we grew really close with the blockchain projects. So one day, we'd been doing a lot of Solana research, as I mentioned, and Anatoly calls us up on a Saturday and he's like: hey guys, you're probably some of the first people that built more complex applications, or that know the smart contract runtime on Solana. There's a project coming, which would later be known as Project Serum, the central limit order book exchange on Solana. And he was like, we don't have assets. Can you guys build a bridge? And we were like, whoa, this is pretty crazy, and a two week timeline is pretty wild.

Anna Rose (00:15:05): That's what he gave you?

Hendrik Hofstadt (00:15:06): Yeah, it was like, Serum's launching in two or three weeks. And we were like, woo, this is wild, but let's do it, let's do it. And then we dove right in. That was the first point where, outside of the pure technical infrastructure, building out the validator business and growing it with more chains, we were able to really use that deep knowledge of the system to build something. And that was purely a token bridge at the time. But then over time with Certus, what happened was that Jump had been a customer of ours on the infrastructure side, and they had been building Pyth, and a bit later reached out to us like: hey, you guys built this bridge on Solana, can you also bridge out the oracle data to other chains, help us expand Pyth to other chains?

Hendrik Hofstadt (00:15:55): We were like, wow, we had built this bridge as a token bridge. We obviously made a mistake and forgot that there are a lot more use cases in interoperability. We hadn't forgotten our roots in the Cosmos ecosystem, where IBC was already pretty much fully fleshed out. So we sat down, went back to the drawing board, and thought about how to do this. And it was at that time that we and Jump realized there are a lot of potential synergies in working together, across infrastructure, across projects like Wormhole, and across a bunch of other initiatives we'd been running in security. So this is how they ended up making an offer. The president of Jump one day, out of nowhere, messaged me like: Hendrik, I have a radical proposal to make. And that's how things came to be, and how a lot of things got crazily accelerated.
Anna Rose (00:16:49): Damn. You just mentioned something called Pyth. Can you explain what that is?

Rahul Magani (00:16:54): Yeah. Pyth is a high fidelity data oracle, on Solana at the moment. And as Hendrik mentioned, it uses Wormhole to take Pyth prices cross chain. Pyth is really focused at the moment on providing really high quality data. The publishers that we have are some of the most respected financial institutions in the world. To name a few: obviously Jump is one of them, Jane Street, Two Sigma, and a few other trading firms that are very involved in actually pricing assets and doing price discovery, and in having this data available on chain for applications to consume. So we're really focused on that at the moment, but you can think of it as similar, I guess, to Chainlink, if that's an analogy you wanna use.

Anna Rose (00:17:37): This kind of data, is this blockchain data? Is it crypto data, or is it all data, like from outside?

Rahul Magani (00:17:43): It's prices at the moment, so trading pairs essentially. Something that might be on Pyth would be a USD price that you'd be able to get from an exchange, or from one of the publishers, actually.

Hendrik Hofstadt (00:17:57): It's crypto, but it's also equities, actually. So you have some of the largest firms that trade these equities providing the data straight from the source. This is actually one of the crazy examples of what I mentioned previously, with Jump really seeing positive sum games. I think this is probably one of the first times in history that all of these firms, which are traditionally really harsh competitors, are actually working together and bringing that data in together. And it's kind of cutting out the middleman: instead of the typical oracle approach, where you have a node operator that fetches data from different sources, data is pushed straight from the source on chain. You don't have to trust any middleman in between, and you get the most recent, up-to-date pricing information. And I see way more different data points being streamed over time. That could be the weather, that could be options pricing, everything.

Anna Rose (00:18:55): So you're saying that this oracle lives... like Pyth is on Solana, right? It's a Solana based application that acts as an oracle. But why would you need to move that information anywhere?

Hendrik Hofstadt (00:19:09): So right now the data gets aggregated and lands on Solana. I think the choice there really was about Solana block times as well: with access to that type of data source, publishers can obviously stream much faster than a ten second block time, so the 500 milliseconds are super attractive. They can definitely go even faster, so let's see where that leads us. But applications like a synthetics protocol on Ethereum, or an options protocol or a futures protocol on another chain that needs strike prices, wanna consume that data as well. So it's natural that the data needs to flow to other places. However, you can imagine that with 30, 40-ish different parties streaming individual prices on a 500 millisecond interval, that's quite a bunch of transactions. There's pretty much no other L1 out there that could handle that throughput without your bills going into the millions per day, potentially.
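That throughput claim is easy to sanity check with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions pulled from the conversation (40 publishers, one update each per 500 ms, a hypothetical $0.20 per-transaction fee), not Pyth's actual parameters:

```python
# Rough cost of streaming publisher prices directly on a general purpose L1.
# All numbers are illustrative assumptions, not Pyth's real parameters.
publishers = 40            # "30, 40-ish different parties"
interval_s = 0.5           # one price update per publisher per 500 ms
seconds_per_day = 86_400

updates_per_day = publishers * (1 / interval_s) * seconds_per_day
print(f"{updates_per_day:,.0f} updates/day")       # 6,912,000 for a single feed

fee_usd = 0.20             # hypothetical per-transaction fee on a busy L1
print(f"${updates_per_day * fee_usd:,.0f}/day")    # ~$1.4M/day, per feed
```

At roughly seven million updates per day for a single feed, even a modest per-transaction fee lands in the millions of dollars, which is the argument for aggregating on a high-throughput, low-fee chain and only bridging the aggregate.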
So it's great to use Solana as the aggregator and then go cross chain using a protocol like Wormhole. Pyth is also a case where we're actively exploring zero knowledge use cases and applications. I think there are a couple of things that are really exciting...

Rahul Magani (00:20:25): There are a lot of levers to pull here with zero knowledge, for sure.

Anna Rose (00:20:28): But before we go into that, I have another question on this. What happens when there are exchanges outside of those providers to the oracle? Say something like Osmosis, where different assets are being traded in a different place. Is that fed in? I mean, maybe not Osmosis, but maybe Uniswap on one of the other EVM chains. Is any of that data also coming back in, or is there a bit of a disconnect between the centralized exchanges and the decentralized ones?

Rahul Magani (00:20:57): Right. So the way the prices are determined, in the case of Pyth, is pretty much what the publishers say the prices are. It's not really from an exchange necessarily. It's what the publishers are providing, and their ability to do price discovery is going to be the main driver of Pyth actually having robust prices. So the robustness comes from the publishers being able to do this very well, and obviously we wanted to pick some of the best financial institutions in the world to provide this, so that we have robust price aggregation and feeds coming in.

Hendrik Hofstadt (00:21:30): Yeah, it's very interesting in the sense that these data providers, what they often stream are their own trades and the prices they are quoting. And a lot of them operate across different venues, including decentralized venues. So quite a bunch of the firms there trade on Uniswap as well. They trade on Curve, they trade on all the exotic, different exchanges. They might even be doing MEV. So they have a quite good overview of the markets. But at some point exchanges also source data, and there are a couple of cases where Pyth is already consuming decentralized exchange data right from the source, with Serum being on the same chain. But there are also cases we're exploring of using Wormhole to get the data from Uniswap, from Trader Joe, and from all the venues throughout the multi-chain world, and bridging that back to form a better aggregate value.

Anna Rose (00:22:24): Mm. So I wanna hear the end of the Certus One story, and then I wanna dig deeper into Wormhole and Pyth and the ZK stuff, but I think we should finish that first. So yeah, Certus One was running validators, had this conversation about joining. What actually happened then to Certus One? Because you had a team that was running validators. Are you still running validators? How did you distribute out how this all works?

Hendrik Hofstadt (00:22:52): Jump Crypto has now taken over Certus One. We've actually increased the number of engineers working on the team and have grown to running more chains, more infrastructure across the board. The validators, some of them we've renamed to Jump Crypto, some of them are still called Certus One, but we keep on running this infrastructure.
And then our internal security research functions, development functions, all moved in. And particularly our focus on things like Wormhole has been supercharged: it's been spun out into its own foundation with a much larger team, now counting more than 30 people dedicated to the project. And this has pretty much happened to all of the efforts that we had before.

Anna Rose (00:23:38): Wormhole itself, now I wanna dig into that. What do you call it? Is it a bridge? Is it bridge technology? An interoperability solution? I feel wary saying bridge, because every time I try to call something a bridge, someone's like, no, no, no. What do you call yourselves?

Hendrik Hofstadt (00:23:56): This still happens to me. I still sometimes say bridge, but interoperability protocol is what we call it. And I think that might even change over time. The way we look at it has changed as we've interacted and worked together with more people. We started as a bridge. Now we've become an interoperability protocol. And actually, what we're seeing now is that we're getting even closer to the users. What we identified our identity, our mission, had to be is what we now call turning blockchain users into web3 users, which is allowing people to use decentralized applications without having to think about the L1. This is the final mission: no matter where I have my main wallet, I want to be able to interact with Uniswap.

Hendrik Hofstadt (00:24:43): I want to be able to interact with Trader Joe on the Avalanche C-Chain, and with any other application in the ecosystem. And I don't wanna have to bridge around, move funds, think about different wallets, or switch my MetaMask network. This is just annoying. And coming from an interoperability protocol, if we look at it, the final goal has to be fixing the user experience. This is what we're looking at. We haven't found a better name than interoperability protocol yet, but I think we've really identified what the core mission statement should be.

Rahul Magani (00:25:20): Yeah. And it's not just users interacting with the applications themselves, it's also applications on different chains being able to interact with each other seamlessly. So the fundamental interoperability protocol is the generalized message passing layer. We call it fundamental because of the generality of Wormhole, as opposed to just being a specific application, like a token bridge.

Anna Rose (00:25:43): Let's actually dig in on that, because you said it started more as a token bridge. What do you have to add to allow for this generalized data passing? What do you do?

Hendrik Hofstadt (00:25:54): Yeah, you take one step back, actually. A token bridge is already one level up, it's already one of the higher level layers. Down below that, what we looked at, at the very base level, is that there needs to be a way for chains to essentially access or verify the state of the other chain. The underlying fundamental problem is that these chains can't access each other's state, or trust each other's state, without running light clients. This is where we can later jump into the ZK angle.
So we had to take it a step back and build a basic layer that would give access to this functionality of verifying state and exchanging information. This is the messaging layer. And then on top of this messaging layer, this is what happened with Wormhole V2, or now just Wormhole: with V2 we split it up into the core messaging protocol, Wormhole, and Portal, the token bridge on top, which is just an application using this primitive. There are now quite a bunch of applications out there using the raw message passing to build out use cases, like Pyth, but also a bunch of other applications we call xDapps, cross chain applications, that use the messaging to expand to other ecosystems and allow users to interact with their protocol despite being on another chain.

Anna Rose (00:27:16): So you just sort of said this: you had to go back to that idea of verifying the state. I'm assuming you don't use light clients at this point, because as I learned in a recent episode with Nomad, it's very, very expensive to run full light clients of other chains, especially on something like Ethereum, and often you are trying to link yourself to Ethereum. So what was it originally, if not a light client?

Hendrik Hofstadt (00:27:39): The way it currently works is with 19 oracles, we call them guardians. These are some of the largest proof of stake validators, which actually usually also happen to operate a significant share of the stake of any of the connected chains. They operate nodes on all of these chains, and what they do is attest to, and sign, observed messages and state of these chains. That is then used to allow the other chain to verify it. So essentially an off chain oracle. And we have a tiered approach to how we want to get from here to a set of more than 19 signers, and then finally, in the last step, to a stage where we don't increase the number but go to zero, because we don't require oracles anymore: it's fully zero knowledge proof and light client based, fully trustless.

Anna Rose (00:28:31): Wow. Okay. This is really ambitious. This is why I wanted to invite you on the show. I do wanna say this: the criticism I've heard in the past about Wormhole is that it's a multisig, it's a multi signer. When you talk about those 19 guardians, are those the multisig holders? I don't know what you call them. Multisig participants?

Hendrik Hofstadt (00:28:52): Yeah, exactly. These are the 19 oracles. I find it a bit ironic sometimes. I totally get the criticism, it's a valid point, and it's something we want to change. We wanna increase the size and eventually get rid of any trusted third party or middleman. But these 19 actually usually make up what you could say is one third to sometimes even 50% of the stake of the connected networks. So if you say you don't trust them, then I can also ask you, why are you using this chain? Obviously that doesn't apply to Ethereum, but it applies to most of what we see as proof of stake out there. Simply because, and you probably know this very well, there's only a limited set of companies that have the expertise to operate highly secure validator nodes, and that set, I think, is no larger than a hundred to 200 firms. So this is where we see the sweet spot of node operators, and where we want to go.

Anna Rose (00:29:50): Interesting.
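The quorum logic behind that guardian model can be sketched in a few lines. The snippet below is a toy model, not Wormhole's code: HMACs stand in for the guardians' real on-chain signature scheme so the example is self contained, and the 13-of-19 threshold is the two-thirds-plus rule discussed next.

```python
# Toy model of the guardian quorum check, not Wormhole's code: HMACs stand
# in for the guardians' real signatures so the example is self contained.
import hashlib, hmac

GUARDIANS = {i: f"guardian-secret-{i}".encode() for i in range(19)}
QUORUM = (2 * len(GUARDIANS)) // 3 + 1          # two thirds plus: 13 of 19

def sign(guardian_id: int, message: bytes) -> bytes:
    return hmac.new(GUARDIANS[guardian_id], message, hashlib.sha256).digest()

def verify_message(message: bytes, sigs: dict) -> bool:
    """Accept only if at least 13 distinct guardians signed the message."""
    valid = sum(
        1 for gid, sig in sigs.items()
        if gid in GUARDIANS and hmac.compare_digest(sign(gid, message), sig)
    )
    return valid >= QUORUM

msg = b"transfer: 10 wETH, ethereum -> solana"
sigs = {gid: sign(gid, msg) for gid in range(13)}              # exactly quorum
assert verify_message(msg, sigs)
assert not verify_message(msg, dict(list(sigs.items())[:12]))  # one short
```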
But you wanna expand it? So let's talk about that first step you just suggested, this idea of making it larger. What would that actually entail?

Rahul Magani (00:30:01): Yeah, I think one thing that might be interesting to go into is the original Wormhole V1, and why there are 19 in the first place. I don't know if you could talk about that?

Hendrik Hofstadt (00:30:11): The original reason there are 19 is because, with the multisig, in order to prove that two thirds plus have signed and observed a message, we need to attach two thirds plus of the signatures, which is, I think, 13 right now. So you have to bridge that number of signatures to the other chain, which is quite a bit of calldata, and then quite a bunch of signature verifications, which is expensive. We could go beyond that, but there are some practical limitations on chains with compute limits, kind of upper caps, particularly Solana as well. So 19 was the sweet spot: sufficiently decentralized, where we can get enough of a typical PoS voting power in to feel confident about the safety and liveness of the protocol, while at the same time still being viable computationally and cost wise to verify the messages. Then there are obviously interesting ways to reduce the number of signatures required while at the same time increasing the signer set.

Rahul Magani (00:31:15): Right. And I think practically, the approach that we've been looking into for increasing the guardian set is really to say: let's not verify 19 or 13 signatures on chain, let's just do one signature. There's a variety of approaches you can use, and a bunch of different cryptographic primitives, but threshold cryptography is one that's very interesting at the moment. I think Hendrik has talked about this before: he had originally looked at doing threshold cryptography for Wormhole, because he's always on the cutting edge of this stuff. But the FROST paper was released, I think, December of 2020, and at that point Wormhole had just come out.

Rahul Magani (00:31:54): So using a technology for which the paper had come out the month prior probably wasn't a very practical solution. But we now know a lot more about FROST and about some of these protocols. We know about their robustness, there have been audits of implementations of these protocols on different elliptic curves, and so we know much better now that these protocols are actually robust. So we're actively exploring this area: how to do threshold Schnorr signatures, how to expand the guardian set, maybe to a hundred, generate one signature that we just need to verify on chain, and reduce the cost as much as possible.

Anna Rose (00:32:34): Where are you using the threshold cryptography in that? Is it in the selection of the signer, or is it in the data that they're signing?

Rahul Magani (00:32:44): It's both. So I guess there are two components to this: one is the threshold component, and one is the signature component.
And one thing that's nice about some of these protocols is that you can choose any subset, let's say t out of n signers, and that will result in a valid signature that you can use. So effectively, all of these guardians would have a key share. They would generate a signature share and send it to a central dealer, which would then aggregate these signature shares into the final signature on the message. And then all you would have to do is verify that one signature, as opposed to...

Anna Rose (00:33:16): Oh yeah, this is the threshold signing thing. Okay. I remember this from conversations with the NuCypher guys and Keep, and there's actually a project called Threshold, and there's a new project called Entropy that's also playing with this stuff. Okay, got it. That's where you're using it. Because I've also heard this in the context of MEV, and of creating more privacy for what is written to the chain, that's the Osmosis thing. The word threshold just gets thrown around, and I'm not exactly sure where it lives in this. Okay, but I see what you're saying there.

Hendrik Hofstadt (00:33:48): Yep. Same technology, different application.

Rahul Magani (00:33:50): It's very flexible. It truly is a primitive, so you can use it anywhere.

Hendrik Hofstadt (00:33:55): Yeah. But really, the core of the decision to go for the 19 in the beginning was the robustness of the technology. If you're building something like a messaging bridge, or a token bridge on top of it, this is a core trusted primitive, and it must not fail, because there is a lot of value that is either explicit, as TVL, or implicit, through trust assumptions: if you trust the cross chain messaging for your governance, then the value of your protocol essentially becomes a kind of implicit TVL that is hard to measure, but it's there. It must not fail. It must be as robust as possible. So making conservative technology choices was always super important. We saw what happened with some of the threshold signature papers: there were implementations that forgot a very tiny detail, which was tragic in the end,

Hendrik Hofstadt (00:34:48): and would've allowed the protocol to be exploited. This is something we wanna avoid by making conservative choices on technology, while always being on the cutting edge with Rahul and team exploring these technologies and building out MVPs. So the moment we consider them safe, and also viable, we move. Viable in the sense that some cryptographic primitives are just not supported on chains, I think you've previously discussed that on the ZK side a bunch as well, where we just need to wait for L1s and L2s to support them.

Anna Rose (00:35:21): For you, at what point is it safe?

Hendrik Hofstadt (00:35:25): I think there are a lot of different aspects to when it's safe. I think when there have been multiple audited implementations, when there's been aggressive peer review of the papers, and when they've just had a bit of time to settle. There's no fixed amount of time after which you would call a paper or a mechanism mature. There are also different levels of complexity. Some mechanisms, like FROST, are fairly straightforward in how they function. Other protocols are significantly more complex.
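To make the one-signature idea concrete, here is a compressed, loudly insecure toy of t-of-n threshold Schnorr in the FROST style: a dealer Shamir-shares the key, any t signers contribute nonce and response shares, and the verifier checks a single Schnorr equation whose cost is independent of the signer count. It uses a 9-bit group and omits the nonce-binding machinery FROST adds for concurrent security, so it is an illustration of the math, not FROST itself.

```python
# Toy t-of-n threshold Schnorr: key shares -> signature shares -> one
# aggregate signature -> one verification. Insecure demo parameters.
import hashlib, random

q = 233                  # subgroup order (prime)
p = 2 * q + 1            # safe prime, 467
g = 4                    # generator of the order-q subgroup

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

t, n = 3, 5
s = random.randrange(1, q)                       # group secret key
poly = [s] + [random.randrange(q) for _ in range(t - 1)]
shares = {i: sum(c * pow(i, k, q) for k, c in enumerate(poly)) % q
          for i in range(1, n + 1)}              # Shamir share f(i)
y = pow(g, s, p)                                 # group public key

def lagrange_at_zero(i: int, signers: list) -> int:
    num = den = 1
    for j in signers:
        if j != i:
            num = num * (-j) % q
            den = den * (i - j) % q
    return num * pow(den, -1, q) % q

signers = [1, 3, 5]                              # any t of the n guardians
msg = "transfer: 10 wETH"
nonces = {i: random.randrange(1, q) for i in signers}
R = 1
for i in signers:                                # aggregate nonce commitment
    R = R * pow(g, nonces[i], p) % p
c = H(R, y, msg)                                 # Fiat-Shamir challenge
z = sum(nonces[i] + c * lagrange_at_zero(i, signers) * shares[i]
        for i in signers) % q                    # aggregate response

# On chain: one equation, no matter how many guardians contributed.
assert pow(g, z, p) == R * pow(y, c, p) % p
print("aggregate signature verifies")
```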
So with those, you'd rather wait a little bit longer, or see a bunch more audited implementations, and let other people try them out, before taking a multi-billion dollar use case with that much TVL and making the migration. But I think we feel pretty confident about FROST, and work just recently came out on making it more robust as a distributed protocol. So there we've been able to develop confidence, and we're making a strong move there right now, because there's another extremely exciting aspect to this: with Taproot now live, it allows us to also support Bitcoin on the Wormhole Portal.

Rahul Magani (00:36:40): Yeah. And I think there have been a lot of discussions about what makes a protocol secure from a theoretical standpoint versus secure in implementation. Related to ZK, I think maybe a few weeks ago there was an issue with Fiat-Shamir and the correct implementation of the hash function, an issue that, I believe, Trail of Bits found.

Anna Rose (00:37:04): I think these were Plonk implementations, wasn't it an implementation of Plonk actually, an older one?

Rahul Magani (00:37:08): I think there was an issue with that, and I believe there was also some issue with Bulletproofs, where the issue was actually found in the paper as well. So some of these things have been around for a while and people have used them, but they're so complex that it's very difficult to know if they're actually gonna hold up over time. And that's just something that happens when building new technologies.

Anna Rose (00:37:29): I recently had a conversation, I think on a podcast, where we talked about this idea that engineers, in the implementation of some of these things, actually find these bugs, but they're not always projected out into the world. The fix is just made, and not out of malicious intent, but because someone might be re-implementing it, or because there are assumptions in a paper that aren't completely overt to everyone implementing them. And that's a really interesting example, because that Trail of Bits disclosure didn't affect every system, and it was fascinating to see what got hit and what didn't. But I actually wanna move on to the hack, or the vulnerability. I wanna go into that, because everything we're talking about right now is cryptography being well battle hardened, and you even described the initial design as being conservative, careful. And, as I get it... I mean, you call it a hack, some people would say a vulnerability, but okay, so it's a hack.

Hendrik Hofstadt (00:38:36): There was a hack because there was a vulnerability, so I'm fine with both descriptions.

Anna Rose (00:38:43): All right. So in that hack, can you explain to us where the problem was? Was it in the 19 guardian choice, or was it somewhere else?

Hendrik Hofstadt (00:38:54): It was not in the design, and it was not in the 19 guardian choice. Although we've seen an example of maybe too small a signer set with the Ronin bridge exploit recently.
In this case, it was probably one of the most common things in blockchain exploits, which is a smart contract exploit. There was a bug in the implementation of the Solana smart contract of the Wormhole core messaging bridge.

Anna Rose (00:39:21): Wow.

Hendrik Hofstadt (00:39:22): Which essentially allowed an attacker to bypass the signature checks. The attacker could essentially forge signatures, pretend that there were enough signatures from guardians, and then produce valid messages. And what the attacker did was produce messages that looked like token bridge messages bridging ETH from Ethereum into Solana. So they were able to mint arbitrary amounts of wrapped ETH in the Solana ecosystem, which they then used the legitimate bridge path to bridge back to Ethereum.

Hendrik Hofstadt (00:39:53): So despite multiple passes of review and audit at the time, that was a bug that was overlooked. And I think this is what we really need to realize with what we're building here in blockchain: code is unfortunately never perfect. So striving to get as close to perfection as possible, and getting as many eyes on it as possible, is really core. There had been really strict practices before, internal review processes, and we've since made them even stricter. Despite there already having been audits, and multiple audits actually in process or about to be kicked off, we've done even more audits. We have, I think, four or five different security firms on retainer that are constantly looking at the code base, at every single change that's being made. No change rolls out without going through that.

Hendrik Hofstadt (00:40:46): And we've got one of the industry's largest bug bounty programs out there, to actually incentivize white hats to collaborate and report bugs if they find them. I think the combination of these different pillars is really what is core in something that is a rocket ship and not a car: you can't just drive to the mechanic and have it fixed. The moment it's launched, it's out there, it's on its way, and it had better work. And since then, with value secured spiking up to, I think, four to five billion, we've had tons of people look at it, also as a result of the attack. I think this is one of the pieces of infrastructure that, because with messaging you rarely modify anything, just settles over time and becomes a fundamental, very secure and mature piece of infrastructure that people can rely on. That's what it has been developing towards, having been live since then with tremendous amounts of value being secured. And I think Jump plugging the 320 million hole was a sign of the conviction we have in the security of the code base and the processes involved. But this is obviously something that I think we're gonna keep seeing in the space with applications, and it's why the audits, white hat incentivization through bug bounties, a collaborative security approach, and better tooling are so important.

Anna Rose (00:42:16): Yeah. What's interesting here is that it happened on the Solana side. And this kind of speaks to the idea that the Solana smart contract ecosystem and tooling set is probably less developed than the Ethereum one at this point. So, I'm just guessing, you don't have the same kinds of checks and tools.
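As context for that bug class: public post-mortems of the exploit described a signature verification routine that trusted a caller-supplied account without checking that it was the genuine system account it expected, letting the attacker substitute a fake one. The sketch below is a schematic Python model of that general pattern, not Wormhole's actual contract code; the names are illustrative.

```python
# Schematic model of the bug class, not the real contract. The pattern:
# a verifier trusts a caller-supplied account without pinning it to the
# trusted system address, so an attacker can substitute a fake account.
from dataclasses import dataclass

SYSVAR_INSTRUCTIONS = "Sysvar1nstructions1111111111111111111111111"

@dataclass
class Account:
    key: str      # the account's address
    data: dict    # whatever the account's contents claim

def verify_vaa_vulnerable(accounts: dict) -> bool:
    ix = accounts["instructions"]
    # BUG: never checks ix.key, so the caller controls what "was verified".
    return ix.data.get("sig_check_passed", False)

def verify_vaa_fixed(accounts: dict) -> bool:
    ix = accounts["instructions"]
    if ix.key != SYSVAR_INSTRUCTIONS:   # FIX: pin to the trusted address
        return False
    return ix.data.get("sig_check_passed", False)

fake = Account(key="AttackerChosenAddress", data={"sig_check_passed": True})
assert verify_vaa_vulnerable({"instructions": fake})     # forgery accepted
assert not verify_vaa_fixed({"instructions": fake})      # forgery rejected
```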
And we had Georgios on recently to talk about Foundry, and I mean, that's pure EVM, it's not looking at anything else. So is there an equivalent of Foundry or Truffle or something like that over on Solana?

Hendrik Hofstadt (00:42:50): There are toolchains. A lot of that has also just developed recently, and security scanners are being built in the Solana ecosystem. In the EVM ecosystem, you have three, four different security scanners that can help automatically detect some of the common vulnerabilities, you have solvers, you can do formal verification of smart contracts. These are pieces of dev infrastructure where the EVM world is just still miles ahead. And I think that was part of the reason this happened: it was dealing with a deeper, more complex Solana primitive that, maybe out of 20 people building smart contracts, only one or two ever have to interact with, and where the tooling is just not as fleshed out yet. But that doesn't change the fact that things can be overlooked even in the EVM world, and these processes remain just as important there.

Anna Rose (00:43:48): To me it highlights something: there are all these new chains coming online, and yes, a lot of them are EVM compatible, but there are also a lot of new smart contract chains. People are bridging to those chains, there are all these different kinds of bridging technologies happening, but the smart contracts on those new chains need to catch up in terms of the security understanding. It's not to say they won't, I'm sure they will, but it's maybe something for people out there listening to start working on. Because it's scary, that idea of becoming very multichain and having these kinds of dangerous pockets that are so unknown.

Hendrik Hofstadt (00:44:28): Yeah. I think we're trying to tackle some of that in particular with building dev tooling for building cross chain applications across these highly heterogeneous environments. The core pillars of Wormhole, we define them as xAssets, truly cross chain assets that don't live on just one chain but can freely move across multiple chains, without slippage or any exchange having to happen. UST was an example of that: UST was fully fungible between all chains as a Wormhole xAsset. Unfortunately it failed for other reasons, but that part of it was particularly successful, it being able to move around. We have a bunch of other assets where we see extremely strong synergies the moment an asset becomes truly cross chain. And then the second aspect is what we call xDapps, truly cross chain applications.

Hendrik Hofstadt (00:45:20): They rely on these assets being fully cross chain fungible. You don't wanna swap every time you move between ecosystems, you want the coin to be everywhere and have a canonical representation. And then what you can do is build applications that span multiple chains, applications where you have your hub: you build your application once, in one language, that can be Solidity on EVM, you have one core deployment, and all the other deployments are essentially just light clients to that, essentially just remote controls. Then you have liquidity in one place, no fragmentation, but you can still use the application from everywhere.
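A minimal sketch of that hub and spoke shape, with a stub messenger standing in for a protocol like Wormhole; all class and function names here are invented for illustration:

```python
# Minimal model of the hub-and-spoke pattern: one "hub" deployment owns
# all state; "spoke" deployments on other chains are thin remote controls
# that forward user actions as cross chain messages. The Messenger is a
# stub standing in for a protocol like Wormhole.
from dataclasses import dataclass, field

@dataclass
class Hub:
    # All liquidity and logic live here, on one chain.
    balances: dict = field(default_factory=dict)

    def handle_message(self, payload: dict) -> None:
        if payload["action"] == "deposit":
            user = payload["user"]
            self.balances[user] = self.balances.get(user, 0) + payload["amount"]

@dataclass
class Messenger:
    hub: Hub
    def publish(self, payload: dict) -> None:
        # In reality: emit on the source chain, guardians attest,
        # a relayer delivers the attested message to the hub chain.
        self.hub.handle_message(payload)

@dataclass
class Spoke:
    # Deployed per chain; holds no liquidity, just emits messages.
    chain: str
    messenger: Messenger

    def deposit(self, user: str, amount: int) -> None:
        self.messenger.publish({"action": "deposit", "user": user,
                                "amount": amount, "source": self.chain})

hub = Hub()
bus = Messenger(hub)
for chain in ("ethereum", "avalanche", "solana"):
    Spoke(chain, bus).deposit("alice", 10)
assert hub.balances["alice"] == 30   # one pool of liquidity, three frontends
```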
And this is where we are trying to build a lot of tooling.

Rahul Magani (00:46:01): Yeah, this is actually very exciting from a security standpoint too. Because if you have multiple different chains with different tooling, and you're writing in different languages for different virtual machines, you sort of have this combinatorial explosion of bugs that you have no idea how to find. Maybe there's some bespoke primitive on Solana that you don't understand, another one on Ethereum that you don't understand, one on some other chain that you don't understand. And what's nice is, I'm particularly excited by things like formal verification, because you can prove at the protocol level: here's what I want my application to do, and here's what I believe is going to happen, please tell me if it's going to happen. As opposed to: let me go find every nook and cranny of this code base and search for what might potentially be the error. And applying those principles to one application, one xDapp, is a lot more secure, from a defense in depth standpoint, than doing the traditional bridge.

Hendrik Hofstadt (00:46:59): Yeah, kind of a build once, ship to all chains model. This is the type of tooling we wanna build, because liquidity fragmentation is just not an option for a protocol, and building the same code base in five different languages would be so annoying. I think we don't need to debate that, it's just an unnecessary amount of work.

Anna Rose (00:47:21): Going back to the formal verification idea. First of all, that's a lot of work, to do formal verification across all of these things. And even with formal verification, I feel like I've spoken with people about this, there are still limitations, because there are bugs you can't imagine testing for. You still have to have a very good imagination to be sure that everything you've tested for has been tested correctly. It sounds almost like a wild west, and a bit of, I'm gonna borrow this from the MEV language, this accelerationism: as long as there's this chance for a lot of these exploits, people will probably try, and it could actually produce something much more battle hardened in general. It does sound like we're in for some things in the next little while, but hopefully they're mitigated by as much testing as possible, and at the end of this there's something much stronger and better understood, and a lot of these other languages get better tooling and all of that.

Rahul Magani (00:48:29): Yeah. I think the main difference between blockchain development and the web2 paradigm... I mean, there are bugs for sure in the web2 paradigm, and most of them go unnoticed, largely because you're not really securing assets directly. The difference here is that you write some code and it is directly securing, in the case of Wormhole at some point, like $4 billion. So the potential risk is much, much higher.
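The kind of protocol level property Rahul is describing can be made concrete. For a token bridge, one natural invariant is supply conservation: wrapped tokens minted on destination chains must never exceed collateral locked at the source. The toy below checks it with randomized action sequences in the spirit of property based testing; a formal tool would prove it for all sequences. All names are invented.

```python
# Toy bridge invariant of the kind formal verification (or property based
# testing) would enforce: minted wrapped supply never exceeds locked
# collateral. Purely illustrative.
import random

class Bridge:
    def __init__(self):
        self.locked = 0        # collateral held on the source chain
        self.minted = 0        # wrapped tokens issued on destination chains

    def bridge_out(self, amount: int) -> None:
        self.locked += amount
        self.minted += amount

    def bridge_back(self, amount: int) -> None:
        assert amount <= self.minted
        self.minted -= amount
        self.locked -= amount

    def invariant(self) -> bool:
        return 0 <= self.minted <= self.locked

b = Bridge()
for _ in range(10_000):
    if b.minted and random.random() < 0.5:
        b.bridge_back(random.randint(1, b.minted))
    else:
        b.bridge_out(random.randint(1, 100))
    assert b.invariant(), "supply conservation violated"
print("invariant held across 10,000 random actions")
```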
And so the focus on security has to be much greater. I think that's why a lot of protocols focus on audits, on doing defense in depth. Formal verification, as you mentioned, is definitely difficult, it's more complex, but it's just another tool in the bag that you can use to prove the correctness of your program. And I think it'll become more and more important, and not just for finding smart contract bugs. It's really about proving protocol level security, right? So interactions between different programs, different modules, different libraries, and knowing that what is supposed to happen is actually happening, data moving through every library, every module, every API correctly.

Hendrik Hofstadt (00:49:35): Yeah. That's why we're actually taking the leap of getting a bunch of the Wormhole modules formally verified with a partner. We think, going forward, there's just no way around it for something as fundamental and critical as this. It just has to happen, no matter how complex.

Anna Rose (00:49:51): One quick question going back. I wanna go back to the 19 guardians, and the next generation of guardian numbers, and the ZK, because we haven't talked about that yet. But I just wanna ask about the guardians that are there now: do they have any sort of protective function? Can they halt stuff? Can they freeze anything? Do they also, in a way, currently act as a backstop?

Hendrik Hofstadt (00:50:16): Mm-hmm, so these guardians, just like in a normal blockchain network, if they shut down, or if a sufficiently large number of them, so one third, shuts down, they can halt the system. This is not explicitly programmed into the protocol, but in the case of that exploit, it's actually what happened.

Anna Rose (00:50:36): Oh, you were able to stop it, basically?

Hendrik Hofstadt (00:50:39): Within just a couple of minutes, the team had detected what happened, the guardians had been informed, and the guardians verified it and then individually decided to shut down their nodes, preventing further damage to the system. And I think this was an amazing showcase of how well the community functions. Within 24 hours, the system was operational again. I think we're quite proud of the incident response process there and how well the processes worked. As a validator, there are always these talks about what to do in crisis cases, how to collaborate, how people have exchanged numbers, and how you can page all of them in an instant. In cases like this, that is what needs to happen, and there needs to be community consensus. In this case it worked out, you could say, perfectly: they all coordinated to do governance and an upgrade fixing the bug within just a few hours, after shutting down within just a couple of minutes, individually verified, no central kill switch or anything.

Anna Rose (00:51:42): So this was like the 19 guardians saying, okay, we're gonna halt the chain to prevent further damage. What was happening on your side? What was it like in that room? Or were you even in the same office?

Hendrik Hofstadt (00:51:55): The Wormhole team is actually quite distributed, all across the world.
So the drainage of funds was quickly detected. We have an internal process, and this is part of doing security right: these things can still happen, but you have a process in place even for the worst case, as unlikely as you think it is. So there was a process around someone taking the lead and owning this incident, a war room being opened up, all people being paged and woken up, no matter what the time was. I remember I was playing a murder mystery game with my friends; that night it turned into true crime.

Anna Rose (00:52:33): Oh my God.

Hendrik Hofstadt (00:52:34): We all got pulled in within minutes after this happened and made decisions to inform the guardians, have them verify that there was indeed an incident, so they could individually make the decision to shut down if they thought it was the right one. And then it was just a very constructive process of: what was the actual bug, how can we fix the bug, how can we prevent further damage? I'm quite proud of the performance of the team and how quickly we were able to resolve and deal with the situation, despite the financial damage it caused.

Anna Rose (00:53:10): Did you find out who it was?

Hendrik Hofstadt (00:53:11): No, this is still unclear. Investigations are ongoing.

Anna Rose (00:53:15): Oh, wow. Did they use any sort of ZK related mixers so far?

Hendrik Hofstadt (00:53:21): The original funding came through Tornado Cash. Beyond that, I can't really comment.

Anna Rose (00:53:26): Got it. Hmm. Well, I guess we'll keep an eye on that. One of the thoughts I had when having you on the show, and I actually didn't know this as a fact, but I assumed it given the development of all of these interoperability solutions. I've had Axelar on, I've had Nomad on, I've been talking to these other non-bridge, interoperability, whatever they're called, solutions. And basically I got the sense that Wormhole is most definitely going to be thinking about this. Even though I didn't know it as a fact, I figured you must be looking into new models, new ways to think about bridging these assets without the guardian model. So tell me a little bit about what you're working on. You've mentioned some of the threshold cryptography stuff, but let's go further into that.

Hendrik Hofstadt (00:54:19): Yep. Maybe starting from a high level business perspective on the Wormhole approach: the Wormhole approach is always one of organic development, as the space matures and as technologies mature and become viable. And while technologies are not ready yet, we try to always provide the best possible interface to developers and to users. So the way the Wormhole messaging API is built, it totally abstracts away the back end. When you use Wormhole as a developer to build a cross chain application, or just use Portal to make your asset a cross chain asset, you never see the 19, you never see a multisig, you never see any of that. And you're never gonna see threshold signatures, and at the same time, you're never gonna see ZK proofs. This is all happening on the back end and is fully abstracted away from the interface that developers and users use. So that was the core: making it modular, pluggable.
Anna Rose (00:55:16): Would you also... you wouldn't see any delay? You wouldn't have to wait for signing to happen or something like that? Hendrik Hofstadt (00:55:22): It would only get faster, I think, with potentially actually speeding up ZK. That's another topic we can dive into that we're heavily working on. Cool. But yeah, this is essentially the focus there. And as we're doing that, there's no way around trustless. We see that the middlemen between two chains, the signers, the multisig participants, the guardians, are only there because there's currently no way for the two chains to verify each other. Fundamentally they're stepping in as a midterm solution to that. But ultimately we go where Cosmos started quite a few years ago now with IBC, which is light clients: the chains actually verify each other and establish direct trust relationships. Hendrik Hofstadt (00:56:16): And the only way we can get there, with no trust in the middleman and only trust in the chains you use, is to use zero knowledge proof light clients. And zero knowledge proofs for this reason: you could run light clients without zero knowledge proofs, but there's the aspect you already mentioned, that it's extremely expensive. Imagine relaying every single Solana slot to Ethereum. You would probably occupy most of the block space, and that would be quite expensive. So ZK less for the zero knowledge part, zero knowledge being privacy, and more for: hey, I can compress, I can prove something without giving you all the data. Yes, yes. And this is where this comes in. If we prove Mina, we can actually use the full state proof, the proven execution proof. Or if we have Ethereum, we do a light client proof of the head of the chain. Anna Rose (00:57:06): Are you influenced or inspired by Plumo and the work they did over at Celo? Because there it's the light client work; Kobi, who's the co-host of the previous episode and who I work with at the validator... Plumo being this compressed light client. Are you using this model, or is it somewhat different? Hendrik Hofstadt (00:57:25): Plumo is amazing, because they saved our team a lot of time in implementation. For a chain like Celo, we can essentially just reuse a lot of the ZK light client compression work that has been done there. Cool. But... Rahul Magani (00:57:42): And it's a similar architecture, right? Yeah. At the end of the day, it provides us a basis for how we should think about the architecture, but obviously there are gonna be some things that change. One of the difficult things about using zero knowledge proofs... in this case, we're more concerned with the compression, as Hendrik mentioned, as opposed to the zero knowledge. That's something we potentially want to have as an add-on that can enable privacy infrastructure for Wormhole. But we're really interested in doing these light client proofs, where we're essentially creating these state proofs, and then bidirectional state proofs, to allow different chains to verify each other directly. And so that's the ultimate goal.
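To make "prove something without giving you all the data" concrete, here is a toy illustration of the compression idea (my own sketch, not Wormhole's circuits, and a Merkle proof rather than a SNARK): a light client that stores only a block's transaction root can check that one transaction was included, given a logarithmic-size path, instead of downloading the whole block.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Root over hashed leaves; the last node is duplicated on odd levels."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Sibling hashes from leaf to root; the flag says if the sibling is on the right."""
    level, path = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index + 1 if index % 2 == 0 else index - 1
        path.append((level[sib], sib > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_inclusion(root, leaf, path):
    """What the light client runs: O(log n) hashes, no full block needed."""
    node = h(leaf)
    for sibling, sibling_is_right in path:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

txs = [f"tx{i}".encode() for i in range(8)]  # stand-in for a block body
root = merkle_root(txs)                      # all the light client stores
proof = merkle_path(txs, 5)                  # produced by a full node
assert verify_inclusion(root, txs[5], proof)
```

A ZK light client pushes the same idea further: instead of a path to one transaction, the proof attests to the source chain's consensus and state transition, and the verifier's work stays small no matter how much happened on that chain.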
And obviously there are gonna be a lot of challenges with this. I think one of the main ones is that the consensus of different chains is quite different, and these mechanisms are quite different. So integrating with a chain... Anna Rose (00:58:34): So which curves are allowed. Rahul Magani (00:58:36): Yes, that's another thing that we should definitely talk about, and actually something I've been more and more interested in: standardizing curves across different chains. A lot of chains even right now... for example, the EVM doesn't really support pairing friendly curves. You have BN254, which is sort of the standard, but it's not very efficient for verifying zero knowledge proofs. Once we get to a standard where we have these pairing friendly curves, and some of the cycles of elliptic curves that are necessary for recursive proofs, that will be extremely important for the adoption of ZK proofs as a cryptographic primitive, as opposed to this kind of bespoke area of research that we're interested in because it's a cool technology supported by a lot of interesting math. Anna Rose (00:59:24): But then are you waiting for that? Are you waiting for a standard across chains of which curves are allowed, and then you create a version of Plumo that you would implement there? Like, would you implement it? Would you build it out? Rahul Magani (00:59:38): Generally, I think Jump takes the approach of really pushing the forefront of technology, as opposed to waiting for things to happen. So we've been actively speaking to a few folks in the ecosystem. I recently had a conversation with the Solana folks about potentially supporting some of the precompiles that are necessary for this stuff. And we wanna leverage ZK proofs because we think they're very important and can let us do things that are more interesting than the traditional cryptographic primitives. But we still need some tooling around them. We need the ability to even write circuits efficiently, right? There are a bunch of different proof systems out there now, like Plonk and Nova; Nova's a recent one. But interacting with them... Arkworks, for example, is a pretty standard library at this point, but it doesn't support the more novel proof systems. Even Plonk support really isn't there in Arkworks. Anna Rose (01:00:35): Although there are initiatives to build that out. I know Plonk has a crew. Rahul Magani (01:00:40): Manta, for example, has recently launched OpenZL, and the folks there wanna make a higher level abstraction that allows you to interact with different proof systems from the same language. A lot of these initiatives are very, very important.
I think as we spend more time on this tech, we should be thinking about tooling. Part of the work that I at least wanna see come out of the applied ZK work we're doing is really digging into how we build this tooling and make it more effective, so that we can leverage the technology in really sophisticated, really robust ways, as opposed to just going after some pieces of it and outsourcing the tooling work to other people. We're really focused on building this out. We don't like to wait around for other people; we like to do it ourselves. Hendrik Hofstadt (01:01:25): Yeah, it's a very Jump approach. You've seen that. I think Wormhole and Pyth are an example of Jump culture, in the sense that, throughout venture, trading, and market making, we get exposure to pretty much everything out there. And when we see a problem where we think, why is no one solving this, we first try to find the people and work with them. But if there are really pressing issues where we see ways we can contribute, like Wormhole being able to solve the interoperability problem, or Pyth, as in, why am I only getting data that some validators scraped off of Google Finance and not the data directly from the source, then people just start building and applying some of the research. And on the research side, Jump's always been super research heavy and has crazy amounts of hardware expertise. Rahul Magani (01:02:17): It's not just Jump Crypto. It's always been a very research oriented firm, from the HFT side as well. Hendrik Hofstadt (01:02:24): It's: okay, we're just gonna build it, or we're gonna work with others to try to make it happen if it's not there yet. And I think this is the approach we're largely taking on the ZK front: hiring the best talent, trying to build a solution, and bringing the best people out of the traditional side of the company to help solve these hard problems. Anna Rose (01:02:45): Going back to that earlier point we made about when something is ready: how are you gauging that? You've talked a little bit about how individual chains potentially need to allow for certain circuits in order to deploy some sort of standardized compressed light client using ZK. Is there something else in the actual proving systems, or in the ecosystem, some point that you're waiting for? Because it sounds like right now it's too early: it's in your research trajectory, you're hiring a bit, and you're probably gonna do test implementations, but it doesn't sound like you're gonna enable this next month. It sounds like it's a way off. Rahul Magani (01:03:28): This is much more long term, yeah. Anna Rose (01:03:31): What's the point where you feel like it's gelled enough to actually implement? Rahul Magani (01:03:35): So, one thing that I'm specifically waiting for is support for pairing friendly curves on every chain. That is really the gating factor for actually deploying zero knowledge proofs, because you can't really verify efficiently without these pairing friendly curves.
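For context on why pairing support is the gating factor: a Groth16 verifier, to take one common proof system, reduces to a handful of pairing evaluations, which is exactly the operation a chain would expose as a precompile. The check has the shape

\[
e(A, B) = e(\alpha, \beta) \cdot e\Big(\sum_i a_i\,\mathrm{IC}_i,\ \gamma\Big) \cdot e(C, \delta)
\]

where \((A, B, C)\) is the proof, the \(a_i\) are the public inputs, and \(\alpha, \beta, \gamma, \delta, \mathrm{IC}_i\) come from the verification key. Without an efficient pairing \(e\) available on chain, this check is prohibitively expensive, which is exactly the BN254-versus-better-curves issue Rahul describes.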
And so once that happens, I think we'll be in a good position to actually deploy zero knowledge proofs on many different chains, as opposed to just the one chain that supports it right now. And one thing that I would really like to see is support for recursive zero knowledge proofs. In that case you need these cycles of elliptic curves, and that's a bit more complicated. Mina, for example, does support such curves; the Pasta curves are a known example. Anna Rose (01:04:25): Yeah. I almost feel like, and maybe this show can help a little bit, we need folks who understand this to start speaking up in different ecosystems about the need for it, because it shouldn't really be controversial. These are just things that would make connecting to other chains ZK friendly. You could actually use ZK to do it. Rahul Magani (01:04:47): Yeah. This is actually maybe a good segue, because we've been working on the hardware side as well. There's a community initiative around accelerating zero knowledge proof systems, known as the ZPrize, and there are quite a number of people working on it. I think this is probably one of the first moments in zero knowledge history where a bunch of different participants in the space are coming together and trying to standardize things. So when actually designing the prizes, there was a lot of discussion about, for example, which curves should be used. What is the standard here? There have been a number of different discussions, and people have finally settled on a few curves. Once we have this collaboration, this communication between different parties, people will know: these are the curves we need to support. So there has been some standardization across this. Like, with the merge, ETH 2 is gonna support BLS12-377, there's gonna be support for 377. Anna Rose (01:05:44): You know that's true? Rahul Magani (01:05:45): I thought so. I might be wrong, but I heard that. I know at least for the ZPrize, for example, a lot of the prizes have settled on BLS12-377 and BLS12-381. So at least there's been some discussion about that, which I'm very excited about, as opposed to just using random curves. Anna Rose (01:06:06): That's awesome. And actually, yeah, the ZPrize, that's so cool to think about that initiative. The ZPrize is pushing the standardization, because there is actually a group called ZKProof that has been doing standardization research work. But I always got the sense, and I think they did too, that they were doing this at such an early stage: there'd be new protocols coming out, new systems being developed, and it was kind of hard to say what the baseline was at that point. And it's kind of funny to think that it's this competition, basically funding the hardware acceleration, that forces the issue: no, no, now we really have to make a decision. Rahul Magani (01:06:47): Competition is a great motivator, for sure. Anna Rose (01:06:49): That's cool.
I hadn't thought about it that way. You had also mentioned using ZK in Pyth. Is it used in a different way? Rahul Magani (01:06:57): It's actually pretty similar, in the sense that right now the way Pyth works is that you have a set of publishers who sign prices that they want to attest to, and on Solana it's verified using a multisig approach. Again, we want to reduce the cost of this, so we can leverage threshold cryptography as a potential way to aggregate the signatures, so that we're only verifying one signature. This is obviously an active area of research, not something that's gonna come out next month, but we're also actively exploring leveraging zero knowledge proofs as a way to maintain state off chain, as opposed to maintaining state on Solana.
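As a toy illustration of the threshold idea (my own sketch; Pyth's design here is unsettled, and production schemes would use threshold BLS or ECDSA and never reconstruct a key), here is t-of-n Shamir sharing over a prime field, the Lagrange-interpolation core that threshold signing builds on:

```python
import random

P = 2**127 - 1  # a Mersenne prime; real schemes work in an elliptic-curve group

def share(secret: int, t: int, n: int):
    """Split `secret` so that any t of n shares recover it (degree t-1 polynomial)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 via Fermat
    return total

secret_key = 123456789
shares = share(secret_key, t=13, n=19)  # e.g. a 13-of-19 quorum, like the guardians
assert reconstruct(random.sample(shares, 13)) == secret_key
# A real threshold signature scheme never reconstructs the key: signers combine
# partial signatures, and the chain verifies a single aggregate signature.
```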
Hendrik Hofstadt (01:07:43): Imagine you have Pyth data. It essentially gets aggregated on Solana, so you are somewhat implicitly trusting Solana validators besides the data source. If you want to go cross chain, you then trust the proof in addition to the aggregation there. Here you can essentially eliminate every single middleman between you and the sources of the price components, the individual price contributions of the participants. And you're kind of cross chain native, because the moment you have a proof of an aggregate price, that is portable. A ZK proof is very portable, and you can even compress multiple of them into proofs of proofs. So imagine, on the Wormhole side, a proof of proofs of the state proofs of whole blockchains; here, a proof of proofs of 200 or 2,000 different instruments that are getting priced. You submit that single proof to the chain, at the cost of one transaction and one proof verification, and then you mainly do a very cheap inclusion proof to get an individual price. This is where we want to go. And then, as these proofs get faster, we can have data... Rahul Magani (01:08:57): ...scale arbitrarily, and that's the cool part. You can scale to as many publishers as you want, you can scale to as many prices as you want. There's really no limit other than the verification cost on the chain itself. And that's really the approach. There's this really nice divergence in these proof systems between the computational complexity of generating the proof and the complexity of verifying it. That's where we really get the cost savings. Hendrik Hofstadt (01:09:22): Yeah, it's pretty amazing. You can get really crazy frequencies of updates, and you have no cost of scaling to more products, which is the big goal. And here, in the same way: research means we're starting to write circuits and experimenting with different proof systems, but a lot of the things we do now reach beyond research. These are more like MVPs that can develop into a full product. This is not a small research team somewhere in the basement of the company. This is very much directly next to the product team, or sometimes the product team is already working on it. And I think a lot of the circuits we'll be developing, like the light clients on the Wormhole side: the designs that get written now are going to be ready for when the technology is there to actually cheaply verify them on chain, or to verify them on chain at all. So once it's viable, we can make the move and ship next month. Rahul Magani (01:10:16): We're gonna push to make it viable too, right? We wanna build those primitives as well. Hendrik Hofstadt (01:10:20): Yeah. We have a team working on FPGA hardware acceleration, competing in the ZPrize ourselves, besides sponsoring it. Trying to push the boundaries wherever possible. Anna Rose (01:10:31): Cool. I actually wanna know the state of what you just described for the ZK Pyth stuff. Pyth, am I saying it right? Rahul Magani (01:10:38): Yep, Pyth. Anna Rose (01:10:39): So what state are we talking about? You say you're next to the product, but how far off...? Rahul Magani (01:10:44): Yeah, so we're very new at this. At this point we have built out a very small proof of concept, a very small kit to do signature verification. We've obviously spent a lot of time on it; I've been writing quite a few design docs, trying to come up with an architecture that makes sense. We've explored a bunch of different things: zero knowledge proofs; creating a peer to peer network that's asynchronous; using threshold cryptography; using threshold cryptography with ZK proofs to make it curve compatible, so that you could wrap up the entire state in a proof and then sign it with threshold cryptography so that you can actually verify it on chain. So there are a bunch of different architectures, which we were actually just talking about before this, and how we could make them compatible. There are a lot of different options we're exploring right now. We still need to go through the process of rigorous design, and then the process of actually engineering something. So it's definitely gonna be some work, but it's very interesting. Hendrik Hofstadt (01:11:46): Yeah. Something that's gonna come fairly fast: right now publishers stream straight to Solana and publish their prices there. In order to even do the ZK prices, we need a peer to peer network where everyone gossips and publishes the individual price components. And starting from that point, some of these POCs might even be able to roll out to some chains and actually be used. So they might actually be viable: alpha, but usable in the real world. And this is kind of amazing, because no matter what chain, no matter what network, as long as enough people are publishing data, you can push it on chain, and you totally avoid these single points of failure. We want to get there as fast as possible, so we're gonna take an incremental approach here with Pyth. I think that's more possible than with Wormhole, where, in order to really put it into practice, all the chains need to support it. With Pyth we can take an incremental approach. Rahul Magani (01:12:42): Yeah, it's definitely nice in that regard.
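A minimal sketch of that publisher flow (the structure is hypothetical; the Ed25519 primitives are real, from the widely used `cryptography` package): each publisher signs its own price component, an aggregator collects them and computes a median, and a verifier today checks every signature individually. Threshold signatures, or a ZK proof that the median was computed over correctly signed components, would compress that to a single on-chain verification.

```python
from statistics import median
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical publishers, each with its own signing key.
publishers = [ed25519.Ed25519PrivateKey.generate() for _ in range(5)]
prices = [2012, 2015, 2013, 2014, 2016]  # price components for one symbol

# Each publisher signs "symbol:price" and gossips it to the network.
signed = [
    (price, key.public_key(), key.sign(f"ETHUSD:{price}".encode()))
    for key, price in zip(publishers, prices)
]

# Verification today: check every individual signature (cost grows with n).
for price, pub, sig in signed:
    pub.verify(sig, f"ETHUSD:{price}".encode())  # raises InvalidSignature if bad

print("attested median price:", median(p for p, _, _ in signed))
# With threshold signing or a succinct proof over this computation, the chain
# would verify one object instead of n signatures.
```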
I think one of the things that I really wanna highlight here is that we're fundamentally building utilities for the ecosystem with Wormhole and Pyth, and we want these utilities to be decoupled from each other. You don't want your electricity to be coupled with your gas, right? If you lose your gas, you might lose your water, and that's not great for your house. So for this ecosystem: oracles are a fundamental primitive of the ecosystem, and so are cross chain bridges and generalized message passing, and so are layer ones. We want to make sure there's no single point of failure, no domino effect. When, for example, something happens to Pyth, it shouldn't affect Solana, and when something happens to Solana, it shouldn't affect Pyth. There should be a fundamental independence between them. That's really what we're trying to achieve, and zero knowledge proofs became a way we were exploring to start decoupling. Anna Rose (01:13:41): Cool. I wanna wrap up on sort of a last question, bringing back the future vision you have. Let's assume that in a short amount of time pairing friendly curves are supported everywhere and you're able to do the ZK model. Would you remove the guardians in that case, or would you keep something like them? This is more of a bridge question, and it also, by the way, comes back to the point about UST: the ability to stop something when it's going out into the ecosystem, when maybe it's toxic. Rahul Magani (01:14:18): Yeah, there's definitely a case to be made for keeping some set of guardians. One thing we were talking about at some point was: if you wanna make a protocol truly decentralized, you want the participants of the protocol who are most invested in it to have the ability to provide their resources in a moment of catastrophic failure, right? So figuring out what these failure modes are, and then saying who is going to be involved in ensuring this doesn't happen, or that we can stop what's happening. And in that case, instead of Wormhole arbitrarily picking who these guardians are, it would be more the network picking who is responsible for this failure mode. That's more the question. Anna Rose (01:15:05): You kind of picture it being governance, right? Almost a vote, something that would decide which assets maybe get stopped or included? Hendrik Hofstadt (01:15:14): After a while. And this is very much on the Portal side, on the token bridge side. We're actually also moving away from calling it the bridge; I like calling it just, essentially, a portal to create cross chain assets. Because fundamentally, with one click, you have a representation of your asset across all of these chains that can freely move between them, fully permissionless and without any whitelist or anything. So that's pretty cool.
But one thing we're thinking about there, particularly for higher order applications on top of Wormhole like the Portal, is a council that is governance elected and that has a stake: two of them can halt it for five minutes, three of them can halt it for 15 minutes, and if they halt it and governance decides it was not a legitimate halt, they can get slashed for that amount (a rough sketch of this mechanism appears after this exchange). Hendrik Hofstadt (01:16:01): And ideally the bond they put up would be larger than what you would reasonably expect the MEV from halting token transfers on a particular link to be. I think a token bridge or tokenization protocol will require this. For the messaging itself, I think there should rarely or never be a case where a halt is needed, because fundamentally it's just attesting state. And mentioning UST: I think UST would've been an example where there wouldn't have been any reason to stop. Actually, I think this was one of the strongest days for Wormhole, because Wormhole facilitated pretty much all of the UST transfers across chains, and Wormhole facilitated a lot of the bETH and stETH transfers in and out of Anchor, because they'd been using it as their canonical bridge, without any hiccups, on record days in volume. The whole team is really proud of how the bridge was able to stand up to what, I haven't checked the exact numbers, but very likely was record bridge volume. Anna Rose (01:17:05): Are you talking about when it fell apart and everyone tried to get their assets out? Hendrik Hofstadt (01:17:10): Yeah, that was probably the most stressful situation a bridge can imagine: your bridge happens to be the one a protocol that is currently falling apart has chosen as its canonical bridge. That was record load in terms of messages and value being transferred in a short time. This is also why circuit breakers on volume, which we've been discussing for a long time, aren't really valuable: the last thing you want to happen in this situation is the transfer of UST being cut off. So this is why we're extremely happy that the protocol held up so well. And I think it's a sign of how this technology becomes more and more robust over time and can actually handle this type of load. Because what we want is for the really big assets in this space to become cross chain assets, and I think they can only reasonably do so if they know the bridge protocol will be able to handle the craziest stress situation you could imagine, which I think this was pretty much as close to the extreme as you can get. But... Anna Rose (01:18:16): Full market meltdown. Yeah. Hendrik Hofstadt (01:18:18): Portal can. Anna Rose (01:18:19): I didn't realize... I didn't know you were the canonical bridge to Anchor. That's wild. It was a tough week for a lot of people, but I guess it is very good that they didn't feel stuck by that. I know that at some point IBC transfers stopped from Terra to other places in the ecosystem.
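A rough sketch of the council mechanism Hendrik described a moment ago (all parameters hypothetical; this is a design idea from the conversation, not shipped Wormhole code): council members post a bond, small quorums can impose short halts, and governance can later slash the bonds behind an illegitimate halt.

```python
import time

# Hypothetical rule set: quorum size -> halt duration in seconds.
HALT_RULES = {2: 5 * 60, 3: 15 * 60}

class Council:
    def __init__(self, bonds):
        self.bonds = dict(bonds)   # member -> staked bond; ideally each bond
                                   # exceeds the MEV expected from a halt
        self.halted_until = 0.0
        self.last_halters = []

    def halt(self, members, now=None):
        """A quorum of bonded members imposes a time-limited halt."""
        if any(m not in self.bonds for m in members):
            raise ValueError("only bonded council members can halt")
        duration = HALT_RULES.get(len(members))
        if duration is None:
            raise ValueError(f"no halt rule for a quorum of {len(members)}")
        now = time.time() if now is None else now
        self.halted_until = now + duration
        self.last_halters = list(members)

    def is_halted(self, now=None):
        return (time.time() if now is None else now) < self.halted_until

    def judge(self, legitimate: bool):
        """Governance verdict: slash the halters' bonds if the halt was illegitimate."""
        if not legitimate:
            for m in self.last_halters:
                self.bonds[m] = 0
        self.last_halters = []

# Two members halt transfers for five minutes; governance deems it illegitimate.
council = Council({"alice": 100, "bob": 100, "carol": 100})
council.halt(["alice", "bob"], now=0.0)
assert council.is_halted(now=60.0) and not council.is_halted(now=400.0)
council.judge(legitimate=False)
assert council.bonds["alice"] == 0
```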
Hendrik Hofstadt (01:18:38): This was super fascinating, because here comes the difference between light clients and this oracle approach. All of the guardians run a full Terra node. So when someone takes over Terra consensus and tries to fork off, tries to arbitrarily mint coins or something, that would be an invalid state transition, but a light client would still think: oh look, it's valid validator data that signed this token transfer out. So you could print money with a light client. In the Wormhole case, if they had forked off the chain and taken over the network during this crazy inflation, you wouldn't have been able to take out arbitrary amounts, because the guardians' full nodes would have rejected the invalid state transition. People have been discussing whether the bridge could fail if Terra got taken over by a single validator, and in this case it wasn't. So there are some interesting considerations when you do just light client proofs and not execution proofs of the actual virtual machine. Anna Rose (01:19:40): Wow. I feel like there's so much story time in this: these war rooms, and the imagination people need for what would happen if. And then you have to map out the activity of millions of people at the same time and try to figure it out. This is simulation, but simulation of things that nobody knows yet. It's so unmapped. It's crazy. Hendrik Hofstadt (01:20:03): Yeah. The best thing is to have good tooling for observability, and most importantly, for it to be open source tooling. If the only people monitoring the protocol are sitting at Jump, something has gone fundamentally wrong. There needs to be tooling for the community to individually be able to see transfers going on and detect malicious behavior. I don't want a situation where the Jump contributors go to the guardians and say, shut down your bridge, there's something wrong, and they would just do it. I would never want that to be the case, and it isn't the case: there's always a way to individually verify. It's the same as with validators. In the beginning, they did whatever the network told them: upgrade to this version, there's an emergency patch. Now we've got security councils coming together, verifying the source code changes, and actually being these checks and balances. And it's the same for bridges. This just needs to be supported further, because it's the only way this decentralized infrastructure can really grow. Rahul Magani (01:21:04): And the same with the future plans: we're always looking in that direction. How do we ensure that it's distributed and decentralized? It's always in the back of our minds. Anna Rose (01:21:13): Yeah. Cool. Well, I think on that note we can wrap up the episode. I wanna say thank you so much for coming on and sharing with us the story of Jump, of Wormhole, of the applied ZK group, and the work that you're looking forward to. Really appreciate it. Hendrik Hofstadt (01:21:29): Thanks a lot, Anna. This has been a great conversation. I enjoyed it a lot. Rahul Magani (01:21:32): Thanks. Really enjoyed it. Anna Rose (01:21:34): Cool. And I wanna say thank you to the podcast editor Henrik, the podcast producer Tanya, and Chris, who worked on research for this episode. And to our listeners: thanks for listening.