Anna (00:05): Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in zero knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online. This week, I catch up with Adrian Brink, one of the founders of Anoma. Anoma is a new protocol for multi-asset privacy and N party bartering. We chat about his previous work in the space and then dive into the new project. We talk about their aim to create a system that allows any digital asset to function as a means of exchange and payment, and what kind of projects or ideas Anoma could enable. Anna (00:50): Before we start in, I want to let you know about the next upcoming zkSessions event. The event happens on June 23rd, and the topic will be all about DAOs, NFTs and novel funding mechanisms. One of the goals is to find out if and how we may be able to incorporate privacy and zk ideas into these new systems. I will also be hosting an interactive community discussion about how the zk community could potentially start using some of these tools. The event happens, as mentioned, on June 23rd, I've added the link in the show notes, and I hope to see you there. Next up I want to thank this week's sponsor, EY blockchain. EY is committed to building a better working world on the public Ethereum blockchain with robust privacy technologies, technologies like zero knowledge proofs. EY believes blockchain technology will be the glue that knits together a productive global business ecosystem. To learn more about their products and services visit them at blockchain.ey.com or check out EY's open source contributions at github.com/eyblockchain. I've added both links in the show notes. So thank you again, EY, for sponsoring the Zero Knowledge Podcast. Now here is my interview with Adrian. 
Anna (02:05): So today I am talking with Adrian Brink, one of the founders of Anoma, which is a new protocol for multi-asset privacy and N party bartering. Welcome to the show, Adrian. Adrian (02:15): Thank you for having me. Anna (02:16): So, Adrian, you've been in the space for a while. I think we've known each other for at least three years, but at the same time, this is actually the first time I have you on the show, I believe. Adrian (02:26): That's true. Yes. Anna (02:27): So before we start in on Anoma, let's do a little background on you. Where did you start in blockchain? I think we met at the first zkSummit, back in 2018. So you were clearly there already. Tell us where you started. Adrian (02:42): So I wrote my graduate thesis in Computer Science on e-voting on blockchains, and this was at the time of the Catalan independence referendum. And I'm personally not a huge fan of independence movements, but I thought it'd be neat if the Catalan people could have a way to have censorship-resistant e-voting systems. And I built it on Ethereum, at that time. And of course, any computer scientist listening right now will know e-voting is the thing that no one wants to touch in computer science, because everyone believes it's completely insecure. For some reason, we put the financial system on computers, but no one trusts them enough to put voting systems on them. It's still funny to me. But so I built this and, as part of the background research, I came across Tendermint, Tendermint consensus actually, dove into this, and then I ended up joining Cosmos. I was the 3rd engineer, a core protocol engineer, mostly because I had written to Jae that I knew how to code, asking whether he needed any help building it, because I thought it was a neat project. Anna (03:41): It was early. What year are we talking? Adrian (03:43): It must've been 2017, I think. Anna (03:46): Okay. Was the project already launched? 
Adrian (03:47): No, this was just about the time of the fundraiser. I think I joined like a week after the fundraiser. From there, built up the entire Cosmos stack, helped out there a lot. Afterwards we launched Cryptium Labs as a Validator. We were one of the very first validators, and as a result I got to know my two current co-founders, Awa and Chris, because we were all working in Cosmos at the time. So we built up Cryptium Labs as our answer to "No one knows how to run these things. No one knows how to run proof of stake and the people that build it are clearly not the most qualified people to run it, in practice." So we sold that company earlier this year to Chorus One. But also it turns out that validation is honestly not that interesting in the long run because it's mostly racking up servers in data centers. And we always loved doing research and protocol engineering. So we ended up for probably a good two years being the second core dev team on Tezos, shipping all major upgrades while we were there. So we scaled this to 15 people and we took everyone with us to work on Anoma, which we launched earlier this year. Anna (04:47): You mentioned your two co-founders, Awa and Chris. Awa has been on the show, I think once, during a combo episode, which I did at an event, so a really short interview. And Chris Goes definitely was on the show last year, talking about IBC. Adrian (05:02): Yeah, Chris is the lead author around the IBC protocol, also worked on Tendermint consensus for a long time and the Cosmos proof of stake system. Anna (05:10): Was it the three of you at Cryptium? Or just you and Awa at Cryptium? Adrian (05:13): No, it was the three of us. Anna (05:13): Oh, it was? Okay. So this is a long-standing founding team. Adrian (05:18): Yeah, we've built, at this point, 3 companies. Anna (05:20): I should also, as an aside, say the Zero Knowledge Validator is actually an investor in Anoma. But when we did that investment, we were kind of an early stage investor. 
I know since then there's been quite a bit of development and that's what I hope to jump into today with you. So let's dive into this new protocol for multi-asset privacy. What, in a nutshell, is Anoma? Adrian (05:43): So in a nutshell, Anoma is a protocol fundamentally designed differently from the existing smart contract platforms, like Ethereum, and also the existing chains like Bitcoin, for example. Anoma is fundamentally asset agnostic, it doesn't care, it doesn't have a specific base layer asset. Of course, there's a proof of stake asset, but you can, for example, also pay transaction fees in other assets that you bring onto the platform. So Anoma fundamentally makes the choice that the world is going to be interoperable. We don't know necessarily via which protocols, but we have good ideas there, but assets will flow freely between state machines. And Anoma is there to enable people to have multi-asset privacy, for whatever assets they want to bring, and to allow them to be traded in a very flexible manner, which is derived from a protocol called the Wyvern DEX protocol, which is the protocol backing OpenSea, by the way. No one knows about the protocol, because Chris wrote this a long time ago. Anna (06:42): What's the name again? Can you say that? Adrian (06:44): The Wyvern DEX protocol. Anna (06:45): 'Wyven' DEX protocol Adrian (06:47): Wyvern, yeah. Anna (06:49): Wyvern DEX. Okay, cool. We'll put that in the show notes for interested listeners then. Adrian (06:54): Yeah. And also architecturally Anoma just makes quite a few different choices. So it's built on fast finality BFT, it uses fractal scaling. We don't believe that there needs to be one validator set that rules all the state machines. I think this is very, very unlikely to appear in practice; it seems very unlikely that the world will ever move commerce into the Ethereum validator set. No matter how scalable it is on a technical side, it's just not how humans socially work. 
We don't have one world government for the same reason. So Anoma scales fractally, so that there can be many different instances in different geographies run by different people, but they all interoperate with each other and assets can flow within Anoma, as well as from Anoma to other systems and into Anoma. And you can just use these assets to privately pay someone, to just shield your assets, to make transferring your own assets private. And it also allows you to have these very complex state transitions, where, for example, you want to trade a CryptoKitty for a concert ticket in Barcelona for some BTC, for some ETH, and then you can atomically settle all of them. And I think maybe the last thing here is also that we fundamentally believe that zero knowledge proofs are going to be big, and privacy is a fundamental human right. So what people can do actually is deploy their own Validity Predicates. And those Validity Predicates can do a lot of things. They can be, for example, a new private bartering circuit, a new private trading circuit, a new privacy-preserving transfer protocol. Developers can write these; we're building a language called Juvix, which compiles down to zero knowledge proofs, which makes this very convenient. So developers can deploy these as Validity Predicates as well. Anna (08:31): There's so much to go into in everything you just said. Adrian (08:36): I know, I'm sorry about that. Anna (08:36): One thing that stood out to me here was the fractal scaling, this idea of various ecosystems, various blockchains. This sounds a lot like the Cosmos ideal and this concept of IBC. And I know Chris worked on that. So I'm wondering, what is the connection, in your mind, between the way you're thinking about this fractal scaling and what you were working on at Cosmos and what Chris was working on with IBC? Adrian (09:03): Yeah. So Chris is the lead architect on the IBC protocol. And IBC is a fantastic protocol, I have to say. 
I think people don't really understand this, but IBC isn't tied to Cosmos, IBC will work between NEAR and Polkadot. And Anoma, it's just like... Anna (09:19): it's a bridging idea, right? Isn't it a form of a bridge? Adrian (09:23): It's not a bridge in itself. It's just a way to describe how data, Proof-Carrying Data, gets transferred between two blockchains and how it gets authenticated. You can think of this more like TCP/IP, where it's just the protocol that describes how a packet from Polkadot gets authenticated, how the Polkadot validator set can be read and interpreted on the NEAR side and vice versa. Anna (09:47): It doesn't involve creating light clients on each of the chains? Adrian (09:50): So that is the requirement for the protocol, that they have efficient light clients. Anna (09:54): So they do have light clients. Adrian (09:55): Yes. So that's a requirement: in order to be able to instantiate the protocol, you need to have efficient light clients on either side, because otherwise the communication complexity blows up. But so Anoma doesn't take the view — Anoma isn't a routing hub for assets. Anoma is just a protocol where people can use their assets initially very much for privacy reasons, so for multi-asset privacy. Anoma will be the very first solution, I think, in the world, where you can have a unified privacy set between all assets. So when you think about things like Zcash and Monero, the reason why they have a really hard time getting traction is because, while they provide privacy, they tie it to a very specific asset. And like, I don't want to pay for coffee in ZEC or Monero tokens. Anna (10:41): Although apparently there are places that are accepting that. Zooko showed me that recently. Adrian (10:47): Yeah, there's this weird thing where like Room 77 in Berlin was accepting Bitcoin for burgers. It's a weird novelty, but it's not going to be the thing that gains traction. 
So with Anoma, you get, for the first time, multi-asset privacy, where to an external observer, it's indistinguishable whether you are buying coffee using USDT, or selling a house for a hundred million dollars. So all assets have the same privacy guarantees, and you can think of the multi-asset shielded pool, which does this, as an extension to Sapling: where Sapling shields a sender, a recipient and an amount, the multi-asset shielded pool also shields the asset identifier. Anna (11:22): Is there actual computational stuff? Is it more than just transactions that are happening in this privacy setting? What I mean is, is there a private computation? Is there more than just account balance changes? Is there something else? Adrian (11:38): The multi-asset shielded pool is just one specific Validity Predicate in Anoma. So there can be many more, you can deploy others that do more private computation, but the multi-asset shielded pool specifically doesn't do private computation, it just does proof checking on the account balances, that you're not inflating tokens. Anna (11:55): Let's talk a little bit about its connection to Sapling, because you're using part of Sapling in your construction, is that right? Adrian (12:03): Yes. So this is an extension of Sapling. Sapling is great. That's no reason to throw away Sapling. I don't understand why a lot of privacy tech, that tries to do this on Ethereum, doesn't do this. But Sapling has amazing features like viewing keys, spending keys and so on. So I'm a big fan of Sapling. So this is an extension to Sapling. Anna (12:23): Got it. But did you redeploy it then? Or is the idea that you're going to take that code and redo it? You're not actually connecting to the existing live Sapling on Zcash? Adrian (12:34): Oh no, not at all. We're just using the code and the code is being redeployed. So the extended code, the multi-asset shielded pool, is being redeployed. 
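The Sapling extension Adrian describes can be pictured with a toy note commitment: a hiding commitment that now covers the asset identifier as well as recipient and amount. This is only an illustrative sketch in Python — real MASP notes use algebraic commitments (Pedersen/Sinsemilla) on an elliptic curve, not SHA-256, and the field layout here is made up:

```python
import hashlib
import os

def note_commitment(recipient: str, amount: int, asset_id: str, rseed: bytes) -> str:
    """Toy note commitment: hides recipient, amount, and (unlike plain
    Sapling) the asset identifier behind a hash with fresh randomness."""
    data = (
        recipient.encode()
        + amount.to_bytes(8, "big")
        + asset_id.encode()
        + rseed
    )
    return hashlib.sha256(data).hexdigest()

# To an observer, a small USDT coffee payment and a huge house sale
# produce equally opaque commitments:
c1 = note_commitment("recipient_a", 5, "USDT", os.urandom(32))
c2 = note_commitment("recipient_b", 100_000_000, "HOUSE", os.urandom(32))
```

Because the asset identifier sits inside the commitment, all asset types share one anonymity set, which is the point Adrian is making about a unified privacy set.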
This also means that we have to do a new trusted setup, which we're going to start sometime in the summer. Depending on where the research goes, we ultimately may do a PLONK with plookup trusted setup at the same time, so we get a universal reference string. Because one of the ideas behind Anoma is that people are able to write their own zero knowledge circuits, using Juvix. And it's very inconvenient to deploy a zero knowledge proof application if you have to redo a trusted setup every time. So hopefully we can get around this with PLONK with plookup, and then people can just write private applications and deploy them directly. Anna (13:15): If anyone wants to learn more about trusted setups, last year, I did a series of episodes all about all of the trusted setups that have happened so far. And I think since then, there's at least Plumo, which I actually work on and help out the team with. So that's another trusted setup if ever anyone wants to try one of these out, they're kind of annoying, but also kind of awesome, cause you're deep in the tech, you're like actually contributing to real privacy. Adrian (13:39): Honestly, I think the biggest thing with trusted setups is mostly that you want a large number of disjoint participants and that you ideally don't want to have to redo them. So ideally you get an efficient universal trusted setup that you can be very confident in. Anna (13:54): Totally. Let's go into the bartering system. This idea of... I think you hinted at it in the intro to Anoma, but you've talked about different assets. So bartering, that idea of like trading the milk for the flour, object for object, but with no dollar in between. Tell me a little bit about what that looks like. Adrian (14:18): Yeah. So this is very hard to build in other imperative systems, like EVM, for example, because the account model isn't structured the right way. 
But with Anoma, the way the account model is constructed is that all changes to the state happen first, so the state changes are applied first, and then all involved accounts simply get the resulting state and can verify whether they're okay with it or not. And then they say "Yes/No" to the state changes, which makes state changes touching lots and lots of accounts very, very efficient, because you can do all the verification of the state changes in each account fully in parallel, because they're not dependent on each other. When you think about this in a traditional system, you have this step-by-step execution where you go like, "Well, contract 1 does some execution steps and then calls contract 2 that does some". In Anoma this all just happens, all state changes first. And then everyone gets asked whether they're okay with the resulting state. This allows us to build an incredibly flexible N party bartering system. And what I mean here with N party bartering is, for example, let's say Alice has some BTC and wants ETH, Bob has ETH and wants DOTs, and Charlie has DOTs and wants BTC. Traditionally, there's no way to settle all three desires at the same time, and we are calling them "intents", so there's no way to settle all three intents at the same time. But with Anoma we can just take all three of them and settle them directly, thereby removing the need to have one liquid pair that everyone connects to. Traditionally, the way this is solved is everyone just has a liquid pair against USDT. And when you want to do this 3-party trade, you all go via USDT to settle this. Anna (16:03): I have a question here about prices and how you define this stuff. Are you deeply connected? Do you have an Oracle? Are you pegging this to something? Adrian (16:15): Right. So this comes to the way assets flow into Anoma. Anoma [is] fundamentally interoperable. 
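The apply-first, verify-after execution model Adrian contrasts with the EVM can be sketched in a few lines of Python. This is a toy model under my own assumptions about the data shapes, not Anoma's actual implementation; the point is that the full state change is computed up front and each involved account's check only looks at the resulting state, so the checks are independent and could run in parallel:

```python
def apply_and_verify(state, tx, validity_predicates):
    """Apply the whole state change first, then ask every involved
    account's validity predicate to accept or reject the *resulting*
    state. Each check sees only (old state, proposed state), so the
    checks don't depend on one another."""
    proposed = dict(state)
    proposed.update(tx["changes"])  # all state changes applied up front
    involved = tx["changes"].keys()
    ok = all(validity_predicates[acct](state, proposed) for acct in involved)
    return proposed if ok else state  # commit only if every account says yes

# Toy predicate: an account accepts any change that keeps its
# balance non-negative.
def non_negative(acct):
    return lambda old, new: new[acct] >= 0

state = {"alice": 10, "bob": 5}
vps = {a: non_negative(a) for a in state}
tx = {"changes": {"alice": 7, "bob": 8}}  # alice pays bob 3
state = apply_and_verify(state, tx, vps)
```

Note how no account "calls" another, which is the contrast with step-by-step contract execution that Adrian is drawing.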
So assets flow via IBC or some other interoperability protocol, it really doesn't actually matter what the interoperability protocols are. So now you have an asset in Anoma. And then you're just defining at what price you individually are willing to settle this, or to trade it. And then there's a role called "the matchmaker" that looks at all the hundreds of thousands of intents flowing through the system and sees which ones can be combined together and settled on chain. Anna (16:46): Is there a base currency then? Is there like a base value that you can peg this to? Adrian (16:53): No, this is not pegged at all. When I'm submitting my intent, I have BTC and I want ETH, I'm saying "I'm willing to trade it at this price or better". And someone else may say they are ready to trade ETH for BTC at this price or better. And then these intents flow through the system and a matchmaker will look at them and go like, "Well, these two are compatible. Let me combine them and settle them on chain." Anna (17:14): That's one-to-one though. That's like ETH/BTC, BTC/ETH. But when you talk about bartering and including these other assets and you're bundling them, how do you assume the value? Adrian (17:26): Right. This is why we're like "the tech undefining money". We don't need this fall-back base currency. So if you have the intent "BTC for ETH", that has some exchange rate effectively attached to it, that the user sets, and then you have another intent that says "ETH for DOTs", that also has an exchange rate, and a third intent that says "DOTs for BTC". So everyone has their own defined exchange rate effectively, at which point they're willing to trade. And so if you see three of them, you don't have to settle them one by one, you can just say, "Well, settling all three of them at the same time gives everyone what they want at a price that's equal or better than what they said they were willing to take". 
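The three-intent settlement Adrian walks through can be sketched as a matchmaker searching for a cycle of compatible intents. This is a toy illustration, not Anoma's matchmaker: the `Intent` shape is an assumption, and exchange rates are left out so the matching rule is just "each party's want is the next party's give":

```python
from dataclasses import dataclass

@dataclass
class Intent:
    owner: str
    give: str  # asset the owner offers
    want: str  # asset the owner wants in return

def find_cycle(intents):
    """Greedy search for a chain of intents where each party's 'want'
    is supplied by the next party's 'give', closing back on the start.
    Returns the cycle as a list of intents, or None."""
    for start in intents:
        chain = [start]
        while True:
            last = chain[-1]
            nxt = next(
                (i for i in intents if i not in chain and i.give == last.want),
                None,
            )
            if nxt is None:
                break
            chain.append(nxt)
            # cycle closes when the newest party wants what the first offers
            if chain[-1].want == chain[0].give:
                return chain
    return None

intents = [
    Intent("Alice", give="BTC", want="ETH"),
    Intent("Bob", give="ETH", want="DOT"),
    Intent("Charlie", give="DOT", want="BTC"),
]
match = find_cycle(intents)  # Alice -> Bob -> Charlie closes the loop
```

A real matchmaker would also check that the per-intent exchange rates compose to something everyone accepts "at their price or better", which is the part this sketch omits.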
And the best part is this doesn't stop at three. It can be a thousand people. You can be trading the most random assets, you can be trading local currencies, you can be trading commitments to climate change. You can be trading CO2 certificates, because they don't need to be tied to an underlying base currency. This is an incredibly generalized system that allows people to do business with other people effectively. Anna (18:27): In a previous episode we did with Martin Köppelmann from Gnosis we talked about this concept of "Coincidence of Wants", cause they have this CoW protocol. And when I was going through Anoma, I saw that you mentioned you don't follow that model. It's not Coincidence of Wants that you're using. Adrian (18:45): No. So I don't know how Gnosis uses this, but we just don't rely on the fact that two people have to have exactly the same Coincidence of Wants. It can be three or N people that collectively have an overlapping set of intents, where they all coincide in terms of what they want. In traditional markets this is always done two-way. Like, if I want to trade a chicken for a cow, I have to find someone that has exactly the inverse. With Anoma you don't, you can find a group of people that collectively settles everyone's desires. Anna (19:19): What if I wanted to unload a terrible, terrible dog coin and nobody wants it. What happens? Adrian (19:28): Then you will never get settled. If no one ever wants to buy the thing you're selling. Anna (19:32): So you can put it there, but no one's going to ever take you up on it. Adrian (19:37): Exactly. Maybe, also, I guess a note on the architecture here then. So in Anoma there are hundreds of millions of intents possibly flowing through the system, but they don't all get settled on chain. 
Only those that actually get matched with other intents get settled on chain, which means that, for example, the way a market maker would operate is, they probably quote a new price every half second or so. And they just send them as intents. So they keep updating the latest viable price, and then someone can pick it up and go "Oh, I like this. I like the price you're currently quoting, I'll settle this on chain". Anna (20:13): As a person offering, you want to participate in this. Do you offer two things? Like what you want and what you're giving? Or do you offer what you're giving and a selection of things you would accept back? Adrian (20:26): You can do both. This is completely up to you, because you can think of intents like a function that just says what state transitions you're willing to accept. So you can say "I'm willing to accept 2 tickets for an Imagine Dragons concert in Barcelona, or three blue Crypto Punks or a BTC, or a combination of all of them", but it's up to you. So when you write your intent, you write an acceptance function that says what you're willing to accept. The cool thing is this goes way beyond trading. It means that you can start having things like "we collectively agree to commit, with 1% of our transaction fees, to sequestering CO2 from the atmosphere", and only if, let's say, a certain percentage of the network agrees. So you can do these very generalized "collective commitments", as we're calling them. Anna (21:25): Taking carbon out of the ecosystem, I know that there's a protocol that actually does that, I think in the Cosmos ecosystem, where... Adrian (21:35): Oh, Regen. Yeah, they do it differently though. They try to map, or they try to identify who's doing well in terms of environmental damage. Anna (21:43): Oh, I see. Okay. So, say there were these credits, you'd still need a second entity to issue those somehow. 
And those would maybe be NFTs? I'm trying to picture what that example looks like. Adrian (21:55): Anoma doesn't do asset issuance. I mean, you could, if you wanted to, but this is really not what the protocol is good at. Things like Ethereum are great at asset issuance, and Anoma just provides the ability, once you have these assets, to effectively come to agreement over them, to come to agreement using these assets. CO2 sequestering is far away, really, and we need a bunch more infrastructure, more supply chains need to be on blockchain, this data needs to be available. We need better oracle systems. Anna (22:23): But it could eventually maybe be in it. Adrian (22:26): Oh, absolutely. Eventually this is the goal. Once we put the global economy on hundreds of millions of different blockchains and data can flow freely between them, we can design amazing computation to incentivize the right behaviors for people. Anna (22:39): Does it fall into the category of a DEX? Do you think of it in that space or is it something else? Adrian (22:48): So if you wanted to think of this very simply, in terms of early things that you can do with Anoma at launch, you can, for one, think of this as a mixer, kind of an interchain mixer almost, because you can get perfect privacy for all connected assets, without having to transfer them. The multi-asset shielded pool will just provide this, if you transfer them to yourself. And secondly, you can think of it as a DEX, but not really. The DEX is the simplest of applications that can be built on Anoma. So you can build a DEX on Anoma and the protocol supports this fully, but long-term this is much more than just a DEX. This is a way to come to [an] agreement over arbitrary state transition functions, rather than just like, "I want to trade a Bitcoin". Anna (23:32): Got it. Is it based on, and maybe you already mentioned this, but is it a proof of stake protocol in itself? Does it have a validator group? Adrian (23:41): Yes. 
Anoma is proof of stake. Anna (23:43): And it's based on Tendermint, I guess? Adrian (23:45): Not the proof of stake system, but it uses Tendermint consensus. We'll probably just swap this out after launch at some point, maybe HoneyBadger or Heterogeneous Paxos. But yeah, it's not built on any of the existing stacks. This is all written from scratch. We just happen to use Tendermint consensus, because honestly, Tendermint is a fantastic consensus algorithm and I still think it's stupid that everyone is trying to reinvent consensus. You're not gaining that much. In 2017, we had a weird time, where you could raise on your consensus algorithm; it was really strange. Anna (24:17): But so you're going to have a validator community. Is this validator community full-fledged, decentralized? Is that part of it very decentralized? Or do you have any smaller groups that are kind of born out of that? Adrian (24:34): No. So Anoma is a full base layer. It's a layer one that has a validator set of probably a hundred or more people that runs a decentralized protocol, the Anoma protocol. They may also run matchmakers and order book operators at the same time, but it's fundamentally running a proof of stake system, in order to provide private digital asset-agnostic cash and N party bartering to anyone that wants to use it. Anna (25:00): Got it. The reason I'm asking is I recently completed a spontaneous series of L2 episodes. And I recently was thinking, who isn't an L2? So I was wondering if you maybe would eventually alter this validation to not just be a validator on a base chain, but also some sort of connection point. Adrian (25:19): Yeah, I mean, I have fairly strong opinions on L2s and how they make... The thing is pretty much everything, these things will just be connected to each other. Currently we live in this world where everyone is trying to build their own walled garden. 
Like the Cosmos people are trying to build their own walled garden, Ethereum people are trying to build their own walled garden, the Polkadot people are trying to do exactly the same. And everyone says: "Interoperability yes, but only within my walled garden". And it's sort of like AOL in the early days of the internet saying: "My interoperability is by far the best in my own walled garden". And I think this makes absolutely no sense long-term. Long-term there are just going to be probably thousands, if not hundreds of thousands, of blockchains that all just really communicate with each other, because they're all going to build around modern fast finality BFT, and then the distinction between what is an L2 and what is an L1 that can just connect to other things is becoming very thin, really. So in my opinion L2s are taking all the disadvantages of building on top of an existing system. So you're inheriting a lot of the engineering deficits, but in terms of usability, you're gaining very little over just being an interoperable L1. So I think that over time, a lot of things will become just like independent L1s. Anna (26:35): I feel like more and more there's this question of we know that L2s, and other L1s bridged to other L1s, are actually different and they do offer different pricing, they offer different capabilities, but they're trying to solve similar problems. So it's not completely clear yet if one of these models is necessarily going to win. Adrian (26:56): I think the main thing is that people really think at the moment that you either win or you lose. But I think the general pie that we're looking at is way larger than most people can imagine. And it's not going to be: "Does Ethereum win against Bitcoin?" It's going to be: "Well, we can all coexist, because we all have unique offerings". And on the L2 side, to me an L2, the main thing, is that you're deriving security from some L1. So you don't have your own security. 
And early on people always assume that no one else is going to be willing to move their ETH, for example, into a different L1, because all of a sudden you have a totally different security model. But I think that most people really don't care, because most L1s have very good security. So being an L2 that's tied into having to put up state roots on Ethereum bears a lot of disadvantages, because engineering freedom just goes down by a lot. And I'm not sure that the user base will care, because from a UX perspective, whether you move into an L2 on top of Ethereum or into a different L1, it's going to be the same, you're going to go via some bridge. Anna (28:03): I want to go back to privacy. And we did already talk a little bit about using Sapling, that a lot of this stuff would be private, but I want to revisit, now that we understand a little bit more how the bartering system works and all of that, how does the privacy actually work? Is it just, this is all happening in a shielded way? Or are there parts that are very private and parts that are not? I know this has always been a question about, when you're doing asset transferring, you may not be able to have perfect privacy on every level. So I'm curious about where is the privacy exactly? Adrian (28:38): Yeah. So it depends on which Validity Predicate account you're using. The multi-asset shielded pool is just one of many Validity Predicates that can exist on Anoma. So with the multi-asset shielded pool, you're gaining pretty much perfect transfer privacy, or mixing privacy, whichever way you want to look at this. But for example, for trading, we are currently writing private trading circuits as well, where maybe some of the information is hidden and some is public, because you need it to discover counterparties. 
Over time, people can deploy more and more different Validity Predicates that have different privacy guarantees maybe, or that may not even be privacy-preserving, because they want to solve some other thing, like they want to have an automatic market maker as a Validity Predicate. This works too, you don't have to have privacy, these are just Validity Predicates that users get to interact with. Of course, we have an "Intent Gossip System", where people say, "Well, I'd like to, for example, trade a Bitcoin for an ETH". And there the question becomes: "How much are people revealing for that trade to be settled?" And I think honestly, over the long run, the way this will work is most trades happen physically, most actual commerce happens physically. And so the way you do it is that you don't announce to the world that someone can buy a Bitcoin from you, but rather that we communicate and agree on prices automatically, over a local network connection, and then just end up settling it. And at that point you can even settle this fully privately. So you start having private subnets effectively, where you may not announce all your intents, or only as you advance further in the automatic negotiation do you start revealing more and more information, but not to everyone, just at a point-to-point level. Anna (30:16): Is that like a private OTC then? That's what that sounds like a little bit. Adrian (30:19): I guess you could look at it this way. That's not important. Anna (30:21): But so you used this term "Validity Predicate", and then you say one example of a Validity Predicate is the bartering, basically the multi-asset shielded pool. So you mentioned another Validity Predicate would be like an AMM construction or more like maybe an order book-based DEX or something like this. Or maybe fix that then. What is... Cause I think just the terminology of that isn't completely clear to me. What goes under that category? Adrian (30:50): Yeah. 
So a Validity Predicate is... Every account in the system is associated with a Validity Predicate. So for example, when you hold the native asset XAM on Anoma, you hold it in an account, and your account has a Validity Predicate attached to it. And this Validity Predicate in its very simplest form may say: "You need to have spend authorization from a private key in order to move this", but then you can also start upgrading them and say, "Well, if I move under a hundred dollars, then I can sign this with this key. And if I want to move a million dollars, I need to sign with three different keys". Anna (31:24): But these are not the SNARKs, right? It's not like the SNARK, the proof, is with it? Adrian (31:29): No, exactly. So the verifier... For example, you could also say that your Validity Predicate should only accept a specific proof. And then what you deploy, as your own Validity Predicate, is a verifier for that proof. This is the way the multi-asset shielded pool is implemented: the verifier for the proof of the multi-asset shielded pool is deployed as an account on Anoma. And so whenever you want to send an asset via the multi-asset shielded pool, you submit a proof to that Validity Predicate, to that account, which then verifies it and says "yes" or "no". Which means there can be many multi-asset shielded pools, for example. And we can seamlessly upgrade them as well, because you can have multi-asset shielded pool 2, multi-asset shielded pool 3, as separate Validity Predicates in the system. Anna (32:15): In that case though, could you have a Validity Predicate model where there is no privacy, using Anoma? Adrian (32:20): Absolutely. Anna (32:20): So it's not forced. The privacy is more like in the example, in this first Validity Predicate that you're working on. That's one of the key things, right? Adrian (32:30): Yeah. Got it.
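The tiered spend-authorization predicate Adrian describes could be sketched very roughly in Python. This is purely illustrative: `spend_vp`, the dollar thresholds, and the key-set shapes are hypothetical stand-ins, not Anoma's actual interface.

```python
# Toy validity predicate: a pure check over a proposed state change.
# All names and thresholds here are hypothetical, not Anoma's API.

def spend_vp(balance_delta: int, signatures: set, authorized_keys: set) -> bool:
    """Accept a state change that debits this account only if enough keys signed."""
    amount_moved = -balance_delta if balance_delta < 0 else 0
    if amount_moved == 0:
        return True  # nothing leaves the account, no authorization needed
    valid_sigs = len(signatures & authorized_keys)
    if amount_moved < 100:
        return valid_sigs >= 1   # small move: one key suffices
    return valid_sigs >= 3       # large move: three keys required

# The ledger computes the resulting state first, then asks each involved
# account's predicate whether it is happy with the change.
keys = {"key1", "key2", "key3"}
assert spend_vp(-50, {"key1"}, keys)            # small spend, one signature: ok
assert not spend_vp(-1_000_000, {"key1"}, keys) # big spend, one signature: rejected
assert spend_vp(-1_000_000, keys, keys)         # big spend, three signatures: ok
```

The point of the sketch is the shape of the check: the predicate never executes instructions imperatively, it only inspects a finished state change and answers yes or no.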
For example, maybe this is clearer: the way Bitcoin is bridged into Anoma is... Oh, Bitcoin is hard because they don't have efficient light clients. But let's say Polkadot is bridged to Anoma, because they have an efficient light client. The DOT token is just its own validity predicate, its own account in the system. And the DOT account in the system has a validity predicate that can verify a light client proof from Polkadot, and it verifies, whenever you want to send a DOT between two accounts on Anoma, that the supply is preserved, for example. Anna (33:04): Got it. Is there anywhere else where you are working on or including privacy in the Anoma stack? Adrian (33:10): So, yes. Maybe not necessarily long-term privacy, but at least short-term privacy, which you need for front-running prevention. So we're also building a protocol called Ferveo, which is a distributed key generation protocol, which effectively means that the validator set of Anoma jointly constructs a public key. And every validator locally has a key share that they can use to produce decryption shares. And once you combine enough decryption shares, you get the end result, which is a decrypted transaction. So when you submit a transaction on other systems, what happens is that you're giving up information: a block proposer, or someone watching the mempool, has an asymmetric information advantage over you. Because, for example, you may have said, "I'm willing to trade this asset", and this is now public before it gets included in a block, before it's finalized. Adrian (33:59): So what happens in Anoma is that users encrypt transactions against this common shared public key of the validator set, and then blocks only order encrypted transactions. So as a validator, as a block proposer, I see a bunch of encrypted transactions and I go, "Well, they all compete for the fees.
Let me put them into a block", and then I'd have a block, but I'd have no idea what these transactions do. And then in the next block, block N+1, the validators jointly decrypt all those encrypted transactions. And then they get executed, meaning that no one can front-run you. Anna (34:34): But they're already queued up. I guess in the queuing part, the part where you're putting them towards being in the block, it's unclear what's in it. Therefore you can't have this Miner Extractable Value (MEV), Flashbots scenario, where the miners are basically getting in front of it. Adrian (34:49): Yeah, by the time you see what a transaction does, it's already ordered, and you can't retroactively change the ordering anymore or inject your own transactions. Anna (34:58): I'm thinking of it like a queue, but maybe it's not exactly like that. At what stage is the actual stuff encrypted, before its decryption? At what stage does it get included? Is it still encrypted when it gets included? Or is it decrypted at the moment of inclusion? This is what I'm trying to figure out. Adrian (35:17): Let's think this through, right? So there's a validator set at the Anoma network, and a shared public key is known, because validators generate it every 24 hours or so. So as a user, I want to send a transaction, not even an intent: I just directly want to move some tokens from account A to account B. I, as a user, construct this transaction and I encrypt it against the public key before I submit it to any full node. And then I submit an encrypted transaction to a full node, which puts it in the mempool. The mempool only sees, "well, this thing can pay fees right now", because that information is still available, but it doesn't see at all what the transaction does. Then let's say we have block 10 right now. So this is in the mempool. The block proposer for block 10 goes, "Well, I have a bunch of encrypted transactions here, let me just put them all in a block."
Adrian (36:02): Then the validator set comes to agreement, just a BFT agreement, on it and says, "Well, block 10 is now finalized and it includes these encrypted transactions". And as soon as this is in a block, validators that follow the protocol rules have to start submitting decryption shares for each of these transactions. And then in block N+1, the next block proposer will use all the decryption shares and include the decrypted transactions, not for ordering, but for execution. Anna (36:36): Isn't NuCypher's whole thing distributed key generation? I feel like it is, but I don't know if it's ever been used in this context. Adrian (36:45): It's different. NuCypher, I think, allows you to generate keys that are held by multiple parties, but more for the use case of being able to hold, say, a Bitcoin key that's shared between a lot of parties. With Ferveo in Anoma, this isn't the model, because the purpose is that the decryption thresholds follow the BFT thresholds, right? You need a two-thirds honest majority for BFT, and you have the same requirement for the decryption part, which means no individual validator can front-run the system. Anna (37:21): It does sound a lot like the work that I know Dev and Sunny are doing at Osmosis. Is this a connection point? Adrian (37:31): We're working with Sikka on the protocol, yeah. Anna (37:31): The reason I say that is, actually pretty recently we did an event with the Cosmos ecosystem highlighting some of the privacy-related projects that are there. I'll also add a link to Dev's talk there, because there are some visuals that kind of illustrate some of this as well. At what stage is this? Is this a protocol on paper? Adrian (37:51): This is fully implemented. Anna (37:53): Cool. Okay.
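The two-block flow Adrian walks through (block N orders ciphertexts, block N+1 combines decryption shares and executes) could be mocked up like this. To be clear, this toy replaces Ferveo's actual threshold cryptography with a simple share count; only the ordering-before-decryption structure is the point, and every name here is made up.

```python
import math

# Toy model of front-running prevention via threshold decryption.
# Real Ferveo combines cryptographic share points from a DKG; this mock
# only checks that a BFT-style 2/3 quorum of shares has arrived.

class ToyThresholdPool:
    def __init__(self, n_validators: int):
        self.threshold = math.ceil(2 * n_validators / 3)  # 2/3 quorum

    def decrypt(self, ciphertext: bytes, shares: list) -> bytes:
        """Return the plaintext only once enough distinct shares are combined."""
        if len(set(shares)) >= self.threshold:
            return ciphertext[::-1]  # stand-in for real decryption
        return None

pool = ToyThresholdPool(n_validators=4)  # quorum of 3

# Block N: transactions are ordered while still encrypted.
block_n = [b"tx1", b"tx2"]

shares = [0, 1]                          # two shares: below quorum
assert pool.decrypt(block_n[0], shares) is None
shares.append(2)                         # third share arrives for block N+1
assert pool.decrypt(block_n[0], shares) == b"1xt"  # "plaintext" recovered
```

By the time the contents are visible, the ordering is already fixed, which is exactly why a proposer cannot reorder or inject transactions based on what they do.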
So this is something that you have seen engineered, and this is actually working. Adrian (37:59): Yeah. This is not theoretical, "this may work in five years"; no, this works right now. And this just needs to get into a base layer. The problem is this is almost impossible to retrofit into existing base layers, because it fundamentally changes the way the execution flow works. Which is also one of the other reasons why with Anoma it was never really an option, even if I had liked other ecosystems, to build this on top of another layer one, because fundamentally the way we structured transaction execution, transaction submission, all the gossiping part, it's just not retrofittable into existing base layers. It's fundamentally a different architectural choice that allows us to build very different applications. Anna (38:42): But couldn't an existing POS system do a major upgrade where they incorporate something like this? Adrian (38:47): Of course. Every POS system can become any other POS system. There's nothing stopping us, because it's all open source. Anna (38:54): Although the validators might not love it. And all of the tooling might not work. Adrian (38:58): It's the same way that, yes, Google can become Facebook. That is true, the technology isn't rocket science. But most people don't do it, because you have to maintain legacy and your community may not like it that much. Anna (39:12): Is there any work like this being done for proof of work systems? Because we're thinking of this as validators building this. Adrian (39:20): This doesn't work for proof of work, because fundamentally in proof of work you don't know who the miners are upfront. In proof of stake we have a list of "these are all the public keys of the validators", and we need this because every validator needs to generate private key shards. Anna (39:36): Yes, this is a very proof of stake solution.
Adrian (39:39): This is also why I think proof of work will only exist for very simple applications going forward. It's interesting, I think Bitcoin is cool in the sense that we have a global stable clock, for example, that's hard to bias. But the innovations are coming from proof of stake, because the protocols are more flexible. So one thing about Ferveo is that it also provides really secure randomness, in the sense that it doesn't require a VDF (VDFs need special hardware) to produce unbiasable randomness. Just out of running the key generation protocol, you also get secure randomness, which we currently aren't using. It's just the way it works, the cryptographic protocol also gives you this. So we don't currently have an application in mind yet to build on Anoma that uses this. But it is available. Anna (40:30): Nice, that's very useful. I wonder if you could use that back in something that's at the core? Adrian (40:36): Yeah, I mean, other consensus algorithms use randomness to select validators. I'm not a big fan of it. Anna (40:41): It won't be that. Adrian (40:44): Maybe later on you can use this in the proof of stake system and the consensus mechanism too. Anna (40:51): I know that at some point in the last half hour or so you mentioned intent gossip. I want to bring that back up and talk about it in this context of the bartering system and all of these multiple pools, because gossip, as we've understood it, is sort of message passing, right? It's sending a message out and having it be repeated. But how does it work in this system? Is it different? Adrian (41:16): Yes. Well, okay, let me explain the way the intent gossip system works. It's a matchmaking system. So in Anoma, people can not only submit transactions that directly hit the chain, but they can also submit intents, things for which they need to coordinate with other people effectively.
This may be a trade, or an agreement to fight climate change by covering the cost of carbon. What intents people can submit is arbitrary. And the way these intents work, they float around in a gossip system before they get settled on chain, and then people look at them, and there's an intent layer for people to match different intents that are matchable with each other. This also comes in really handy, though, given the upgrade system that Anoma uses. Adrian (42:03): So Anoma upgrades fractally (it scales fractally and upgrades fractally), meaning that we don't have these binary switchovers that all existing systems have, and that really suck in my opinion, because in a binary upgrade system, the new software goes active one day and controls a hundred million, but it has had no testing outside of test environments; there was no slow ramping up of value being tested on the new protocol version, right? With Anoma, the way Anoma upgrades is that you deploy a new protocol version. Anyone can deploy a new protocol version, and over time assets can slowly migrate into the new version, right? Because fractal instances are interoperable with each other. So from the main ledger, you launch protocol version 2, slowly assets migrate, a lot of assets migrate, and then you can do the final binary switchover where you migrate all the remaining state with you, right? So this is how Anoma upgrades. But the intent gossip system works globally, over all instances, even if they're previous or future versions. Anna (43:11): Does that mean that trades can actually happen across that as well? Adrian (43:14): Yes. Trades can also happen between fractal instances, which means you have liquidity between them. Cause otherwise you have the problem that your liquidity is fundamentally tied to your fractal instance.
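The matchmaking Adrian describes could be sketched as a naive pairing over floating intents. A hedged toy only: real intent matching in Anoma can involve N parties and arbitrary predicates, and the `Intent` shape and `match_pairs` helper here are invented for illustration.

```python
from dataclasses import dataclass

# Toy intent matchmaker: intents float in a gossip layer, and a matchmaker
# pairs compatible ones for settlement. Hypothetical shapes, not Anoma's format.

@dataclass(frozen=True)
class Intent:
    owner: str
    give: str   # asset the owner offers
    want: str   # asset the owner wants

def match_pairs(intents: list) -> list:
    """Greedily pair intents whose give/want mirror each other."""
    matches, unmatched = [], list(intents)
    while unmatched:
        a = unmatched.pop(0)
        partner = next((b for b in unmatched
                        if b.give == a.want and b.want == a.give), None)
        if partner is not None:
            unmatched.remove(partner)
            matches.append((a, partner))
    return matches

pool = [Intent("alice", give="BTC", want="ETH"),
        Intent("bob",   give="ETH", want="BTC"),
        Intent("carol", give="DOT", want="ETH")]
assert match_pairs(pool) == [(pool[0], pool[1])]  # carol stays unmatched
```

Unmatched intents simply keep floating in the gossip layer until a counterparty appears, which is also what lets the intent layer span multiple fractal instances.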
So the upgrades almost always have to be binary in that case, but on Anoma you aren't forced into this, you just get one global gossip layer over all instances. Anna (43:36): I think I forgot to ask this, but we talked a little bit about the light clients: are all of the instances running light clients? Is that how you imagine it? Adrian (43:44): Yes. They connect to each other, currently most likely via IBC, because it's honestly an asset-agnostic protocol that just works well. But honestly, I'm struggling with this a little bit. I think the way people currently think about these systems is, "oh, interoperability protocols are specifically tied to existing things". And that's entirely not the point of interoperability protocols. They're meant to neutrally connect things. So Anoma will support whatever interoperability protocols come out. If a better protocol comes out, like UDP, for example, a different protocol for the Internet stack, if that comes out in two years, Anoma's happy to support it, because we just care about using well-designed, well-specified interoperability protocols. And yeah, so fractal instances also connect via the same interoperability protocols. Anna (44:33): Can fractal instances also be different types of blockchains? Like, could there be a UTXO one, an account model one? Are there variations on these, or do they have to have at least some baseline consistency? Adrian (44:50): The Anoma protocol can probably change quite fundamentally, but I don't think it could change to UTXO or a different account model, because a lot of the transaction execution flow and the way privacy works is just tied to this very generalized account model. Also, I don't see a reason why you would want to do this. UTXO seems clearly worse. Same for the existing account models with imperative execution. Anna (45:16): Okay, so let's talk about different types of account models. Is Anoma built like Ethereum, first off?
Adrian (45:25): There are accounts, so let's say it this way. But something like NEAR or even Polkadot is much closer to Ethereum than Anoma is. All these systems have accounts, and, for example, the way smart contracts on NEAR work is also this imperative execution. You invoke a smart contract (you invoke an account with code in NEAR) and it starts with the first instruction, goes to the second, goes to the third, maybe tries to call a smart contract, maybe this is a synchronous or asynchronous call, but it's fundamentally top to bottom. What happens in Anoma is more that the state changes are computed first, and then they're passed into all the validity predicates at the same time. And these validity predicates really don't do computation in that sense, they do verification: whether they are happy with the state changes. They don't go, "let me call the next contract, let me then do 5+5". They go, "this is the resulting state, am I happy with this? Am I happy that I lost a Bitcoin and received 10 ETH?" That's sort of the model. Anna (46:24): And there's this validity predicate that's submitted alongside each. Is it in each account, or is it... Adrian (46:30): Every account has a validity predicate. Anna (46:30): Is that submitted, or is that the verification side of things? That's what's used for verification. Adrian (46:38): So when someone creates an account, when it gets created, a validity predicate has to be attached to that account. And then that validity predicate does verification; it's invoked when that account is involved as well. Anna (46:50): How does the Ethereum one work then? That one's more like writing... Adrian (46:54): So in Ethereum I call an account, and then you also invoke the smart contract, the code in the account. But the code doesn't see the resulting state. It just tries to do 5+5, call a separate contract, move a token somewhere; it does imperative computation.
Whereas in Anoma it's more of a functional system, to some extent. You can think of it like this: Ethereum is imperative and Anoma is more functional in terms of how the accounts work. Anna (47:25): At what stage is the Anoma project today? Are we talking it's on testnet, it's still modules that are being put together, is it incentivized? Where are we at? Adrian (47:37): Yeah. So maybe one important note here to start off with is: Anoma is pretty far along. This is not a researchy kind of idea, where we're trying to figure out whether this works in practice or not. All the research is done and it works, and it's mostly focused on implementation at this point. So we are currently running the first internal testnets. Hopefully we will start to have public testnets in the next four weeks or so (the first public testnets), and then probably incentivized testnets later this year, as the tech gets more and more testing. Yeah, this is where we are timing-wise. Anna (48:14): Maybe it's too early to say this, but do you imagine sort of a "first use case" that you're going to highlight or work towards? Adrian (48:21): Yeah, I think one of the first use cases is certainly going to be this multi-asset shielded pool and just giving people incredibly good privacy guarantees, irrespective of what assets they hold. I think people really underestimate this; this is currently completely impossible to do in our space. If you have a CryptoKitty and you want to shield your CryptoKitty, that just doesn't work. It's impossible. With Anoma, your CryptoKitty can live in the exact same privacy set as your USDT coffee payments. So I think that's actually going to be one of the first use cases: providing people really, really good privacy, no matter what assets they hold. And then for later use cases, I can see a number of things appearing, whether around trading or AMMs. I think actually a lot around private bartering.
I think just being able to write more validity predicates and more private circuits. Anna (49:12): I'm going to take one step back for one sec, cause the CryptoKitty example just made me think of a question. If it's an Ethereum-based CryptoKitty, does it get locked somehow in Ethereum in order to enter Anoma? Is that kind of how it would work? Adrian (49:27): Yeah, I mean, this is how bridges work generally, right? The way you move an asset across a bridge is that you lock it against the validator set of the receiving side. They then mint it for you. And the nice thing is, if anyone wants to go back, you burn it on the Anoma side, and the validator set unlocks it on the Ethereum side. I think at some point we should do a whole episode around how bridging tech works. Anna (49:50): Oh, for sure. That is actually so on the docket; bridging as a topic has come up so often over the last little while. Adrian (49:58): I can give you a sneak peek there. I don't think bridging makes a lot of money. Anna (50:03): Oh, how do you know that? Adrian (50:04): Because bridging Anoma to NEAR, there's no work involved. You just send a bunch of data that exists on both chains to each other. No one is going to charge a 1% fee to move an asset from Anoma to NEAR. For legacy chains, this may be different. So for Bitcoin, ETH, Monero, Zcash... Anna (50:22): NFTs maybe? Adrian (50:22): Depends where the NFTs live. If the NFTs live on Ethereum, maybe. But if they live on NEAR, moving data between NEAR and Anoma is trivial. Anna (50:32): Okay. Yeah. This is a topic. We're at the end of the episode, we won't be able to cover it this time, but it is coming. It's good to highlight it now. Cool. Okay. So listen, Adrian, thank you so much for coming on the show and sharing all of this work and thinking around Anoma and the project.
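The lock-mint / burn-unlock flow Adrian outlines can be summarized as a single invariant: what is locked on the origin chain always equals what is minted on the destination chain. A minimal sketch under that assumption, with entirely illustrative names:

```python
# Toy lock-and-mint bridge: lock on Ethereum, mint a wrapped asset on Anoma;
# burn on Anoma, unlock on Ethereum. Names are illustrative only.

class ToyBridge:
    def __init__(self):
        self.locked_on_ethereum = 0  # units held by the bridge on the origin chain
        self.minted_on_anoma = 0     # wrapped units minted on the destination

    def deposit(self, amount: int):
        """Lock on Ethereum, mint the wrapped asset on Anoma."""
        self.locked_on_ethereum += amount
        self.minted_on_anoma += amount

    def withdraw(self, amount: int):
        """Burn on Anoma; the validator set then unlocks on Ethereum."""
        assert self.minted_on_anoma >= amount, "cannot burn more than was minted"
        self.minted_on_anoma -= amount
        self.locked_on_ethereum -= amount

bridge = ToyBridge()
bridge.deposit(3)
bridge.withdraw(1)
# Supply conservation: locked and minted always stay equal.
assert bridge.locked_on_ethereum == bridge.minted_on_anoma == 2
```

This is the same supply-preservation property the DOT light-client validity predicate mentioned earlier is there to verify, just stated as plain bookkeeping.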
Can people already get involved, learn about it, go somewhere? Where should they go? Adrian (50:55): Yeah, absolutely. So Anoma.network is the first place to stop. We have the research blog up there, so a lot of the underlying tech is explained in it, and we're releasing more and more explanations of how the stack works. There's the white paper as well, which explains Anoma in a very comprehensive way. I'm a big fan; it's not that long, it's like eight pages. Anna (51:16): It's a bit longer than that. I think it was 14 pages. I recently looked at it. Adrian (51:21): Yeah, maybe 14. Don't tell them! Get them hooked first... Yeah, so we're always looking for good engineers. If you're interested in privacy, bringing privacy to the masses, and bringing privacy to all assets, reach out to me. You can reach me at adrian@anoma.network. You can also find us on Twitter at Anoma Network. Yeah, public testnets are coming, so join the Discord. They're going to come in the next four weeks or so, so I'm looking forward to it. Anna (51:48): So thanks again, Adrian, for coming on the show. I also want to say thank you to the podcast producer, Andrey, the podcast editor Henrik, and to our listeners. Thanks for listening.