Anna: Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in zero-knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online.

Anna: This week, I chat with Adam Gagol and Matthew Niemerg from Aleph Zero, an L1 project that mixes zero-knowledge proofs and multi-party computation (MPC) with a DAG consensus algorithm. The aim of their project is to enable private smart contracts. But before we start in, I want to tell you a little bit about ZK Hack, a multi-round online event with workshops and puzzle-solving competitions. This is put together by this podcast and the Zero Knowledge Validator and is supported by a group of fantastic sponsors. It kicked off its weekly cadence yesterday, October 26th, and it continues every week until December 7th. Even if you've missed the first session, you can still join in. Think hackathon meets CTF meets round-based competition. There will be a leaderboard and prizes, as well as deep-dive learning sessions with the best teams in the space. So do head over to the website now and sign up to join. You can also sign up for each individual workshop directly from the website. I also want to thank this week's sponsor, DeversiFi. DeversiFi's mission is to make the opportunities of DeFi available to everyone, and their platform enables this with an impressive user experience and a simple-to-use interface. It's built with some of the cutting-edge StarkWare scaling tech you've heard about on this podcast. Right now on the platform, you can invest, trade, send tokens, and manage your portfolio, all without paying gas fees. As a Layer 2 rollup, users get the benefit of security, privacy, and control without sacrificing any of the cornerstones of profitable trading. They're also just about to launch liquidity mining with their native governance token, DVF. If you want to find out how to make the most out of DeFi, visit DeversiFi.com.
So thank you again, DeversiFi. Now, here is my episode on Aleph Zero.

Anna: Today I'm here with Adam Gagol and Matthew Niemerg, both co-founders of Aleph Zero. Welcome to the show, guys.

Matthew: Thanks, Anna. Pleasure to be here.

Anna: So this is actually the first time we meet, and I am not that familiar with Aleph Zero, this project that you've created. So why don't you first start there? What is Aleph Zero, and roughly when was this project started?

Matthew: Yeah, so we started in early 2018. I was introduced to Adam and several of our other co-founders, and we were really just trying to find a way of approaching a new project, a new Layer 1, that was a little bit less focused on the hype and the price appreciation and more about what the actual problems in this space are and how we can try to tackle them, in a way where we basically treat the foundational principles of mathematics and science as our core fundamentals.

Anna: What were you doing before you started this project? Matthew, why don't you start?

Matthew: So I was actually at IBM. I was working at one of the national labs in Oak Ridge, and I was doing high-performance computing on GPUs. This was actually a joint project with NVIDIA and IBM. We were delivering one of the largest supercomputers at the time for high-performance computing.

Anna: That's crazy. So you're coming more from hardware then, I guess.

Matthew: Well, not necessarily. I can discuss what happens with some of these national lab contracts: essentially, there's a rotation where, in a given year, I believe three different national labs will get a contract for delivering a new supercomputer system.
There are like nine different national labs in the US, or eight, so there's a three-three-two rotation or what have you. So what happened is that NVIDIA and IBM had a joint contract to deliver one of these supercomputers, and I was just a part of this team as a postdoc, but I was doing more on the software side, doing GPU computing.

Anna: I see, I see. I always kind of mess that up, because you hear GPU, you think hardware, but actually there's an entire software component about optimizing for the GPU.

Matthew: And there's an entire team behind all this. It's not just me, right?

Anna: Okay, sounds good. And what about you, Adam? What were you doing before you started this project?

Adam: So I was finishing a PhD in mathematics. It was in probability theory, but somewhat related to graph theory as well, which I guess we'll get into a bit later. And like almost every math PhD, I was a bit into machine learning, because that was a big thing, right? So I was playing around a bit with brain imaging and machine learning for neuroscience. But I was also interested in Bitcoin since, I guess, 2013, when I started mining and doing things around the Bitcoin network.

Anna: So who convinced who to jump into blockchain?

Adam: I think we jumped independently, and we just met on various Slack channels of various projects. I think Matthew jumped in a bit earlier than me; we met a few years after we were both in the space. Back then, Hashgraph was the main big thing, which interested both of us. In terms of consensus, it looked like it was going to be a big thing because of the high throughput and low latency. But there was this problem that they didn't really pursue an open way of building community. They patented their consensus. It was not open source.
So the initial idea we came up with was that it would be pretty nice to have this kind of high-throughput, very fast consensus, but fully open source. And not patented, of course.

Anna: So how did you meet then? I actually know a little bit about the story of that project, the very, very closed-source project, and you had this idea to potentially build it open source. But was it a research project to start, or was it immediately...?

Adam: It was a research project. I mean, the first few months we spent with just pen and paper, improving upon existing consensus protocols. Literally, I think it took us four to five months before writing the first line of code. So yeah, it was purely research initially.

Anna: And Matt, did the work you were doing at IBM lead to this as well? Or was this kind of a new problem?

Matthew: Not necessarily. What happened was, essentially, I'd been in this space since 2014 and had done a lot of different investing and everything, you know, mining old Bitcoin Core clones back in the day. And it just got to the point where I was not satisfied with what I was doing at IBM anymore, and the corporate lifestyle didn't necessarily appeal to me. So then it was just at the point where my wife and I were able to not rely on a normal nine-to-five job and go off and do our own thing within the blockchain space. So I left IBM at essentially the end of 2017 and then went off and started doing blockchain consulting, being in the space more actively and full time. And then, come several months later, I was introduced to Adam.

Anna: Cool.
Now I want to understand: from this point that you're describing, you've kind of come across a problem and you're starting to explore it. But I know that Aleph Zero actually employs a lot of zero-knowledge proofs, and there's a lot of the cryptography that our audience will be quite familiar with. So what was the journey from that to what you're doing today? At what point did you start to explore ZK and MPC-type stuff?

Adam: So there was a certain point in the history of our project, I think one and a half years ago, where we had become pretty convinced that we had already solved what was to be solved in consensus, and we felt pretty happy about it. We also became convinced that we were able to build a chain around this idea. So the question arose: what else could we bring to the table? Is it enough? Can we offer something more, and what are our strong sides? Well, we decided that we are a kind of academically driven team, so why not explore something else that looks hard and exciting? And at that point we decided that threshold cryptography is something that is kind of lacking in the field, and this is definitely needed.

Anna: So you started in on threshold cryptography. But this sort of makes me wonder: what is the goal of the project? What sort of problem space do you see yourself orienting towards or around?

Adam: So a very broad goal is to enable private smart contracts. But of course there are dozens of projects with the same goal, and it's not very well defined, so I can go a bit deeper into what it means to have a private smart contract. What people usually think about when they think about privacy is ZKP designs, so either based on SNARKs, STARKs, or some other versions of zero-knowledge proofs. In essence, these designs allow users to prove that whatever they compute locally is correct.
Without revealing the nature of the computation. So, for example, when it comes to privacy designs like Zcash or Tornado Cash, which is essentially the same but on Ethereum, they are using these ZK-SNARKs and they offer private transactions, which offer quite a lot of privacy. But there is a certain limitation to this kind of design. What makes DeFi, at least in my opinion, so exciting is that it's built a bit differently than traditional finance, in the sense that it's built around the idea of trustless autonomous agents, which are smart contracts. So in the most exciting DeFi products, you don't really trade with other users. You trade with this trustless agent: with a DEX, with some lending platform, or whatever. And this kind of design is, at least to the best of our knowledge, not really possible with ZK-SNARKs. With SNARKs, there needs to be someone who knows all the ins and outs to produce the proof. So, for example, in Zcash, if I want to send some tokens to you, the two of us need to meet and produce the proof, and then the chain will verify this proof. And so if we would like to construct, for example, a private Uniswap, which would have a private state, there would need to be someone to produce the proofs using this private state. So there would need to be some kind of a manager, someone who would actually know what's under this privacy hood. And that kind of defeats the purpose of private smart contracts. So that's when we started to think: can you actually make it stronger? Can you actually fix it? And the answer, obviously, as we're here, is yes, you can, at least to some extent. I think there are currently two main ways that people try to generalize this kind of design. The first one is hardware-based, and these are, for example, SGX chips, so-called enclaves: specific devices with a, let's say, trusted manufacturer.
And what they offer is that this manufacturer promises you that you will not have access to the memory of the chip. So you own a computer with a single chip, or several chips; it's going to compute some things, but you will not be able to peek inside the computation. So by using this kind of hardware and trusting it, you are able to have this trustless agent which will do the computation. This is hardware-based trusted agents.

Anna: In this case, the SNARK production would happen within a TEE, because then the input to that SNARK doesn't need to be accessible?

Adam: If you have these kinds of chips, I think you don't actually even need to produce SNARKs. With these chips, you can assume by default that they're doing the correct operations inside. So yeah, if you have trust in the manufacturer, then it potentially can solve the problem, although it of course comes with a whole variety of other problems. One of which is, of course, vendor lock-in: you actually need to trust this manufacturer, and you are locked in on either a single entity or just a few entities which must produce these kinds of chips. The other problem is that there are already attacks on the chips. I mean, there are groups shooting lasers into chips; it's not impossible to read something out of them. So there is another way, which is purely software-based. We are not dealing with hardware — I mean, yes, Matt has some experience with hardware, although rather theoretical — what we're building right now is going to be purely software-based, and it's going to revolve around another way of solving this problem, which is multi-party computation.

Matthew: So as Adam mentioned, the problem that you have with hardware manufacturers is that you get locked in, right?
If you're a company and you choose one particular solution, what happens if one of those manufacturers goes down? If you have GlobalFoundries, for instance, no longer in the process of manufacturing chips, then you may no longer have a solution for being able to achieve these kinds of privacy properties that you were really trying to get. And this is problematic from a supply-chain perspective: there are maybe four or five major global foundries currently, and if one of them goes down and that's your primary manufacturer, well, what happens then? The second problem, as Adam was saying, is the trust factor. Do you actually trust that things are proper, that there are no attack vectors or side-channel attacks, and what can be done there? And so this is why you look at multi-party computation for solving these problems within DeFi, to mitigate against, say, miner extractable value and so on. The idea here is that now you no longer have vendor lock-in; you have different types of security proofs, going back to the mathematics and the cryptographic proofs. It's just a different approach: a software-based solution as opposed to a hardware-based solution. And from our perspective, we also think that this is actually going to be a little bit more attractive, because we're giving people options not to get locked into a particular hardware-based solution.

Anna: You're using an MPC instead of this TEE, I guess? I feel like I always hear it more like TEE or ZKP. So what exactly is the MPC doing that a zero-knowledge proof couldn't? By the way, for our listeners who should probably know this: MPC — multi-party computation.
Maybe tell us a little bit more about how that is working?

Adam: So, the way we see it and how we're going to use it: MPC is essentially a carrier for global secrets. What you can do with MPC is have a secret which is not owned by a single entity. It can be owned by a smart contract. For example, the internal state of a Uniswap could be secret, and even though it's secret, MPC allows you to update it and to perform computations on it. Taking the simplest possible example: an AMM DEX with just two types of tokens and a constant-product model. The internal state — how many tokens are in the DEX — would be private. And then you could send in a private token, and how many of the tokens on the other side you would get would also be private. You would know it, and no one else would, because it would be computed under the hood of MPC. The ownership state of the smart contract would be updated as well, although that would happen in private.

Anna: Would an alternative also be using FHE for something like this? Had you looked at that as well, or does it just not have the properties that you would need to be able to do this?

Adam: You mean homomorphic encryption? Yeah, we looked into it, because it's one of the main paradigms in private computation. Of course, the problem with homomorphic encryption is that someone would need to have a key for this encryption. And yes, you can have threshold homomorphic encryption, but these kinds of designs are not really efficient and fast enough to cater for the scale we want, even leaving aside the other problems.

Anna: I kind of want to go back to the problem space that you described at the beginning: broadly, you want private smart contracts. So is the smart contract itself living entirely within an MPC?
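The constant-product arithmetic Adam sketches above can be written out in a few lines. This is a plain-Python illustration of the math only, not Aleph Zero's implementation; in the design described here, the reserves would live as secret shares inside the MPC rather than as plaintext variables.

```python
# Toy constant-product AMM: the "high-school mathematics" Adam refers to.
# In the MPC setting, reserve_in and reserve_out would be secret-shared,
# not plaintext; this sketch only shows the arithmetic involved.

def swap(reserve_in: float, reserve_out: float, amount_in: float) -> tuple[float, float, float]:
    """Swap amount_in tokens against a two-token constant-product pool.

    Returns (new_reserve_in, new_reserve_out, amount_out), preserving
    the invariant reserve_in * reserve_out == k.
    """
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in
    new_reserve_out = k / new_reserve_in
    amount_out = reserve_out - new_reserve_out
    return new_reserve_in, new_reserve_out, amount_out

# Depositing 100 tokens into a (1000, 2000) pool:
x, y, out = swap(1000.0, 2000.0, 100.0)
# The product x * y stays equal to the original k = 2,000,000.
```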
Maybe can you explain where the smart contract actually is?

Adam: So the code of the contract needs to be public. Currently there is no reasonable known way of cryptographically obfuscating code so that it still works — that's really, really futuristic stuff — to make smart contract code private. So the code would be public, but that's okay. Everyone would know that this is Uniswap or some other DeFi contract. What would be private are all the ins and outs and the internal state. Let's keep with the Uniswap example: you would have a private internal state, you could deposit some funds there privately — no one would know how much you have deposited into Uniswap as a supplier — and you could trade against it, and no one would know how much you put in and how much you got out.

Matthew: Maybe think of it this way. Whenever we talk about the private internal state, it is being stored via the threshold multi-party computation mechanism, where some portion of the state is held by each of the different parties. If they combined their portions in some type of aggregation scheme, they would be able to get the actual state, but they're never actually going to reveal it. What they're really doing is performing local computations on some type of private input, doing updates, and from there this sort of internal private state gets stored.

Anna: A few months ago, there was a paper that Tarun, my sometimes co-host on the show, put together around how one of the challenges of making a private Uniswap is that you need an oracle — there are other ways that information is leaked the minute you do trades. So does this actually solve for it, or is it just solving privacy at a certain point?
But at some point, there are ways to kind of gauge how big these trades are, maybe what volumes are happening.

Adam: A very valid point that you bring up. So it does solve it on the infrastructure level, although when you actually think about this DeFi solution, this private Uniswap, there are still ways to extract information out of it if you don't take any countermeasures. So, for example, the paper that you're referring to, I think, proposes a way of extracting information by transacting with the given contract. If you have this Uniswap, before you make a transaction you don't know what's inside, because the private state stays private. But what you can do is transact, like, ten times a second and check what rates you are getting, and based on the rates, you can try to guess the ownership status of the contract.

Anna: There are leaks of info.

Adam: There are some leaks, and there are ways to mitigate that. For example, you can randomize the rates a bit, so it's going to get harder and harder to actually guess. But yes, you're right: when you get out of the infrastructure layer and start to think about what you can actually construct, there are certain limitations to it.

Anna: But I still sort of understand you're using MPC for the inputs of this smart contract that would be doing the trade. Do you also use zero-knowledge proofs somewhere in your system?

Adam: Yes, we do, because the problem with MPC is that it's not terribly fast. To actually compute the computations you need in Uniswap, a few multiplications are enough — it's high-school-grade mathematics. What happens inside is a few multiplications, a few additions; these things are simple. But what you would also like to do is, for example, identify users. So you need to check signatures, for example, and that is very slow.
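The rate-probing leak Adam describes a moment ago can be made concrete. Assuming a plain constant-product pool that answers quote queries exactly (a hypothetical setup, not Aleph Zero's actual contract), two small quotes are enough to solve for the hidden reserves, which is why randomizing the rates is proposed as a countermeasure.

```python
# Recovering hidden constant-product reserves from two quoted rates.
# Hypothetical illustration of the leak described above: if the contract
# answers rate queries exactly, its "private" state can be solved for.

def quote(x: float, y: float, dx: float) -> float:
    """Output amount for input dx against hidden reserves (x, y)."""
    return y - (x * y) / (x + dx)  # equals y * dx / (x + dx)

def infer_reserves(dx1: float, out1: float, dx2: float, out2: float) -> tuple[float, float]:
    """Solve out_i = y * dx_i / (x + dx_i) for the hidden reserves (x, y)."""
    x = dx1 * dx2 * (out1 - out2) / (out2 * dx1 - out1 * dx2)
    y = out1 * (x + dx1) / dx1
    return x, y

# An observer never sees (x, y) = (1000, 2000) directly...
o1 = quote(1000.0, 2000.0, 10.0)
o2 = quote(1000.0, 2000.0, 20.0)
# ...but can reconstruct them from two quoted rates.
x_rec, y_rec = infer_reserves(10.0, o1, 20.0, o2)
```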
I mean, to check a single signature under the hood of MPC would take probably something like ten seconds or even more. So we definitely don't want to do that. For that reason, the system we're building is going to use SNARKs as well, for user identification and for dealing with basically everything that does not touch this idea of global private state.

Anna: Tell me then, when you're using ZKPs for identity or for signing, what does that mean?

Adam: So we did not actually invent this design. This part is very similar to what Zcash is doing. Of course, their system now is super, super complex, but the basic underlying idea is to allow users to update their internal state and then validate those updates with SNARKs against the blockchain. It's going to work in a very similar way: every user will have their own internal state, which will, for example, record how many tokens of each kind the user has. And whenever a user would like to transact with either another user or a smart contract, there will be an update of this internal state, which will be validated with a SNARK against the blockchain, without revealing what actually happened.

Anna: Okay, this is the state of the user, but is this so much an ID? You sort of suggested it as an identifier.

Adam: Formally, no — you're correct. You don't identify yourself; what you actually do is update your own internal state and prove that you have the right to do it and that everything went okay. So formally, you're right, it's not identification.

Anna: Okay, but it's closer to that part of the stack, I guess — it's more like close to the address or something.

Matthew: Broadly, you can think of what we're really doing with this scalable privacy framework as identifying everything that can be done using ZKPs, as much as possible, because that's going to be much, much faster.
And then, as a final fallback, move to MPC, because obviously this is a little bit more powerful as to the type of operations it can perform, but that power comes at a price, meaning that it's inherently slower.

Anna: That's funny, because I always think of ZKPs as potentially being slow. I like that you're saying they're fast — I guess MPC is slower. What kind of zero-knowledge proofs are you actually using? That might also explain it, because there are simpler circuits that I think can be done quite quickly, and then more complicated ones where the proof generation could actually take quite a long time. So yeah, what kind of zero-knowledge proof are you using?

Adam: So we are using zk-SNARKs. So far we've been looking mostly into the ZoKrates library, although we're still very much in the discovery process when it comes to that.

Anna: I see. So you're using SNARKs — are you just using, like, Groth16, that kind of standard SNARK? Because there are some more advanced proving systems that have been developed maybe since then. I don't know.

Adam: So none of our innovative parts actually refer to the SNARK. We are using a very basic design so far, and as I said, the part that relates to SNARKs is basically not our invention — we are kind of repeating a Zcash-style design. So we don't actually need many fireworks here. What's innovative in the solution is that we are amending it with MPC to construct this thing, which is, to the best of our knowledge, not really possible to construct with SNARKs only.

Anna: Understood. So it's the combination, the way that you're putting them together — the architecture of this is what's sort of unique.

Matthew: Yeah, and also, as far as we know, nobody else is actually approaching this type of hybrid paradigm.

Anna: Yeah.
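The "portion of the state held by different parties" that Matthew described earlier can be sketched with the simplest possible scheme: additive secret sharing over a prime field. This is an illustrative toy, not Aleph Zero's protocol — additive sharing is n-of-n rather than a true threshold scheme like Shamir's — but it shows the key MPC property the hosts discuss: parties can update a shared secret (here, add two of them) without anyone reconstructing it.

```python
import secrets

# Toy additive secret sharing mod a prime: the simplest way to hold a value
# "in portions" across parties. Illustrative only; a real threshold scheme
# (e.g. Shamir's) would tolerate missing parties.
P = 2**61 - 1  # a Mersenne prime used here as the field modulus

def share(secret: int, n: int) -> list[int]:
    """Split secret into n random shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Linearity: parties can add two shared secrets locally, share by share,
# producing shares of the sum without anyone seeing either input.
a_shares = share(1234, 5)
b_shares = share(5678, 5)
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
```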
Actually, I'm not sure I've heard of ZK-MPC other than MPC for the trusted setup of the zero-knowledge proof. That's the combination that you often hear.

Adam: To be honest, I wouldn't say that we are so much SNARK experts as threshold cryptography experts. So perhaps there are some benefits still to be gained from other SNARKs which we just don't see yet.

Matthew: I was just going to go back to the other part with the zero-knowledge proofs, since Adam was saying that we're doing something similar to Zerocash. What you have with Zerocash, and sort of with Tornado Cash, are these cryptographic accumulators. And whenever you combine these with a zero-knowledge proof, or in this case a zk-SNARK, this is where you're able to get this type of anonymity, where essentially users are updating their own local, private state, performing some type of proof where they can show that this is a valid way of updating their state, and then publishing that to the blockchain.

Anna: Got it. So you had mentioned earlier in this episode that consensus was solved, but we haven't actually talked about how you solved it, or even what kind of system this is. Is it proof of stake? Maybe we can go back to that part. I'm very curious to hear: were you able to basically recreate Hashgraph, open source, or did you change direction completely?

Matthew: So it's not a complete redesign of Hashgraph. In the distributed systems world, there was an observation by Leslie Lamport — which actually predates the Byzantine Generals Problem paper he wrote with Shostak and Pease — that in any distributed system, a natural way of describing the messages being sent across the system is using a partially ordered structure.
And it turns out that these partially ordered structures are in some way mathematically equivalent to directed acyclic graphs, under some type of appropriate reduction argument — they're actually categorically equivalent. So this is high-level category theory that's going on here, but what it tells us is this: in category theory you have objects and you have functions on those objects, and if you have two categories that are equivalent — basically in bijection — then the objects and functions on one side map over to objects and functions on the other side. The idea is that proofs you get in one area can actually be applied in the other area. In the case of directed acyclic graphs and partially ordered sets: in a directed acyclic graph you have a topological sort, which is a way of taking all the vertices and producing some total order on them, and the equivalent notion in a partially ordered set is a linear extension — taking that partially ordered set, adding in some additional order relations, and making it a linear, or total, order. So, going back to what Lamport observed: if you just automatically save and store this message history, you get a partially ordered set, but what you're really trying to get at the end of the day, with a Byzantine agreement protocol, is a total order on all the messages. So the protocol that Adam and Michał and both Damians were able to work on —
They basically were able to prove how to construct a linear order in such a way that all the honest participants of the protocol will choose the same one at the end of the day, without actually sending a lot of extra information across the system. And what this allows you to do is make this quite efficient. But Adam can dive deeper into those details.

Anna: Actually, can I ask something real quick? When you talk about this order, is this the order of the blockchain — the order of the blocks in the blockchain that you're talking about?

Adam: The order of transactions.

Anna: And the word you've used also is directed acyclic graph. Is that correct? Yes. Okay, sometimes shortened to DAG. And I think back in 2017 there were a few projects going around talking about DAGs a lot, but maybe they weren't entirely sure what they meant by that. Is this actually a different kind of consensus, or is this a different kind of structure?

Adam: I would say it's a different kind of language for talking about very similar things. Most of these projects — not all of them — are using the DAG as a kind of formalism. So how do you construct the DAG? In any consensus of the proof-of-stake family, you have some agents which are sending messages among themselves.

Anna: Nodes, I guess.

Adam: Nodes, yes. So how do you construct a DAG out of it? Let's say that every event of creating a message is one vertex in the DAG, and it sits above the vertices which were created before it and which the creator of the message already knew about.
So basically what this DAG construction is trying to capture is the knowledge structure of who knew about what, and when.

Anna: When you say this, it just makes me think of longest chain, where you're looking for the longest, most agreed-upon thing — the thing that everyone knows is correct?

Adam: Yes, although we don't deal with trees and we are not choosing chains anymore. Things are more complex, but the intuition is very much correct.

Anna: Yeah, I'm clearly so used to the Tendermint-style thing there.

Matthew: Well, one way to think about it is that as a vertex is added to the DAG, it's choosing its parents, and, as an implied result, it's also choosing all of its ancestors. And by choosing the parents, and therefore all the ancestors, that particular vertex, as issued by a node, is essentially approving all of those previous vertices.

Anna: And that sort of locks them in to that particular —

Adam: It's a statement: "I knew about these messages at the time of creating my message."

Matthew: And "I, locally, as a node, am agreeing that these messages are valid." And so then what you need is essentially a threshold of more than two thirds of all the nodes agreeing that some vertex in the past is seen and validated.

Anna: And when that happens, all its ancestors are consequently validated.

Matthew: Right. But the issue here — not necessarily an issue, and this is where the Byzantine agreement protocol comes into play to choose the final total order — is that having reached that threshold of more than two thirds of the nodes approving a vertex as valid is not enough. All this tells you is that we have agreement on the validation of a vertex. It doesn't tell you what the actual ordering of those vertices is.
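The "choosing parents implies approving all ancestors" rule Matthew describes has a direct computational reading: the set a vertex endorses is simply everything reachable from it via parent links. A minimal sketch of that idea, using a hypothetical toy structure rather than the actual Aleph data structures:

```python
# Each vertex records its parents; endorsing a vertex endorses everything
# reachable from it. Toy DAG sketch, not the actual Aleph implementation.

def ancestors(dag: dict[str, list[str]], v: str) -> set[str]:
    """All vertices reachable from v via parent links (excluding v itself)."""
    seen: set[str] = set()
    stack = list(dag[v])
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(dag[u])
    return seen

# A genesis vertex g, two children a and b, and a vertex c choosing both
# a and b as parents. By choosing them, c implicitly approves g as well.
dag = {"g": [], "a": ["g"], "b": ["g"], "c": ["a", "b"]}
```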
And any Byzantine agreement protocol is actually equivalent to an atomic broadcast protocol. And by definition of an atomic broadcast, you actually have to have a total order on all the messages. So what you have now is this mathematical structure of a partially ordered set — a partial order — and what you need to be able to do is choose, in the same way everywhere, how to turn this into a total order. And the way that this is done is in a series of steps where you validate transactions — or vertices, in this case — that everybody is approving. Once you've reached the threshold that they're approved, they all get put into a bucket; if they have yet to be totally ordered, they're there in a partial order. And as the protocol moves forward, eventually there's a portion of it where you now have a choice as to what the ordering of those unordered transactions is. And so then you get to go ahead and say, okay, well, transaction A actually happens before transaction B, which happens before transaction C. And so then you end up with a totally ordered set of transactions. So the goal of the actual Byzantine agreement protocol is to have every single node choose that same order, but without doing it in a way where you're sending and submitting transaction orders to some central leader, and then that central leader proposes a block, and then everybody else approves this block with some type of signing message — sort of like a classical PBFT model. We're not doing it that way because that actually creates some extra bottlenecks in communication, and in how these protocols operate. Adam: Actually, that's the coolest part about the DAG: you don't actually need to think about consensus when sending messages. And it's not true only for us —
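The step from partial to total order can be sketched as a topological sort with a deterministic tie-break: every node applying the same rule to the same DAG computes the same sequence. This is a toy stand-in for the agreed ordering rule (here the tie-break is simply "smallest id first"), not Aleph Zero's actual algorithm.

```rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

// `parents` maps each vertex to the vertices that must be ordered before it.
// Returns one total order extending that partial order, deterministically.
fn total_order(parents: &HashMap<u32, Vec<u32>>) -> Vec<u32> {
    // How many parents of each vertex are still unordered.
    let mut remaining: HashMap<u32, usize> =
        parents.iter().map(|(&v, ps)| (v, ps.len())).collect();
    // Reverse edges: parent -> children waiting on it.
    let mut children: HashMap<u32, Vec<u32>> = HashMap::new();
    for (&v, ps) in parents {
        for &p in ps {
            children.entry(p).or_default().push(v);
        }
    }
    // Min-heap on vertex id: the deterministic tie-break every node shares.
    let mut ready: BinaryHeap<Reverse<u32>> = remaining
        .iter()
        .filter(|&(_, &d)| d == 0)
        .map(|(&v, _)| Reverse(v))
        .collect();
    let mut order = Vec::new();
    while let Some(Reverse(v)) = ready.pop() {
        order.push(v);
        if let Some(cs) = children.get(&v) {
            for &c in cs {
                let d = remaining.get_mut(&c).unwrap();
                *d -= 1;
                if *d == 0 {
                    ready.push(Reverse(c));
                }
            }
        }
    }
    order
}
```

The point is exactly the one made above: no leader proposes the order, yet every honest node, reading the same DAG, emits the same totally ordered sequence.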
I think for most of the DAG protocols, basically nodes construct the DAG and the consensus happens afterwards, offline. You just look at the DAG and apply some pre-specified rules, and then you see what the nodes should agree upon, because in the DAG you have captured all of this communication structure already. So you can simulate the consensus inside of the DAG, which is kind of crazy and a bit of a paradigm shift. Anna: Interesting. There's a word you've used throughout this whole thing, and I didn't want to interrupt you, but you're saying "vertex", and I actually don't know what that is. I followed all the logic, but I don't know what that thing you're actually doing at the start is. Matthew: Sure — just a note on graphs. Whenever you have a point on the graph that has an arrow or an edge, that point is called a vertex. Anna: I see. Okay. Matthew: So you have vertices or a vertex — vertices is the plural. Vertices are then connected, or not connected, by edges, or in the case of a directed acyclic graph they're called arcs. And so then you would connect vertex A with vertex B via some type of arc, which is a directed edge. And that's just the terminology that's used in graph theory. Anna: Each of those is not a transaction, is it? Matthew: It doesn't have to be, so what we... Anna: They could be transactions. Matthew: They could be, but this is not optimal. Adam: You would probably want to have them as transaction containers. I mean, you would usually want to put more than one transaction into a vertex, to have a reasonably high throughput. Anna: Are they almost like the blocks then, but like floating blocks? Matthew: Yes, they're almost like the blocks. And then whenever they become totally ordered, you have a totally ordered set of vertices, where the blocks are now the vertices, or the transaction containers.
And then once they're totally ordered, it looks and kind of feels like a blockchain, because that's the end goal you're trying to get to: a totally ordered set of transactions. So that whenever you're doing some type of database and state transition updates, everybody is executing in the same way. Right? You don't want to have some type of potential parallelism problem where you might end up with a different state, or floating point errors, or what have you, just because you did transactions and state transitions in different orders — even though theoretically this is perfectly fine, there are just some things that happen at a software level that can sometimes be problematic. Anna: In the vertex itself, the transaction order — you said it's a container — is there any way that that's defined, or is it sort of random? Adam: It's not defined by the consensus protocol, not on this level. You could do it, let's say, alphabetically or any other way. I mean, it needs to be deterministic, so you need some rule for defining it, but the rule itself does not matter. So actually the producer can order them however they like. Anna: Wouldn't that create some sort of — I mean, the hot topic in blockchain right now — MEV issues? Because then you have some node who could do it? Matthew: This is problematic, but there are ways of... Anna: I'm being annoying, bringing it up again every episode now. Matthew: There are ways of mitigating this to an extent. If you think about it, each block producer — or somebody who's producing a transaction container or vertex — has local information. They don't necessarily have all the other outside information of other block producers.
So it could be the case — and we're looking at how this could be done, as to how to mitigate it — where you could possibly use some extra outside information that the other block producers aren't immediately aware of, because there's maybe latency on communication, or some other type of information that may be added in, like through some randomness beacon at a later point in time, which could then permute those transactions within that transaction container. Right? And so this is not necessarily something that we're immediately looking at, but it is something that may be possible to do. Anna: Interesting. Adam: Although I'd like to mention that it's a bit better than in normal, classical designs like Ethereum 1.0, because here, at the time of producing this vertex, the producer does not know the transactions which will go before it. Anna: It knows all of them within the vertex, but not within the other vertices, I guess. Okay. Adam: The producer does not know where exactly the vertex will fit in. So of course there are some tricks a producer can do, like a sandwich kind of trading attack. Although, like the things we talked about before, threshold cryptography is the technique which could help to solve some of these problems. Anna: Threshold decryption is the technique that, like, Osmosis — we talked about that in that episode — they're proposing to mitigate MEV sandwich attacks specifically. Adam: I think using threshold cryptography and the idea of submarine sends, you could generally mitigate most of, or even all of, MEV kinds of attacks — on DeFi, at least. But submarine sends — it's not our idea; I think there is a project around submarine sends. The idea is that you submit the transactions, and at the time of ordering they are encrypted, and they are decrypted only sometime after they are ordered.
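The beacon idea Matthew floats can be sketched like this: once a randomness-beacon value is revealed, every node applies the same seeded shuffle to the transactions inside a container, so the producer's chosen order is permuted after the fact. This is speculative and illustrative only — the LCG below is a stand-in for a real verifiable beacon output, and none of this is Aleph Zero's implementation.

```rust
// Deterministically permute a container's transactions using a shared
// beacon value. Every node running this with the same `beacon` gets the
// same permutation, denying the producer control over the final order.
fn permute_with_beacon(txs: &mut Vec<String>, beacon: u64) {
    let mut state = beacon;
    // Simple linear congruential generator (Knuth's MMIX constants) —
    // illustrative only; a real design would derive randomness from a
    // verifiable beacon, not an LCG.
    let mut next = move || -> u64 {
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        state
    };
    // Fisher-Yates shuffle driven by the shared beacon value.
    for i in (1..txs.len()).rev() {
        let j = (next() % (i as u64 + 1)) as usize;
        txs.swap(i, j);
    }
}
```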
So even the block producer does not know what the transaction content is, so there is no way of optimizing the ordering based on what actually happens in DeFi. Without threshold cryptography, you can kind of do it in the way that the user sends a hash of a transaction and then sends the transaction itself, for example, half a minute later. Although this kind of design is problematic, because then you don't really have normal markets, you have option markets — users can just not reveal the real transaction if the market changes in the direction they please. So what needs to be done here is that the users cannot just send transaction hashes. They need to have the transactions encrypted with a key which is split among some committee. So the committee then reveals all of the transactions 30 seconds later, and then you can kind of be sure that these will be normal markets, not the option ones. Anna: That sounds just like the Osmosis one, right? I feel like I've seen the graphics in their presentations and stuff. Cool. Adam: Threshold cryptography is cool. Anna: I'm just learning about it. Are there some references that you would actually recommend folks read or get into if they want to dig a little bit deeper into this? Matthew: So, as I mentioned earlier, there's a really good paper by Lamport called "Time, Clocks, and the Ordering of Events in a Distributed System" that is sort of a precursor, to get an understanding of why we use a DAG structure for, essentially, the message history of the network, and then from there move into a totally ordered set to get the transactions. The Lamport paper is pretty accessible, I'd say, and is good historical context to know. Anna: This is built as an L1. Is it built using any sort of blockchain framework? Are you in another ecosystem? Are you connected? Is it like an EVM copy?
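The commit-reveal scheme Adam describes — and its "option market" flaw — can be shown in a few lines. Everything here is a hypothetical sketch: `DefaultHasher` stands in for a cryptographic hash, and the mempool API is invented for illustration.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Commitment = hash of the transaction bytes. DefaultHasher is a
// stand-in for a real cryptographic hash function.
fn commit(tx: &str) -> u64 {
    let mut h = DefaultHasher::new();
    tx.hash(&mut h);
    h.finish()
}

// The producer orders opaque commitments, never the plaintext, so it
// cannot front-run or sandwich based on transaction contents.
struct Mempool {
    committed: Vec<u64>,
}

impl Mempool {
    fn new() -> Self {
        Mempool { committed: Vec::new() }
    }
    fn submit_commitment(&mut self, c: u64) {
        self.committed.push(c);
    }
    // A reveal is valid only if it matches the commitment already fixed
    // at that position in the order. The flaw Adam points out: nothing
    // forces the user to reveal at all, which turns every committed trade
    // into a free option. Splitting a decryption key among a committee
    // (threshold decryption) removes that choice: the committee decrypts
    // everything after ordering, whether the user likes the price or not.
    fn reveal(&self, position: usize, tx: &str) -> bool {
        self.committed.get(position) == Some(&commit(tx))
    }
}
```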
I'm just kind of curious about what the actual execution part of this looks like. Adam: That's a very good question. So initially we started to code everything from scratch using Go, and we went pretty far. I mean, we were already at the stage where we had some pretty good benchmarks. And then we learned about Substrate, and we did a big pivot where basically we started to code everything from scratch again — which went way, way faster this time, because we had already made all the decisions that we struggled with before, and also because Substrate solves a lot of problems that we do not want to solve at home. I mean, in the end, there are some domains where we want to innovate, and there are really a lot of blockchain-related domains where we do not necessarily want to innovate right now — we just want to take a working, well-audited solution. So we've found Substrate really, really helpful. Anna: Interesting, but you would have had to switch languages in that case. Are you now more of a Rust-based project? Was that hard to do? Because Go and Rust are quite different — did you have to hire new people? Adam: No, but we had to learn quite a lot. Rust has a pretty steep learning curve indeed, although it's a really, really nice language. Of course it's hard to learn, but everyone likes it after they learn it — people never dislike it once they get to the point where they've learned it. So it counts as a very, very likable language. So there was definitely a slowdown related to the learning curve. Fortunately, we got a grant from the Web3 Foundation for coding a randomness beacon, which was in Rust. So this was kind of our way into Rust and also into the ecosystem, and from then on it went pretty smoothly. Anna: So is this a UTXO or account-based system?
Adam: So our system is account-based, although in Substrate you are free to choose. Substrate is pretty modular and there are pallets for most things that you can imagine — so of course, there is a pallet for UTXO systems as well. Anna: And you did accounts — was it because you wanted the smart contract enablement, because you actually want it to be a platform for smart contracts? Adam: Yes, this was one of the main reasons. Also, it's slightly more optimal when it comes to storage size. Those are basically the two main reasons for us. Matthew: Yeah. I think that thinking of smart contracts as stateful objects, and then treating the identity of a smart contract as an account — this is an easier choice for developers to understand than something like the extended UTXO model, which is still doable obviously, but it does take a little bit more time to wrap your head around how to do it properly and how to think about it. So part of it is also just ease of onboarding developers and people interested in deploying applications — as opposed to "we can do either one", it's about what's going to be easier for people in the end. Anna: What kind of language would someone need to write to actually do a smart contract on your system? Are you trying to make it EVM-compatible, or are you doing a different language? Adam: Currently we have no plans to make it EVM-compatible. We'll be using WebAssembly, and the preferred language in Substrate is ink!, which is Rust but kind of simplified. So it's like a counterpart of Solidity for WebAssembly. And the reason for which we are currently planning to use mostly WebAssembly is speed. The EVM is significantly slower on benchmarks than WebAssembly, and most probably that's not going to change anytime soon. Anna: Are you actually on Polkadot?
Are you planning on eventually being a parachain? Because I know Substrate is separate — Substrate is the framework and it doesn't actually put you in that ecosystem. So I'm just curious. Matthew: The way to think about this is that, by definition, in order to be a parachain on Polkadot, you actually have to be using GRANDPA as the finality gadget. And because we're using Aleph as the finality gadget, in the end we actually have to have a separate Layer 1. But that doesn't mean that you can't do a kind of identity bridge, where within that identity bridge you're acting as a parachain using GRANDPA for the finality gadget, but still having your other Layer 1 — the Aleph Layer 1 — operating and running things independently from Polkadot. Anna: Do you also have any sort of strategy to bridge to other chains, like everyone else — like to Ethereum? Is this something that you're considering? Adam: Of course. Creating the platform with DeFi-centric privacy features without connecting to other chains wouldn't make much sense. So what we are actually aiming at is eventually to have a bridge to, for example, Ethereum, which would allow Ethereum smart contracts to use our privacy features. That would work this way: we would host this global private state, and that could work for an Ethereum smart contract as well, in the way that the smart contract would send a message to the bridge, the bridge would then read and compute something on the private state, and then that would go back to Ethereum. So definitely we want to be connected. Anna: I am a little curious what kind of customers — like, what is the space that you want to inhabit? And the reason I ask is that on the website there seems to be an enterprise angle, and I'm wondering if that is actually your strategy.
Do you see this more as an enterprise chain, more for bigger corporations? Or do you want this to be more of — I don't know what you call it — the opposite of enterprise? Matthew: We definitely are interested in having corporates or enterprises deploying applications on the Aleph platform. I think what we're seeing here is a shift away from private consortium-based chains, where it's just a permissioned blockchain — back in 2017 and 2018, the narrative was a permissioned consortium chain with multiple companies. So what you have — at least from our discussions with other corporates, and with people who are introducing us to potential enterprise clients — the idea here is more that if they're doing a permissioned chain, they're sort of cut off from the rest of this public ecosystem. And now the question becomes: what are the advantages? Of course there are advantages for a permissioned chain, but there are also a lot of other advantages to deploying on a public platform. So if you think about the primary reasons that corporates aren't necessarily adopting blockchain technology, one is, at a high level, just the lack of proper security models and formalism for making sure that this is a safe and secure platform for them to be engaging with — and this is addressed by us, in that we went ahead and did the peer-reviewed process for the consensus and did our code audit with Trail of Bits and so on and so forth. And then the other aspect is the privacy angle, meaning that there is information that corporations don't want to disclose. And how do you actually solve this as a problem?
And that's by actually going ahead and creating a privacy framework to allow them to deploy applications that can use these different privacy features natively on the Aleph Zero platform, without restricting themselves to only being on a consortium chain. And this gives them a wider opportunity to interact with the larger public ecosystem. Anna: So from what I gathered, you are kind of pitching it to enterprise, but it's not one of these permissioned chains. You're not pitching it as "we're going to run you a blockchain" — that kind of thing has proven to not really work out the way people had originally assumed. Matthew: Correct. I mean, there are still ways of doing some type of permissioned chain where you want to be able to do, say, a zero-knowledge proof regarding the state update of a permissioned database. The problem here is that at some point you have certain limitations regarding the trust factor of the state over there on the permissioned chain — if it's private, the public doesn't see it — and you may not be able to get all the same types of interactions that you might want in a completely public ecosystem. Anna: So then going back to my initial question: what is the goal? What kind of audience, what kind of users are you aiming for? Adam: So we don't want to be seen as a purely enterprise-focused chain. Currently we see DeFi as a very convenient playground, so to speak, because things are way faster to deploy in DeFi. I mean, if you have a nice idea and you deploy it in DeFi, next week you can have literally hundreds of thousands of users.
So we are mostly focusing on building the privacy framework, and we definitely see DeFi-related things as the most achievable showcase scenarios for that, because it's going to be much faster to deploy than anything enterprise-related. Anna: Got it. And you're working in an open source way — am I correct? Adam: Of course. Anna: Okay, very good. It's usually a question I should've asked earlier, but I was kind of assuming, when you said that you were unhappy with a project maybe keeping something closed source, that you have the open source ethos — the idea here is to use and share. It sounds like it, if you're using Substrate frameworks or using existing ZK libraries. At what stage is the project? Is it fully out on mainnet? Is it still testnet? Where are you guys at? Adam: So we have a functioning testnet, and we are just about to release the mainnet within the next few weeks. So big events are coming, although for now it's going to be a pretty minimalistic mainnet, not having the whole privacy framework — which, by the way, is called Liminal. With Liminal we are still very much in the research and pretty early phase; that's going to be coming later. When it comes to threshold cryptography, we have our own protocol for threshold ECDSA, which is in a way a subcase of MPC, but from an academic perspective it's actually harder to do threshold ECDSA than general MPC. So we have a peer-reviewed paper about it, and we have an implementation of it in Go. We are working on slightly generalizing ... and, of course, rewriting it in Rust. Anna: Yeah, actually that is a question. So are there still parts of your system written in Go that you still need to bring over, or is that the last piece and then you're a completely Rust-based project?
Adam: We have things which are in Go and which we do not have in Rust just yet, although we do not use them. So no, there are no parts of our system which are in Go, apart from this one thing — our threshold component — which is still to be recoded in Rust. Anna: Okay. And then what about going forward? What other kind of research or implementation — you sort of said it's a very minimal mainnet — what should be coming down the pipeline? Matthew: So how we approached this is with a sort of tiered process when it comes to the different types of devnets and testnets that we have. We'll have three different concurrent networks. One is the devnet, which is basically the one that we try to break as much as possible — we're updating it constantly. Then separately you have the testnet, which is essentially the previous version of the devnet, where we fix and freeze the features, and we want to have that operate for a good portion of time — say one to two months, depending on how complex the new features were that we added. And then finally we have the mainnet. And so the upgrade process is to keep pushing things to devnet, freeze the features on testnet, and at some point, once testnet is proper and we have, say, a two-month period, go ahead and do an upgrade to mainnet. So currently our testnet has been operating for three weeks without any changes. So it's fairly stable; we don't have any concerns about it. It'll be essentially the same binary that we use for the mainnet, and we'll just use this as an upgrade process for releasing new features into the mainnet as we continue. Adam: Generally, we see Liminal as a very, very long road.
I mean, yes, we do have some threshold pieces already, and we have some ideas for how to integrate that with SNARKs, although the road to having a convenient library for smart contract developers to use privacy is still pretty long. I mean, definitely we'll be able to write our own private solutions faster, because we know the system, but to enable it for a broader audience in a convenient, confident way — as I said, as a framework, as a library — that's a very long road, and a lot of architecture and this kind of research will have to go into that. Anna: Cool. Well, listen, I want to say a big thank you to both of you for coming on the show and exploring this project, which I did not know very much about when I started the interview. Thanks so much for diving into it. Adam: Thanks for having us. Matthew: Truly a pleasure. Thanks, Anna. Anna: I want to say a big thank you to our podcast producer, Tanya, podcast editor, Henrik, and to our listeners. Thanks for listening.