00:05: Anna Rose: Welcome to Zero Knowledge. I'm your host, Anna Rose. In this podcast, we will be exploring the latest in zero knowledge research and the decentralized web, as well as new paradigms that promise to change the way we interact and transact online. 00:27: This week, Tarun and I chat with Niraj and Anish from Ritual. In this episode, we revisit the AI crypto intersection. We then explore the Ritual product, which is modeling itself as an AI coprocessor. They are building tools for ML-enabled dApp developers and are offering ways to integrate AI models into smart contracts. Now before we kick off, I just want to highlight the ZK Jobs Board for you. There you're going to find a lot of new jobs from top teams working in ZK. There have been a lot of postings lately. So if you're looking for a new opportunity to get into the space, to work in ZK, be sure to check it out. And if you're a team looking to find great talent, be sure to add your jobs to the Jobs Board as well. So I've added the link in the show notes. Now, Tanya will share a little bit about this week's sponsor. 01:15: Tanya: Aleo is a new Layer 1 blockchain that achieves the programmability of Ethereum, the privacy of Zcash and the scalability of a rollup. Driven by a mission for a truly secure internet, Aleo has interwoven zero-knowledge proofs into every facet of their stack, resulting in a vertically integrated Layer 1 blockchain that's unparalleled in its approach. Aleo is ZK by design. Dive into their programming language, Leo, and see what permissionless development looks like, offering boundless opportunities for developers and innovators to build ZK apps. This is an invitation to be part of a transformational ZK journey. Dive deeper and discover more about Aleo at aleo.org. And now here's our episode. 02:00: Anna Rose: Today, Tarun and I are here with Niraj, co-founder of Ritual, and Anish, an engineer on the founding team. Welcome to both of you. 02:08: Niraj Pant: Thanks for having us. 02:09: Anish Agnihotri: Glad to be here. 02:10: Anna Rose: Hey, Tarun. 02:11: Tarun Chitra: Hey, excited to be back and talking to a team that I'm both an investor in and an advisor to. 02:17: Anna Rose: Nice. For today's episode, we're going to be diving back into the topic of crypto and AI, something I don't think we've covered on the show for quite a while. In early 2023, we started to sort of think about it, especially in the case of ZK and AI, but yeah, I'm really excited to get a chance to jump back in. And I'm also excited to have you on the show, Niraj. We have a nice backstory. I mean, I met you a long time ago, back in 2018, 2017 maybe. You were the person who introduced me to Tarun. And I was looking this up today just to see that post, but you were just like, I know this guy, he's got this company, super smart, you should meet him. Something like that. And I'm really glad that you had made that introduction. So yeah, and it's very cool to have you on the show. 03:09: Niraj Pant: Yeah, I'm excited, I guess, to see how far it's come, and I'm a big fan of the show and I listen to it quite often and reference it quite a lot. So I'm glad to have made the connection and see how it's come. 03:22: Anna Rose: Nice. Also, we were talking about this. You and I, we did do an interview long ago, one of the earliest interviews, but it doesn't exist anymore. So it's the long lost ZK podcast episode, episode 18. 03:37: Niraj Pant: It's like an Easter egg. 03:38: Anna Rose: It's an Easter egg, yes.
03:39: Niraj Pant: I don't remember a ton of what we talked about, but I remember we talked about Non-Interactive Proofs of Proof-of-Work. That was like the big research thing. 03:50: Tarun Chitra: Oh, NIPoPoWs, the Andrew Miller thing. Yeah, yeah. I remember. That was like, that's such a different era when people cared about that. It's so divorced from right now. Like those signs at amusement parks: you must be this tall to ride this ride. You must be like 80% cypherpunk or more to actually care about that type of stuff in 2024. 04:14: Anna Rose: That was also the era... I don't know if we talked about it on the show, but that was like the era of Truebit, which came back... 04:19: Tarun Chitra: Oh, yeah. I keep seeing all the Truebit people everywhere. So I feel like there's something... 04:24: Anna Rose: Oh really. 04:24: Tarun Chitra: Something's up. Yeah, I was at an academic conference last week, Financial Cryptography, and I saw a bunch of Truebit people. 04:32: Anna Rose: Is that still around? Because I feel like that project was so ahead of its time. It really was, like it had the makings of things that have since come out, but maybe it was like missing something or it was too early. I don't know. 04:48: Tarun Chitra: Too early. 04:49: Anna Rose: What was it? Wasn't it kind of like a... Wasn't it fraud proofs? Wasn't it optimistic rollups in a way? 04:54: Tarun Chitra: Yeah, it was basically off-chain compute with some form of fraud proof. 04:58: Anna Rose: Okay. Amazing. All right, Niraj, you introduced me to Tarun, and I've always sort of wondered, how did you guys actually meet? 05:05: Niraj Pant: Maybe Tarun should answer it. 05:07: Tarun Chitra: Yeah, I guess we met through this other investor in crypto, who a lot of people know, but has one of the most minimal public presences of anyone of that stature. And then over time, Niraj invested in my company. So, yeah, it was kind of the early beginnings of the post-ICO era of crypto in 2018. 05:36: Niraj Pant: Yeah. I remember looking at... When Tarun was fundraising, I remember looking at his deck and I was like, this is one of the most interesting decks I had seen for DeFi. He was talking about some really unique, interesting concepts that I think they're kind of working on today. Things around insurance and monitoring and governance and things like that. I mean, it was right around the time that we also invested in Compound. We led the seed of Compound Finance. And it's just cool to see how the industry has changed in the, I guess, five, six years since. 06:12: Tarun Chitra: I definitely feel a lot older, let's put it that way. 06:17: Anna Rose: Niraj, you just mentioned "We", so let's talk about the company that you were at before Ritual. You were part of Polychain. Were you one of the earlier people there? Were you the first generation Polychain investors? 06:30: Niraj Pant: Yes. I was in the apartment office. 06:34: Tarun Chitra: Wait, what employee number were you, actually? Would you say less than five? 06:39: Niraj Pant: I think somewhere around there. The investment team was three or four people. 06:44: Anna Rose: Had you done anything before that, Niraj? Were you a student coming out of school, or had you already worked in VC and stuff like that? 06:51: Niraj Pant: I dropped out to join Polychain. 06:53: Anna Rose: Oh, wow. 06:55: Niraj Pant: I was doing research in college. I went to the University of Illinois, and I worked with Andrew Miller for two years.
And in the summer of 2017, when Polychain was kind of growing, I met Polychain through one of their LPs. And I was looking to kind of do something in the space. And he said, you know, you should go talk to Olaf. And I remember meeting him, and he was one of the most weird, kind of eclectic, interesting people I'd ever met. And through that, I first joined as an intern. And after a month, I knew that this was way more interesting than what I was doing in school, and I could take my research and try to kind of apply it in an investing context. And so I ended up dropping out and joining a little bit later that year. 07:50: Anna Rose: Is that 2017? 07:52: Niraj Pant: 2017, yep. 07:52: Anna Rose: Yeah. Crazy. That was a wild time. You were at Polychain for quite a while. What was your time like there? I mean, you guys got to be in so many of those early deals when there were... I mean, I just remember at the time there, I mean, I don't know how many VCs there were, but like really legitimate kind of serious VCs who understood the tech seemed like few and far between. And I feel like you guys were definitely some of them. 08:19: Niraj Pant: Yeah, it was a really amazing vantage point to be a part of and also to see how the industry evolved and changed over the six years that I was there. I just remember the types of things that we were investing into in 2018 were vastly different from 2023. I remember in 2018, I was reading white papers of different Layer 1s and it was like new consensus mechanisms and things like that. ICOs were really big. And the industry, every single year in crypto, you get these compressed market cycles. So you really can kind of age as an investor, I think, a lot faster than you can in the traditional world. 09:03: Anna Rose: It sounds good and bad. Age like fine wine, get wisdom or yeah, age... 09:12: Niraj Pant: I think both. 09:14: Anna Rose: Okay. Nice. 09:16: Tarun Chitra: Given that you invested in a lot of ZK stuff early, and this is the ZK podcast, it would be remiss if we didn't ask you what your favorite ZK investment was. 09:25: Niraj Pant: Oh man. I will say one of my favorites. I don't wanna pick any one favorite. So one team that I really enjoyed working with was the Mir Protocol team, so Daniel and Brendan. I remember we met them, I think in the summer of 2019. They were building on top of Marlin, and they were trying to build a new ZK Layer 1. This was just at the time when recursive proofs were just starting to come into the fold. And they had some really interesting ideas around how you can basically start using ZK proofs for things outside of payments. During the Sapling and pre-Sapling times, generating those transactions used to take a long time, which now feels like a relic. So working with them was great because they both were not cryptographers by background, but were both very established software engineers with math backgrounds that very quickly became some of the kind of leaders in the ZK space. And so getting to watch them do that was exciting and getting to work with them. And then fast forward to, I want to say it was maybe 2021, I don't remember exactly, working with them to kind of help the Polygon acquisition come together, and I think they've taken it to some really amazing heights. You know, the series of Plonky stuff that they've done, and seeing it actually get to a really massive production scale with Polygon amongst the kind of other ZK acquisitions they made.
So that was just a really cool experience for me to see and also couldn't have happened to better people. 11:16: Anna Rose: Very cool. That's a nice shout out. Anish, I want to hear a little bit about you and your background. What were you working on before joining Ritual? Are you coming from crypto or are you coming from AI? Or neither or both. 11:33: Tarun Chitra: Very much. I feel like Anish has the greatest street cred of everyone on this podcast. So I think... 11:38: Anish Agnihotri: Tarun flatters me with that. I am coming 100% from the crypto side of the world, and learning the ropes on the AI side as I go. I started working professionally in crypto for Niraj, coming up on four or five years ago, I think. I sent him a Twitter DM in high school that he graciously opened and actually responded to and that's what landed me my first internship at Polychain that summer. So a good full circle finally getting to work for Niraj again a couple years after the fact. But yeah, spent a couple years, I think a year and a half at Polychain. I worked with Niraj, I worked with Akilesh, who's the other co-founder of Ritual, on the trading side of things. And I spent some time at Paradigm on the research team doing open source development. And finally, prior to Ritual, I spent about a year and a half doing on-chain MEV. And eventually, Niraj and Akilesh reached out and I said, absolutely, it'd be a privilege to hop back on. 12:43: Anna Rose: When you say you were doing MEV stuff, what... Were you a searcher? 12:48: Anish Agnihotri: Yeah, I was a searcher. 12:49: Anna Rose: Like, were you just doing that on your own or was this part of a crew? 12:53: Anish Agnihotri: Just by myself. 12:54: Anna Rose: Okay. What was that like? 12:56: Anish Agnihotri: It's a lot of fun. I'd say you get to experiment with the limits of what's possible on-chain. And so definitely a lot of fun things there. 13:07: Anna Rose: I remember having Dean from Dialectic on a while back, and he described how for folks who do that alone, it's kind of a lonely existence: you're out grinding, but there's no one around you. And that when you... In his case, he had built a team of folks doing that and sharing strategies and stuff. Did you actually have people also to share strategies with, or were you really just doing this on your own? 13:36: Anish Agnihotri: It's a competitive space. So you share strategies when there's a benefit to sharing a strategy, but I wouldn't say I went out of my way to disclose what I was doing or what I wasn't doing. But Dean and I go back and forth on the team versus independent. 13:50: Anna Rose: Which ecosystems were you focused on? 13:53: Anish Agnihotri: I'd say 70% was Ethereum mainnet. So that's definitely where the bulk of opportunity is. And I'd say 30% was a long tail of chains that a lot of folks have never heard of. So chains where there's 10 validators, you have to mail them hard drives so they can get you copies of their archive node. The long tail of things was the other 30%. 14:15: Anna Rose: So with today's episode, something that I'd like to do is I want to start us off talking about the intersection of AI and cryptography because I know that Ritual lands in that intersection. But I want to talk first very broadly, and I feel like since both of you have experience on the VC side, you will have seen a lot of things. And so I kind of want to cover some of those, go through what seems feasible, what doesn't seem feasible, and then I want to dig down into what Ritual's focused on.
So I had a few already off the top of my head of how these two things can intersect. I mean, over on the ZK side, we've sometimes thought ZK could be used to save us from AI, save us from deep fakes. But it could also be that cryptography is enhancing training maybe, or proving something, or keeping something private. Maybe we use the immutability of blockchain plus AI and that adds an extra feature. Maybe there's some other way in which they intersect. Tarun, do you have some ideas here? 15:21: Tarun Chitra: Yeah, I was just arguing that of the different components within AI engineering systems, some components are more friendly to privacy-preserving technologies, some are less friendly and less useful. And sort of there's kind of the spectrum that you have to kind of establish of which parts are worth having high amounts of trust in either economically or cryptographically versus which ones aren't. And that tradeoff surface is quite large. 15:48: Anna Rose: So that was sort of just like a rundown. But I want to hear from you guys. Like, A, is there anything I've missed there? Or do you want to dig down on any of those particular kind of ways that they can intersect? 15:58: Niraj Pant: Yeah, I mean, I think a lot of those are interesting areas and very applicable. And I think maybe in your questioning, you're highlighting that this is a really vast space, and there's a lot of teams working in very different areas. To give you, I guess, a little bit of the story of why we wanted to go work on this problem. The company was started in 2023. Largely, speaking for myself, I mean, others had different motivations, but what really interested me when I was investing at Polychain was trying to find out what developers would be doing... What they would be developing with in the coming years. Everything from new blockchains, interoperability solutions, ZK, and then obviously newer things today, like rollups, restaking, et cetera. I was always really fascinated by what would developers be doing on blockchains in the future that they're not doing today. And as I started learning and kind of looking at the existing meta of the AI space in 2022-23, it became really apparent that there were some really interesting intersections that were kind of bidirectional. Particularly, obviously, cryptography can help AI in a lot of ways because AI today is quite centralized, both from a development standpoint, from a deployment standpoint, a hosting and sort of usage standpoint. It's really only a couple of companies and people that are driving a lot of this development. And being able to apply cryptography to a lot of these actions allows you to have guardrails around this technology development and really foster a lot of interesting open-source work that you really couldn't do otherwise. And then maybe I think less obviously, although it's maybe becoming more obvious now, is AI can help supercharge a lot of Web3 use cases. Being able to take this new type of AI computer and attach it to different types of Web3 use cases, we find that there's a lot of interesting examples in a bunch of the different subcategories of crypto. And so as we're thinking about what's the opportunity space to tackle, there's really sort of three or four areas, if we can think of it in buckets, of what you can do in the AI crypto space today. There's the compute. So this is everything from training to inference, fine tuning and also attaching privacy to all of these things. There's things at the model level.
So this is things like model governance, sort of provenance of how the models are actually being used. And then one level above that is the data. So the data that's being fed into the models, being able to track the provenance of that, labeling how things come into the models. And then finally is the applications. So the applications that are built on top of all of this. So we can dive into it a bit more, but that's generally how I like to kind of dice the space in terms of the different opportunities that are available and people are working on. 19:24: Anna Rose: I really liked the way you kind of talked about how they're useful to each other, that bi-directional intersection, this idea that AI can use types of cryptography, but then like cryptographic systems we have can also benefit from using certain kinds of AI. But I do feel like all of this, it becomes much more nuanced when you get into what you're actually using in which direction and how maybe broad that AI is. I get the sense that if you use AI for blockchain, you tend to be using it in quite a narrow way so far. Would you say that's kind of correct? It's for a singular thing, like figuring out what... Like getting it to do one very small, very clear action. 20:09: Niraj Pant: My feeling today on why people have not done AI on-chain, or why AI crypto is happening in 2024 when it wasn't happening three years ago, is that it comes down to a couple of reasons. One is that the compute was too large to scale on top of existing blockchains. It would take too much gas, or the cryptography wasn't ready to even do it in an off-chain capacity. So that's one. The second was, I guess you could call it, the flourishing of generative AI models, which really opened up a new type of use case that we didn't view as possible before. I think the view for the general public for AI four or five years ago was that it's really interesting, but there's not a ton of use cases that are very consumer facing, maybe it's for things like recommendation systems or self-driving, but we didn't have as visceral of a use case as, say, ChatGPT or Midjourney or Runway. So after those things came out, after we had the transformer in the AI research landscape, and then we had large language models, and then, frankly, ChatGPT using the GPT system to show the power of this AI intelligence, we now found this really exciting, interesting use case that hundreds of millions of people are using in a matter of two years. And so I think it's largely those two things. Why today? Well, today we have new use cases for AI, the cryptography is a lot better, and we've just gotten smarter. There are more people thinking about what the interesting things are at the intersection. 21:57: Tarun Chitra: I guess one question I have is, you have actually spent time investing in both AI companies and crypto companies before. So I'm kind of curious, like what about the kind of changes that occurred and things you saw made you make the jump, right? I feel like in the time I've known you, you've been excited about lots of things, but none of them quite made you jump off the diving board. So what do you think is the thing that catalyzed that spring off the diving board, metaphorically? 22:30: Niraj Pant: My investing has largely been driven by my own curiosity. And crypto AI was an area that I had started spending time in in the summer of 2022.
So around the time of the Stanford Blockchain Week, the event that they usually put on, I started spending time with both AI academics, like some folks at Stanford and in some of the corporate labs. And then I was doing my usual talking to researchers in the crypto space, particularly in the ZK space. And what started as just a very simple problem, can you do inference inside of a ZK circuit, created this whole web of interesting questions to dive into. The initial answer, after looking at it quite a lot, was sort of no: putting a really big model inside of a circuit is still a large problem today, and the proving overhead is still quite high. But what that catalyzed was this question of why is this even interesting? And so as I started spending more time in both areas and getting a sense of what people were looking at, what they're excited about, seeing the way that AI has started to develop, the board issues at OpenAI and transparency issues around the foundation models, as a crypto person, these alarm bells started going off in my head. I was like, this is like the same problem we've seen in so many other centralized systems, and it's why we're building in crypto today. And that, I think, catalyzed something really powerful inside of me to go and try to do something in this space. I couldn't find quite the company that I wanted to invest into in this area. I think it was still a little bit too early. And so I'd always wanted to be a builder. It's something that I told Olaf on day one of joining Polychain, that I wanted to go found a company, and that day ended up being in May 2023. That, combined with being able to work with Anish, Akilesh, and so many other kind of amazing people on the team, was the reason to dive headfirst into it. And the last thing I'll mention is that crypto AI is exciting because it has a lot of skepticism around it, sometimes rightfully so and sometimes wrongfully so, by both apes and also researchers and people in AI and people in crypto. And so there's kind of a little bit of a chip on our shoulder to go make this work as a space. And there's not really many examples of this having worked in the past. Crypto AI is brand new. Building a Layer 2 or building something... Building a Layer 1 has data from the past six years, but this does not. And that excites me both as an investor and as a builder. 25:37: Anna Rose: Were you looking into the ZKML stuff when the spark came? Because I mean, I've sort of mentioned this on the show. I think early 2023, there was a ZKML group that was created, and there was incredible excitement for a very short period of time about that. And there's two companies that spun out from there, but then it chilled out. Were you in that kind of cohort as well? Were you watching that? 26:03: Niraj Pant: I was. Okay. As I was looking at some of the newer ZK areas of investment and what would be exciting places to deploy capital and spend time in, one that came up quite a lot when I was looking in 2022 was building these proving networks, which now have become quite a popular area for development. And these proving networks, I think, especially would be valuable in the cases where you need to generate really, really big proofs. So I'm talking zkEVM proofs and proofs of ML compute. And so as I looked into this early ZKML work, it felt like an area where you're doing... In some ways you're doing similar types of optimizations as you're doing in the proving landscape.
You're kind of optimizing how to parallelize the FFT and MSM calculations. And on the AI side, it's just optimizing the matrix multiplications. And it felt like the ZKML world could go through a similar cost curve reduction to the one we saw in the ZK space, especially around 2019, 2020, when we had that plethora of 20 papers come out. 27:28: Anna Rose: Snarktober. 27:29: Niraj Pant: Exactly, Snarktober. And that, I think, was a primary driving vector: if we can get the costs down, and we can talk later about other techniques that we're deploying and employing today to try to bring the cost down, then we can enable this decentralized future, this decentralized AI future that was impossible five or six years ago. 27:55: Anna Rose: I want to spend a bit of time understanding where Ritual is in kind of everything we've just described. So I want to go back to where you outlined the two ways that they intersect, where we could bring cryptography to AI or we could use AI to supercharge blockchain or cryptography based systems. Where would you place Ritual on those? Is it more like bringing AI to blockchain? 28:22: Niraj Pant: I don't know. This might be a better one for Anish to answer. 28:25: Anna Rose: Okay. 28:26: Anish Agnihotri: I'd say it probably starts with bringing AI to the blockchain. The idea is we have a good mixture of folks that have worked in crypto and in AI in the past at Ritual. And I personally don't have any experience doing anything other than working in crypto. So I'm going to borrow the experiences from the folks that I hear from. But I mean, Tarun can probably comment on this, but in the traditional financial world, it's almost unheard of to not use AI and ML models for day-to-day actions. If you go to the bank and you want to take out a loan, if you want to check your credit score, all of these things are risk models and ML models that are running thousands of times per second to finally come to a conclusion. That's because it's not like a finite state machine, right? It's not five if-else statements that determine whether or not an individual is qualified to walk into the bank and take out a loan. But in contrast, that is the way that the chain works today. Right? If you take a look at Compound or you take a look at Aave, or you take a look at other lending protocols, almost all of that computation of can someone take out a loan, can someone take some action, happens within this 24 kilobyte contract size limit that's imposed by the chain, of yes, everything has to fit within this machine. Anything that doesn't is sort of abstracted out to governance. It's, let's go off-chain and figure out these parameters. Let's go off-chain and figure out should we upgrade the protocol? Should we not? Let's go off-chain and figure out do we flip on a fee? Do we not flip on a fee? And so the initial sort of vision for Ritual was we've seen these inefficiencies to date, and a common solution seems to be why can't we bring all of this on-chain? Why can't it be that any ML model could just plug into any crypto protocol and be able to lead some of that decision making on who can do what? What permissions can change? What parameters can change? And being able to do that in a similar fashion to how traditional finance works today. So I would say part of it, at least what we're focusing on to begin with, is bringing that AI to the chain.
But it's a logical conclusion that when you bring AI to the chain, you can also bring the chain to AI in the inverse, where you benefit from the provenance, the digital scarcity, the payment flows, all of the things that we've sort of figured out in blockchain systems for the last decade already. Those just come naturally to the AI space. 30:50: Anna Rose: It's funny as you describe this though, it's going... I'm doing the throwback to that first interview I did with Tarun because I feel like Tarun when we met, Gauntlet sounded sort of like this too. Weren't you kind of exploring some of those ideas? 31:06: Tarun Chitra: Well, I think like the hard part actually, and I gave a talk last week on the sort of philosophical differences in research between crypto and AI, where one fundamental kind of difference between them is that crypto is very much focused on worst case analysis of what is the worst case... Under the worst case adversary, how well does the system perform? Can it be resilient to that type of behavior? And AI is much more about pure average case behavior. Like, I give you five million examples, I just want to be really good on average. You can have one really bad example where you completely mischaracterize, but as long as the average or the whole batch is good, I'm relatively happy. That's a very high level of distinction between the two. What that means is that in the average case world, you can handle higher dimensionality, higher sizes, higher bandwidth of data. So with a neural net, in some cases, the number of parameters in some of the newer models is like the size of entire blockchains that have existed for a long time, right? So there's sort of a sense in which, because you're willing to only care about the average case behavior, you can expand the set of things you can do. Whereas when you care about worst case behavior, you have to restrict what people do. You have to have a threat model for what the types of actions an adversary or non-adversary can take. And over time, there's some middle ground between these two, right? Like right now you kind of have AI in the high dimensionality, but average case, crypto and DeFi in the low dimensionality, but extremely adversarial, you can't make many assumptions about... You have to place all the restrictions on the assumptions you can use. But somehow, hopefully over time, we find a way to something in the middle, where you can have more expressivity than you currently have on blockchains, but you can also still have some of the guarantees. Maybe you don't need the guarantees to be as strong as, say, safety and liveness in consensus or soundness in a zero knowledge proof. But there's sort of something in the middle. And yeah, I think there's like many different ways to try to find what that looks like. 33:25: Anish Agnihotri: For what it's worth, the average case is also, rightfully so, the case that applies 99.9% of the time. 33:33: Tarun Chitra: For sure, right. 33:33: Anish Agnihotri: And then if we've seen anything in DeFi and crypto governance, it's, in most cases, you will just be on a well-defined set path updating parameters. And I agree, there's a lot of places where these models can slot in, where you hand off to a human in the loop when the time comes, if you pass some risk threshold, but it's really effective today to automate a lot of these things with these models for the average base case.
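To ground that average-case versus worst-case distinction, here is a small toy illustration (an editor's aside, not anything from the episode or from Ritual's stack): an ML-style objective scores a model by its mean loss over examples, while an adversarial, crypto-style objective scores it by its single worst example.

```python
# Editor's toy illustration (not from the episode): an ML-style objective
# scores a model by its average loss over examples, while an adversarial,
# crypto-style objective scores it by its single worst example.
losses = [0.01, 0.02, 0.01, 0.03, 0.02, 9.0]  # one catastrophic outlier

average_loss = sum(losses) / len(losses)  # what most ML training optimizes
worst_case_loss = max(losses)             # what adversarial settings demand

print(f"average case: {average_loss:.3f}")     # 1.515: dragged up, but bounded
print(f"worst case:   {worst_case_loss:.3f}")  # 9.000: one sample dominates
```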
And the best example I like to bring up is like, we talk about these optimistic rollups and fraud proofs, but if you look at it practically, we've only ever had one case of a fraud proof being disputed ever across any chain. And that was for I think an Optimism fork on ETH proof-of-work a couple of years ago. So that's like in my example of... In 99.9% of the cases, you will never get to the long tail of here's an adversary or here's a risk model. But when you do, you can always open up your escape hatch and go back to the system as it exists today. 34:35: Tarun Chitra: Yeah, I mean, I think this is sort of a larger form narrative of the general meta of either finding things in between crypto and AI, or things like EigenLayer of like, hey, maybe you can reduce the amount of trust you need for this application. Or you can scale your security in some abstract sense, as opposed to just strictly always assuming the worst case, where you're always overpaying, right? Like there's sort of this just in time, worst case security guarantee, which I think is a very nice thing philosophically. It's actually quite hard to achieve sort of formally and mathematically and engineering-wise, but I think that's kind of like the blend of everything is kind of trying to move in that direction. 35:22: Anna Rose: It almost sounds like if Ritual is able to create what you just described, then wouldn't it kind of replace some of the stuff that Gauntlet is doing? But Gauntlet's currently doing it manually or like more like with human brains? 35:34: Tarun Chitra: I think it kind of, this gets back to this question of like, what's a worst case parameter and what's an average case parameter, right? The worst case parameters, like margin requirements, are very hard to set because they have a lot of reflexivity to your security model. Imagine you have an AI model that is allowed to choose a slashing threshold for a proof-of-stake network. And the model is very good at average case behavior. But one out of every million samples it gets, it suddenly sets the slash to zero. And constructing that event that causes the slashing to zero and then you can double spend or whatever, choose your favorite attack, is a lot easier to do. Right? So there's this kind of threshold of things that are really easy and regular to optimize. Like, okay, fees, like what should the fee rate be? That's something where... Yeah, if the model fucks up, no one cares. Right? Like, okay, the protocol got slightly less revenue or like a user took a worse fee. But I think for these extreme mechanistic things, there's kind of this hard threshold between the two. And I think, ironically, not related at all to anything we talked about, but the debate in the Solana world right now over whether to have mempools or not is a very explicit example of one of these types of things where the average case and worst case can deviate quite substantially in terms of things that we're trying to automate, be fully automated or not. And I think, inevitably, this is one of the reasons why crypto is, in some ways, so much more difficult than AI: the MEV type of stuff, the real adversarial type of stuff, just doesn't exist there. Like adversarial machine learning, if you remember, we did a podcast with someone on this. Yeah, Fabian. 37:25: Anna Rose: Yeah. Long time ago, Fabian, yes.
37:27: Tarun Chitra: And adversarial machine learning is taking this kind of average case optimization and giving the model bad examples, like this is a cat and it's a picture of a dog, right? And trying to see if it can unlearn how to do that. And that's the type of stuff I think is closer to what AI models and crypto do. But these hard threshold type of things, these things are very sensitive to MEV, and the worst case is like, oh, you blow up the network. Those are things you have to kind of... And so I think there'll always be a world where you end up doing both. And I think there's kind of... It's kind of hard to imagine the worst case stuff being completely handled all the time. But to Anish's point like... 38:12 Anna Rose: You mean like handled purely by AI? You're saying you still need to have the human... 38:16: Tarun Chitra: Yeah. I mean, like trading is a great example of this, right? Where people have been doing quant trading for a long time. Obviously you'd think they'd be like, hey, we'll just do fully automated AI models. And basically no one, maybe barring one or two trading firms, actually really successfully deploys fully autonomous things at scale. They have a lot of automation to their strategies, but there's kind of this thing where, I don't know, Russia invades Ukraine and all of the AI bots do the same thing and buy defense stocks at the same time or something. There's kind of this adverse selection if you're fully... And so these worst case type of things are always going to need some notion of like, how do I tell the model to regime shift? But to Anish's point, I think crypto right now is focused on making 100% of transactions have this kind of worst case guarantee. But if you want to expand the usage to more normal usage, then you need to have a system where 99% of the time you can trust the average case thing that's automated, and 1% of the time, or whatever, you need the fallback. So I would just frame it in that lens. It's like, I think there's always inevitably, especially in a world where... Permissionless world where someone makes a new smart contract that completely interacts with all the existing ones and breaks your fixed model, you kind of still have to have some kind of not fully on-chain stuff. 39:51: Anna Rose: Or like way out or governance to fix or something. 39:54: Tarun Chitra: Yeah. Maybe it doesn't even need to be like the full governance, right? It can be some restricted subset of things. But I just think in the permissionless world, it's actually this... The on-chain agent stuff is actually a little bit more complicated than, say, even trading. Because in trading, you have kind of a restricted action space. It's not like people are adding and removing new assets all the time. It's not like people are adding and removing new exchanges all the time. But in DeFi, it's like I go on Solana and I look at an aggregator and suddenly there's five million shitty meme coins on 300 different DEXs I've never heard of that just got launched. Right? And now your agent has to be able to know that the state space just blew up, right? It increased a lot or shrunk a lot. Right? So, there's still like, I think that's something to keep in mind. And that's why I say the holy grail is sort of something that can be optimistic, but then falls back to worst case, but is able to figure out when to do that. Right? 40:53: Anna Rose: Autopilot, but with a pilot still there. 40:56: Tarun Chitra: Autopilot, but yeah, exactly.
I mean, look at self-driving cars, right? 40:59: Anna Rose: Yeah, sounds very similar. 41:00: Tarun Chitra: They kind of do have the same... It's a very similar... And that's where you want to be with blockchain. So the difference is self-driving cars started from this 99% of the time we want to work, 1% of the time, okay, you need intervention. Blockchain started from 100% of the time you need... 41:16: Anna Rose: You need intervention. 41:17: Tarun Chitra: Right? And we're going the other direction, right? And that's sort of the... 41:22: Anna Rose: What are the places where there can be more of that flexibility or more of that freedom? And I guess is Ritual looking into those specific places right now? 41:30: Anish Agnihotri: Yeah, I mean, I'm happy to start with that. I'd say, yes, there's a broad range of places you can slot in. I'd say part of it is there are places you can slot in a model for 99% in the average case. And to your previous question, what does Gauntlet's role look like here? In my mind, it's like we are making Tarun and Co's life easier. We're letting them focus on the things that they do best, which is being brainiacs that understand the internals of all of these protocols and understand the worst case, and letting them automate out the average case of... I don't know, Tarun, what does Gauntlet ship? Like 100, 200, 300 governance proposals a year? 42:09 Tarun Chitra: Yeah. I think it's somewhat increased. The annoying part, and this is exactly the part where the AI stuff would help if it was easy to do all the deployment, is that if you think about lending protocols, just a simple... This is just a simple back of the envelope type of thing. What am I doing? I'm putting in one asset, I'm borrowing another asset. There's parameters for every pair of assets. So the lending protocol's parameters are growing quadratically with the number of assets. What happens every bull market? The number of assets grows a lot. Now all of a sudden your parameter space grows a lot, right? And you want to try to only focus on the ones that are easy versus hard. There's a lot of stuff like that that exactly is like the long-term goal in my mind of this. 42:55: Anish Agnihotri: Yeah, and the example I give is like, Tarun and Co. is currently making these Gauntlet proposals to update these parameters to Compound for the average case or for the worst case. And I don't know how many people read those proposals, right? I certainly don't read those proposals. I'm sure 99.99% of crypto doesn't read those proposals. 43:13: Tarun Chitra: People in universities. 43:14: Anish Agnihotri: People... Exactly, the DeFi clubs in universities are reading those and deciding whether or not to vote. And the idea for us is there should be a way for Gauntlet, for Tarun, to just take the work they're doing off-chain, take the risk models they're preparing, take the ML work that they're doing to come up with these parameters, and to just execute those on-chain. Right? If we're already not having a majority of participants actually look at what's going on, understand the system. And you don't have to take my word for it, I've made four Uniswap proposals thus far, and two of them would have butchered the protocol had the team not swooped in and said, hey, this is incorrect. So it's like most people don't read these governance proposals or figure out what's going on. So in my mind, where Ritual kind of slots in is letting Tarun, letting Gauntlet, letting other people run these models on-chain.
Keep some safe parameter guards where it's like, say, bound the range. Say if it's between 5 and 15% for these slashing conditions, then the model is executing correctly, else kick back to governance, kick back to the old way of doing things [a sketch of this pattern follows this exchange]. But that makes the base case, it makes the 99% case, the average case, much more efficient, which I think saves Gauntlet a bunch of time not having to make these hundreds of governance proposals and get buy-in from the community and figure out these PDFs that you have to upload to the Compound governance forums and things like that. 44:37: Anna Rose: Is that now what Ritual is focused on being, or is that just like one area that you think tools like this could help with? 44:45: Anish Agnihotri: One area I'd say. I'd say Ritual is focused much more on being a general purpose execution layer for AI. So, sort of bring your own models, be able to do inference on-chain, be able to consume those outputs in your protocols. And it just happens to be that one of the easiest examples of how can I use this practically today, which a lot of crypto x AI folks are like, sounds really cool. There's a lot of money flowing into this, what do I actually do today? Here's a practical example. There's hundreds of DeFi protocols that have governance to update parameters. 45:15: Anna Rose: Little adjustments. 45:16: Anish Agnihotri: Little adjustments, small things that keep the protocol going. And the best thing is we can bring those on-chain so you can actually see what's going on, see that execution, see a proof of here I actually ran this thing, I'm not just taking somebody else's word for it or relying on the five people that actually read this forum to make sure the discussion is correct. So it's like one application. 45:37: Anna Rose: Why do you have to do the inference on-chain though? Could you not do it in sort of an off-chain, bring it on-chain way? 45:45: Anish Agnihotri: Yeah, so to be clear, the inference itself is "off-chain." We're not running these ML models within the Solidity VM environment or things like that. The way that we do it today is we're exploiting pre-compiles and sort of these escape hatches out of the Solidity VM to do some execution and then bring those results back on-chain. I can break that down. There's really two ways to do compute on-chain today. If you have the EVM and you want to do something that's expensive, the first way is an Oracle system. This is what Chainlink does really well. This is what others like Band Protocol, things like that, do really well, which is, hey, I have some compute, let me issue off a request of, hey, I want to do 1 plus 1. Some off-chain node executes that, and then they return that response back on-chain. That's how Oracle networks work today. Our first product, Infernet, an Oracle network, works similarly. But you have your pros and your cons. And your biggest problem there is that it's very asynchronous. You're sort of issuing a request, and you're waiting for that request, which makes it difficult and complex to build a protocol. If I'm a developer and I want to get some response, usually it's I request it and I get it back immediately. Not so much I have to wait on blocks, or I have to expose some MEV so some keeper off-chain can return my response, or I have to introduce some Oracle to my system. And so the way Ritual works is we have these opcodes and these pre-compiles that extend the EVM and extend other virtual machines so that a developer can just...
As they do a math operation or a multiplication operation, also do inference operations directly within the environment they're familiar with, sort of just abstracting out any thinking of where it happens. But to your question, that does still happen off-chain, it's just kind of we're blurring the lines between what is on-chain and what's off-chain. 47:30: Anna Rose: This also sounds a little bit like you're working in the coprocessor territory, because there is something happening on-chain. How are you communicating out to those extensions exactly? 47:41: Anish Agnihotri: Yeah, definitely in the coprocessor category. And that's how I would describe part of what we're doing at Ritual. We are communicating out via pre-compiles. So every node that runs our modified virtual machine is also running these implementations of our hyper optimized AI operations, whether that's like an inference operation, whether that's matrix multiplication, a vector DB search, and we just expose those to the VM. So that's sort of the interplay communication. And this is something we've only started to see, and I'm surprised, we've only really started to see it in the last year or so, come up in sort of the blockchain ecosystem. It's like Arbitrum is doing Stylus, which is like, let's execute Wasm code via pre-compiles, and now all of these optimistic rollups, OP Stack and Arbitrum Orbit, are starting to extend the capabilities of your blockchain by embedding these computations into the VM. So that's kind of what we're exploiting. 48:40: Anna Rose: Interesting. What is the input to these models? I mean, you sort of mentioned general purpose, but are you trying to get historic information in? Like, if you're trying to get some outcome, you're trying to get it to give you a new rate or make that proposal. Yeah, what is it taking in? 49:00: Anish Agnihotri: That's a good question. Candidly, we're starting simple. And the idea is let's get something into people's hands that they can play with and figure out what's needed and then expand from there. To begin, you can imagine that at least all on-chain inputs are exposed. So whether that's past block history, whether that's current inputs in the context of your Solidity smart contract, you can put those into, like, a black box inference call, taking some inputs, returning some outputs. And based on that, you can modify state. So that's kind of what we're starting with. And I expect that the initial models won't be something as fancy as, don't imagine we're running a GPT-3.5 call and that's returning some text output and we're parsing that on-chain. I think it's going to look much less like that and much more like simple linear regression models, classification models, things like that, 50,000, 100,000 parameters that take in some vector inputs or some integers and return some integer, constrained output that can actually be used in a protocol. But to be honest, we don't know yet. Nobody's done anything like this, and so it's much more the developers might go wild with it and we'll go from there. 50:07: Anna Rose: Yeah, because you're building just the ability to use inputs that I guess you choose and have outputs. You're not actually creating for one particular application, but it's a little bit more general.
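To make that concrete, here is a minimal editor's sketch of the guard-rail pattern described above, written in plain Python rather than Solidity. Everything here is illustrative: the names (run_inference, propose_rate, queue_governance_vote), the toy linear model, and the logic are hypothetical stand-ins, not Ritual APIs; only the 5-15% bounds and the fall-back-to-governance idea come from the discussion.

```python
# A minimal sketch of the guard-rail pattern described above, in plain
# Python rather than Solidity. All names (run_inference, propose_rate,
# queue_governance_vote) and the toy linear model are hypothetical
# stand-ins, not Ritual APIs.

RATE_LOWER, RATE_UPPER = 0.05, 0.15  # the 5-15% bounds from the discussion

def run_inference(features: list[float]) -> float:
    """Stand-in for a small constrained-output model, here a fixed
    linear regression over some recent protocol features."""
    weights = [0.04, 0.08, 0.02]
    bias = 0.01
    return bias + sum(w * x for w, x in zip(weights, features))

def queue_governance_vote(proposed: float) -> None:
    """Stand-in for the existing path: escalate to human governance."""
    print(f"out of bounds ({proposed:.3f}); kicking back to governance")

def propose_rate(features: list[float]) -> float | None:
    """Apply the model automatically in the average case; fall back to
    governance whenever its output leaves the safe range."""
    proposed = run_inference(features)
    if RATE_LOWER <= proposed <= RATE_UPPER:
        return proposed              # average case: applied automatically
    queue_governance_vote(proposed)  # worst case: humans take over
    return None

print(propose_rate([0.6, 0.3, 1.0]))  # 0.078 -> in bounds, applied
print(propose_rate([2.0, 1.5, 1.0]))  # 0.230 -> out of bounds, governance
```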
50:18: Tarun Chitra: Maybe one thing worth talking about here that's an extension to this is, how are you thinking about sort of the network aspect of this, like the fact that people are paying for resources of like running a particular model, people are maybe paying for the data sources, maybe they're paying for... How are you thinking about sort of the marketplace kind of creation problem here, not just like the technology part? 50:42: Anish Agnihotri: The way that we're thinking about it is there's a lot of participants in the Ritual ecosystem. And this is in contrast to sort of how blockchains work today, or if you take something like the Ethereum chain, if you take mainnet, the only real participant is like a validator or a node that executes... All of them execute the same operations, right? If I send a transaction to Ethereum today, every single validator, every single node will execute that transaction, come to a conclusion and say, yes, this is valid, let's move on. We've, at least in Ethereum, optimized for a very similar set of node parameters, right? It's like, if you have 8 gigabytes of RAM, if you have some storage, if you have a CPU, great, you can validate the network. We're thinking about it a little differently. For us, we will have multiple classes of nodes, whether that's compute nodes that handle just basic EVM operations, whether that's AI inference nodes that handle a long tail of inference operations, and even subcategories between those too. You might have a consumer grade GPU, you might have an enterprise grade GPU, you might be a DePIN network that's running a fleet of thousands of GPUs, things like proving nodes where today you can generate proofs on a CPU, but there's a bunch of specialized, and I'm sure the both of you are very well versed, specialized hardware companies that are building ZK ASICs and proving systems. And so the idea for us is how can we sort of separate these types of compute and the types of things that are happening on the network into different buckets where some participants might be better versed for that type of compute and they can really shine in executing those things. But that opens up a can of worms in terms of how do you price this? In terms of like, the broad example we like to take internally is say I have an LLM inference call. That might take four minutes to do on a consumer grade GPU, but it might take 10 seconds to do on an enterprise H100. And so for us, the question is, how do we price that sort of time sensitivity, that resource sensitivity of, should I be willing to pay more to have access to a better GPU, or should I be allowed to pay much less if the time sensitivity isn't a big deal for me, and I'm willing to wait four minutes? And so that type of research is something we're working on internally of, now that we have this heterogeneous class of nodes, how do we actually price it and figure things out? [a toy illustration follows below] 53:08: Anna Rose: Are you creating the network of the provers, like the GPUs, like the hardware that would actually be running these models? Is that part of what you're building? 53:17: Anish Agnihotri: Yeah, exactly. Not to get into the deep end of all the things we could be creating, the most succinct way is we're focused purely on sort of the execution layer of how can we bring AI to an EVM, to an SVM, to another virtual machine environment. And downstream of that, you get questions like, okay, who's actually bringing this GPU compute and things like that?
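Picking up the pricing question from a moment ago: since Ritual describes this as open research, the following is a purely illustrative editor's sketch of one naive shape the problem could take, with invented rates, runtimes, and names; it does not reflect Ritual's actual design.

```python
# Purely illustrative: a naive way to price one inference job across the
# heterogeneous node classes discussed above. Rates, runtimes, and names
# are invented; Ritual describes the real pricing question as open research.

HARDWARE = {
    # name: (cost per GPU-second in some unit, expected seconds per job)
    "consumer_gpu":    (0.0004, 240.0),  # ~4 minutes, cheap
    "enterprise_h100": (0.0150, 10.0),   # ~10 seconds, expensive
}

def quote(deadline_seconds: float) -> dict[str, float]:
    """Price each hardware class that can finish before the deadline."""
    return {
        name: rate * runtime
        for name, (rate, runtime) in HARDWARE.items()
        if runtime <= deadline_seconds
    }

print(quote(300.0))  # patient caller: both classes qualify, consumer is cheaper
print(quote(30.0))   # latency-sensitive caller: only the H100 qualifies
```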
It just happens to be that our node, our network allows all of these classes of participants to very easily plug in. If you're an existing DePIN network, if you're an existing GPU resource network, it's a very minimal addition for you to also help validate our network. And it's like a symbiotic relationship even. 53:57: Anna Rose: Tarun, I need to add this to our... You know how in the episode we did with Uma, we talked about all the different roles that a validator eventually could take. 54:06: Tarun Chitra: Oh, yeah, yeah, yeah. 54:07: Anna Rose: The prover... 54:07: Tarun Chitra: Another meta, other than just like only pay for as much security as you need, that seems to be true in the fundraising and project world, is the meta of actually, we're going to have many different types of validators, and there's going to be sort of a marketplace for what subset are needed at what times. And again, I sort of feel like the thing that has both of these at its core center is probably EigenLayer, because there's just so many different types of node operators built into that protocol. But I think there's also a lot of other things, whether they're in the AI world, like Ritual, things like, I think, how a lot of the ZK proving markets want to evolve will also be multi-resource based, so different types of nodes. And then I think there's some people who want to do ZK compute without a blockchain that have another version of the world that has multiple types of node resources. So there's a lot of stuff that's actually all at the same time kind of going for this multi-classes of node resources thing. And it's one of those things where it feels like when enough smart people are all going after a similar design or architecture, there's some real truth to it. 55:21: Anna Rose: We hope so. Because it actually feels like a lot of different teams shooting in all these different directions, and yet there's a similarity between them. Because I think of the EigenLayer restaking, and then I think of the DA version of that. And then there's the prover networks that stand alone, and then there's the sequencers that we think, especially if it's rollups, especially if they're using ZK, that they'll also be proving. And yet, I feel like because there's teams that also have dedicated their... Like they're building some sort of new infrastructure layer, their goal is to educate one on what they're doing, what a shared sequencer looks like, or what all these different roles will look like. I think it's because I have a validator company, we're looking at it like, so what are we gonna run? And are we gonna run all of those things is sort of the big question that we've been thinking about. And they do seem to fall under the same... Like into a category or categories that are close to each other, and yet, I don't always know in which direction they're going. Like, I don't know if the prover network takes on the sequencer, or the sequencer generates the prover network. Or in the case of what you guys are doing, is it connected to the... Maybe not sequencers in this case, but some sort of proving? This actually leads me to something else I wanted to ask you, which is, is there ZK in anything that you're doing? Are you using ZKPs? 56:46: Niraj Pant: Yes. So there's, I guess, a couple of pieces where ZK could fit in. I'm going to particularly talk about integrity, and I'm sure Anish has stuff to add here as well.
So one of the pieces that's important for certain blockchain users is being able to have a proof of integrity of the compute that's actually being done, whether that's inference or fine tuning or whatever else. And so our approach here is not to have kind of a one directional sort of monolithic approach to security. We don't enforce one particular security paradigm on all of the users. We instead do more of a piecewise fit the proof to the user and use case type of approach. So there's many different ways that you can achieve integrity with blockchain systems. There's the put it on Ethereum route, there's ZK proofs, there's optimistic proofs, doing the compute inside of a TEE. So our approach is to really offer the wide variety of all of these different types of proofs for various different use cases. So, for the DeFi use case that Anish and Tarun were talking about, that probably is a good fit for a ZK proof, because in many cases, the compute is not gonna be massive in size, and you want some sort of finality or guarantees around the compute happening in a certain timeframe. Whereas let's say that I'm building a recommendation system on Farcaster, I don't really care if it's ZK proved. I think that's probably fine to have an optimistic proof, because if I don't like the results, I'm just going to refresh it and try it again and keep trying it until I get the results that I like. So we kind of... We call it a proof marketplace, but we have a pretty wide variety of groups that we'll be integrating with to offer these proofs of integrity. So ZK is definitely one piece of that. And one of the applications that we released, I want to say it was in November, December, was this application called FrenRug. 59:00: Anna Rose: You rug a friend? 59:02: Niraj Pant: Rug a friend, exactly. 59:02: Anna Rose: Oh no. 59:05: Niraj Pant: What it allows you to do is, normally in Friend.tech, you go and you enter a room and it's that person that you subscribe to on the other end responding to messages that you might send. Instead, we stuck an LLM or a series of LLMs behind that system. And what you try to do is you try to convince the Friend.tech bot to buy and sell your keys. And so we threw in a bunch of examples to kind of make it difficult to actually go and have it buy your keys. And we seeded it with a prize pool. And the idea of this hack was to show a pretty wide range of things that you could do on top of Ritual. It shows you using an LLM, it shows the LLMs' responses being fed into an on-chain classifier. And then we used a ZK proof to actually prove the integrity of that thing, and it's also all fully open source. So we want people to kind of follow this model and build these kind of on-chain or partially on-chain systems that leverage Ritual's infrastructure in some way. So it's definitely a big component of the space and a piece of what we're building for sure. 1:00:25: Anna Rose: Cool. 1:00:26: Tarun Chitra: One of the interesting things about these examples is they have different levels of complexity of the models used, different types of data sources. I mean, just to give maybe the listeners a bit more of a direct feel, what kind of model complexity do you think people can expect right now? What kind of models do you think people will be able to use in a year? And how does that impact the network, all of these different participants you are talking about? 1:00:52: Anish Agnihotri: I'd say it largely depends on the trust assumptions that users and protocols are willing to have.
1:00:26: Tarun Chitra: One of the interesting things about these examples is that they have different levels of complexity in the models used and different types of data sources. Just to give the listeners a bit more of a direct feel: what kind of model complexity do you think people can expect right now? What kind of models do you think people will be able to use in a year? And how does that impact the network and all of the different participants you've been talking about?
1:00:52: Anish Agnihotri: I'd say it largely depends on the trust assumptions that users and protocols are willing to have. And the reason I say that is that we are first and foremost a product-focused company, so we plug and play into the long tail of open source Optimistic ML, ZKML, closed-source private models, and TEE systems like Niraj mentioned. And each of those has its own range of models that it supports. I'll give you an example. If you want to run with this FrenRug example Niraj gave, the entry point for that was three LLMs running a Mistral 7B model. And today you can't actually snark the Mistral 7B model in a circuit and generate a validity proof of execution, because there are just far too many parameters, and you would blow up the compute requirements, the proving times, things like that. So we went with an optimistic approach for the first part of FrenRug, for the LLM system: let's just wait a period of time, and let's get a council of three separate LLMs to come together. So it's not just one LLM that could mess up; you now have three, so you're increasing the sample size. And so for some applications, practically speaking, if you want to use these massive LLMs, these 65B-parameter models, you're probably going to default to OPML, the optimistic world of things, and you'll be able to use that. On the other side, the second part of FrenRug was a classifier model, where we took in the three outputs from the LLMs and said, okay, classify and summarize these into one action that should be taken. That we trained at somewhere around 100,000 to 200,000 parameters. We generated a Halo2 circuit, and we had a succinct proof that you could execute and use entirely on-chain. So the answer is really broad. Given where research is today, there are a lot of simple classifier models and regression models that you can use succinctly on-chain, and we've gone pretty deep on tree depth, on how big those can get. On the other side, there might be use cases people are experimenting with that require much, much more compute, which may not be possible today but will be possible a year, two years, five years from now. And the idea behind Ritual is: how can we stay as agnostic and as open as possible, such that when that research catches up, you can slot in different proof systems and different ways to execute those models on-chain. So practically, you're not restricted from running any type of model, as long as it fits within the bounds of an available proof system, or you bring your own. That's the way we're thinking about it today.
1:03:29: Anna Rose: How much has been built? You talk about having those options, but are those all mapped out and built already? Or is that more roadmap?
1:03:38: Anish Agnihotri: Yeah. So we have two core products. The first one is a system called Infernet. That's what FrenRug is built on today. It's an Oracle network, and it's proof-system agnostic. You can bring your own proving system, your own Halo2 verifier, your Plonky3 verifier, whatever you choose, and you can start to use these applications that bridge AI to blockchains today. It's been deployed for a couple of months, and we've seen explosive growth in people running these nodes and interest in developing applications on top. And you can deploy it to any EVM-compatible chain today, whether that's Base, mainnet, or a testnet. You can use this today and get started.
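As a rough illustration of the asynchronous Oracle pattern Infernet uses, here is a minimal request/callback sketch. Every name in it (InferenceRequest, off_chain_node, consumer_contract) is a hypothetical stand-in; the real interfaces are documented at docs.ritual.net.

```python
# Hypothetical sketch of an asynchronous inference Oracle flow.
import asyncio
from dataclasses import dataclass


@dataclass
class InferenceRequest:
    request_id: int
    model: str       # e.g., "mistral-7b"
    prompt: str
    proof_kind: str  # "optimistic", "zk", or "none", chosen by the developer


async def off_chain_node(req: InferenceRequest) -> dict:
    """Stand-in for an Oracle node: run the model off-chain, attach proof data."""
    await asyncio.sleep(0.1)  # simulate inference latency
    return {
        "request_id": req.request_id,
        "output": f"(model {req.model} response to {req.prompt!r})",
        "proof": f"<{req.proof_kind} proof placeholder>",
    }


async def consumer_contract() -> None:
    """Stand-in for the requesting contract: submit, then handle the callback."""
    req = InferenceRequest(1, "mistral-7b", "Should the bot buy these keys?", "optimistic")
    result = await off_chain_node(req)
    print("callback received:", result["output"], result["proof"])


asyncio.run(consumer_contract())
```

In the real system the result would arrive in a later transaction via an on-chain callback rather than an awaited coroutine; the sketch only shows the shape of the asynchrony.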
The second product we're working on is the chain itself, which enshrines some of these AI operations natively into the virtual machine. That's a work in progress that we've been building for upwards of a few months, since the first inklings of it, and that's what we're working on now to get into people's hands.
1:04:40: Anna Rose: That first part that you talked about, the Oracle, are you saying that today someone can already make those choices that you outlined before?
1:04:48: Anish Agnihotri: Exactly.
1:04:49: Anna Rose: Okay.
1:04:49: Anish Agnihotri: Yeah. You can go to docs.ritual.net, you can bring and integrate this into your protocol today, and you can decide: do I want an optimistic proof here? Do I want a ZK proof here? What do I actually want to do in my protocol? And that's available right now.
1:05:09: Anna Rose: The builder would be building in their own. It's not that you have those pre-made for them, is it?
1:05:15: Anish Agnihotri: So we have some examples.
1:05:16: Anna Rose: Like the Halo2 one.
1:05:17: Anish Agnihotri: Like the Halo2 one, using ZKML to generate proofs with Halo2. We have some optimistic examples, like one where you wait one hour, and if nobody comes and says, hey, this is wrong, then you continue execution. We have some that don't use a proof system at all. A lot of early developers are using these things for generating images for NFTs, and there the hallucinations and long-tail problems are actually part of the excitement. It's like, oh look, my Diffusion model messed up, but it looks so cool that I don't really care if it's wrong. So we have some examples, but it's up to the developers at the end of the day. We've kept it generic enough that you can slot things in as you'd like.
1:06:01: Anna Rose: Cool. So when you talk about the chain, when you talked about these different classes of nodes, is that what your chain is going to look like? Is that the part you're still working on?
1:06:11: Anish Agnihotri: Yeah, exactly. And I'd say the question folks could ask is: why do you need a chain for something like this? Why is this what you're building? There are two answers to that. The first is that there are a lot of applications that will not want to move. If Compound is deployed on mainnet, we're not going to expect Compound to wake up tomorrow and say, well, now we're deploying to a Ritual chain to use this infrastructure. For those systems we have Infernet, a product that plugs and plays into any EVM chain. It has its downsides: you have to deal with an Oracle system, it's asynchronous, and there are limitations to what you can and can't do, but it's available today and any protocol can use it. But the way we're thinking about it is that the DeFi protocols and crypto applications of five years from today will probably look very different, much more efficient and advanced than what exists now. And for those, you really need to pop open the escape hatches and go build at the core execution layer and consensus layer, asking how we can optimize these things and build them out. That's where the chain comes in: for the future applications, for the people who do want to exploit 100% of what AI can bring to their protocols, that's where the chain slots in. And for that, of course, you need the different types of validators, the proof systems, all the intricacies of making this thing actually work and be at least simple enough for a user to just hop on and use.
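The wait-one-hour example above is an instance of a challenge-window pattern, sketched minimally here: a posted result finalizes only if nobody disputes it before the window closes. This is an illustrative sketch, not Ritual's implementation.

```python
# Hypothetical sketch of an optimistic challenge window.
CHALLENGE_WINDOW_SECS = 3600  # the "wait one hour" example


class OptimisticResult:
    def __init__(self, payload: str, posted_at: float):
        self.payload = payload
        self.posted_at = posted_at
        self.challenged = False

    def challenge(self, now: float) -> bool:
        """Dispute the result; only allowed while the window is still open."""
        if not self.challenged and now - self.posted_at < CHALLENGE_WINDOW_SECS:
            self.challenged = True
            return True
        return False

    def is_final(self, now: float) -> bool:
        """Execution continues only if the window passed with no challenge."""
        return not self.challenged and now - self.posted_at >= CHALLENGE_WINDOW_SECS


result = OptimisticResult("classifier output: buy the keys", posted_at=0.0)
assert not result.is_final(now=1800.0)  # still inside the one-hour window
assert result.is_final(now=3601.0)      # unchallenged after the window: final
```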
1:07:39: Anna Rose: All right. Well, good luck with all of that. And thanks so much for coming on the show and sharing this revisit of the crypto AI space: how you're thinking about it, what you're actually building, the areas you're focused on. Yeah, thanks so much for coming on.
1:07:55: Niraj Pant: Yeah, thanks for having us.
1:07:56: Tarun Chitra: It was exciting to chat about this. I think the thing about crypto and AI stuff is: 98% is a scam, 1% is an honest effort, and 1% is real but hard to understand. So it's always good for that last part to get illuminated.
1:08:14: Anna Rose: Yeah, thanks again. And I want to say thank you to the podcast team, Henrik, Rachel and Tanya, and to our listeners: thanks for listening.