[00:00:00] Katherine Druckman: Hey everyone, welcome back to Reality 2.0. I'm Katherine Druckman. Doc is vacationing somewhere very cool, so he will not be joining us today. But we do have Kyle Rankin, who, you know, is President at Purism, a company that makes really cool privacy-respecting hardware that you need to check out. And we have Dave — Dave is now part of Cryptid Technologies, [00:00:24] Kyle Rankin: Good. Okay. [00:00:30] Dave Huseby: Cool. [00:00:31] Katherine Druckman: a fun new project that he's working on. We've had Dave on the show a few times before, and Dave always brings his big ideas, and we appreciate that, and we're going to explore one of them. Most recently in particular, he has written something of a love letter to Elon Musk. You might wonder what I mean by that, and to find out, I encourage you to stick around, because we're going to talk about some interesting concepts around pseudonymity, maybe a little social media as it is topical right now, and individual digital sovereignty and freedom and liberty and history. It's going to be great. So with that, Dave, can you tell us a little bit more about this love letter to Elon Musk? [00:01:15] Dave Huseby: Sure. It's a blog post, and I'm sure there'll be a link associated with this show when it drops. The point of the article was to try to bring some sanity to the debate around authenticating real humans while preserving some degree of anonymity or privacy. You used the term pseudonymity earlier in the open, and I think that's actually the correct term to use. There are legal reasons for this, and there are also moral reasons, but the difference between anonymity and pseudonymity is really that with anonymity you have no name and there's no way for you to ever be discovered — no way to find out who you are. It also means that you can never accurately prove that you were the author of something.
So anonymity is kind of out. I know that Elon used the term anonymity, but I think what he really meant was pseudonymity. There's been a lot of back and forth, a lot of hair pulling and screaming: Twitter can't possibly provide any kind of privacy, you have to be who you are, use your real name, all that stuff. But at that point, if you do that, you can't really speak your mind, right? So the article opens — and this is why it's kind of a love letter, because I know he likes really interesting historical tidbits; he's a bit of a history nerd — with something I discovered, I don't know, 10 or 15 years ago, when cryptography started to come into question again, 10 or 15 years after the first crypto wars. I discovered that James Madison wrote a letter to Thomas Jefferson in 1788, ahead of the first session of Congress, discussing the creation of the Bill of Rights: how James Madison wanted the Bill of Rights to be structured, what topics were going to be addressed, and which political movers and shakers in early America were going to be brought on board with this Federalist idea that we need a central government and the Constitution and all that. What's interesting about this letter is that it's encrypted, at least in part. It uses a cipher that Thomas Jefferson invented. And I just think it is so perfectly American that two of the framers of our Constitution, discussing one of the most important parts of our Constitution, used technology to ensure privacy and confidentiality so that they could speak freely about it. The parts that are encrypted in the letter are James Madison's opinions of other people. He discusses, we're going to do this, we're going to do that — and then he encrypts things like, I think that guy is kind of a jerk, and I think their point of view is irrelevant.
And so his own personal political opinions were the things that he encrypted to Thomas Jefferson. And there's a bit of a timeline here — I'll wrap up the history lesson real quick, but there's a timeline that's really important to Twitter too. So that was in 1788. But way back in 1722 or 1724, a good 60-some years prior — those of us of a certain age who can remember 1988, imagine being in 1988 and thinking back to the 1920s; that seems like black-and-white history, a long time ago. So there was a long time between the 1720s and 1788. Back in the 1720s, Benjamin Franklin, as a teenager, started writing letters that were published in his brother's newspaper as Mrs. Silence Dogood. This is like the classic cartoon: on the internet, nobody knows you're a dog. Well, apparently in the 1720s, if you published in the newspaper, everyone would believe you were an elderly widow — nobody would know you were actually a teenage boy. Anyway, he started writing these letters in the 1720s, and then all through the thirties and forties and fifties and sixties and seventies, as we got closer and closer to the revolution, Benjamin Franklin continued to write publicly under false names. He published a political pamphlet called Plain Truth, to raise a militia to defend the city of Philadelphia against privateers, and he published that as "a Tradesman of Philadelphia." And then of course he wrote Poor Richard's Almanack — Richard was a made-up character, but that was him. And after the revolution, as we were arguing for the Constitution, others picked up on this too: you have John Jay and James Madison and Alexander Hamilton all publishing the Federalist Papers under Publius, right?
So this was a proud tradition and a very common thing to do — pardon my language, but the modern equivalent is shitposting on Twitter. They were writing rabble-rousing, somewhat treasonous, somewhat offensive, somewhat thought-provoking, and very often humorous political speech — hardcore political speech — under false names, in a public space, read by tens of thousands, hundreds of thousands of people. The reason the history is important here is that it establishes — if you are at all a traditionalist and believe there is an American tradition that we should at least understand and pay attention to — the root of that tradition: technology has always been used in this country to aid our discussion about politics and our zeitgeist and all that. And more importantly, pseudonymous speech: publishing under pen names and being edgy and calling for revolution and calling for a whole new constitution. We had just become a country, we already had the Articles of Confederation, and then everybody was like, no, no, no, that sucks. And others said, wait, we have a country — what do you mean we have to redo the country? Was this just a beta? Are we going to reboot the whole country? And, well, not everybody, but a few key people — Alexander Hamilton, James Madison, Thomas Jefferson, George Washington — started arguing for this Federalist constitution. And they did it under pen names, and for good reason, because back then nobody really had any loyalty to any central government. They had loyalty to their states. They had just fought a revolution, which was still tenuous.
People thought, oh, in 10 years maybe we'll just go back to being a British colony again, or we're just renegotiating with the British crown — we'll get our new terms and still be part of the empire. The creation of the Constitution basically closed that off. And there were a lot of loyalists still in the fledgling United States, so there was serious risk to them personally, financially, politically if they published under their real names. So there was an 18th-century equivalent of being canceled, just like we have today. Literally, this has not changed. This is how it's always been. People — [00:09:42] Kyle Rankin: look into it — [00:09:43] Katherine Druckman: get exposed, assuming they're exposed. [00:09:45] Dave Huseby: Yeah, exactly. And everybody looks at Twitter thinking this is a modern thing. It is, to a certain degree, in the sense that it's our current version of the newspaper and the pamphleteering and the provocative posting of the 18th century, and then all the muckraking of the late 19th and early 20th centuries. Any student of American history knows that our First Amendment and our free speech rights have always protected wild political arguments. So, anyway, that's the basis of this. People today who argue that Twitter should not allow the full spectrum of political speech on the platform are actually running contrary to American tradition, and this history clearly shows that to be true. And then the other piece, like I said, is using technology to support and enforce that: the use of cryptography and the printing press is also key to the American tradition.
So when they wrote the First Amendment, when they wrote the Fourth Amendment, the use of technology to enforce those rights was assumed, because they actually used it — the people who literally wrote the Constitution used encryption and anonymous shitposting in public. So to say that today is any different is to lie about history, right? Sure, we have new technology, but we should be aiming for a robust and honest debate that respects people's privacy. You hear people say, your free speech rights don't mean you're free from consequences. That may be true, but I should also be able to publish under a false name and then be somewhat free from consequences. Then you get the pushback: well, then everything on Twitter is going to be illegal content — stuff we have deemed illegal for various reasons, like content of people being harmed, let's just put it like that, or direct, actionable threats against people. That's something that has been carved out as an exception. I think it's important to point out that exceptions to the First Amendment have happened roughly once every hundred years. That's the rate at which our society has accepted restrictions, and they've been very narrow in scope. So anyway, that's how I opened it: something very interesting, hoping to catch his attention. But the rest of it discusses this research project that has turned into a company we've talked about before on here. This started about three years ago, and it's been a stealth-mode company until just recently — we came out of stealth a week or two ago — and it comes out of 20 years of my personal research. I have a long career in security and privacy research.
I worked at Mozilla as a security engineer and studied surveillance and anti-surveillance techniques on the web. I worked at Hyperledger for four years as their head of security, studying how blockchains could be used for, you know, everything. In the end, what I realized was that the problem everybody's scrambling to solve is authenticity. By authenticity, I mean: how do you know any piece of data came from where it says it came from, that it has not been modified, and that the thing that created the data — the person or the computer or whatever — has not recanted it, or hasn't revoked it, which is the term we use. The reason I start there is that it took 20 years for me to get to that point — that this is actually the underlying problem. I think a lot of people haven't gotten there yet, but there are a lot of people still searching, trying to solve this problem. The Web3 stuff, the NFT stuff — that's all an attempt to solve authenticity. Like, I have a legitimate copy of some digital artwork — that's all an NFT is, right? That's why the joke of "ha, I right-clicked and stole your art" misses the point. The point is that I have provable provenance, and I own one of the legitimate copies. There could be a million copies, but I own the provenance on mine. I'm one of the few people who can say my copy is the legitimate one. And the reason that's important is that our digital identity on the internet is just a narrow form of authentic data, because transmitting authentic data on the internet is the same thing as transmitting trust. If there is an organization in the world that everybody trusts — say the DMV of Nevada, a government organization, or say
Bank of America, or whatever — an institution that people trust to do a good job of verifying who I am. If they give me that data as authentic data, meaning it has cryptographic provenance associated with it that can be independently verified by anybody, then you don't have to trust me. I can give you that data, and you just have to trust whoever created it. What I give you is the data plus what we call a provenance log, and you — or anybody — can independently verify, using cryptography, that it's authentic: that it came from Bank of America, that it hasn't been modified, that I didn't modify it, and that Bank of America still stands behind it. They're saying, yes, you're Dave; here's a piece of data that says we recognize you. So why does this apply to Twitter? Well, in the modern context, suppose you do something illegal on Twitter — and by illegal, I mean under the most aggressively libertarian interpretation of the First Amendment: you can use any word you want, you can say pretty much anything you want, as long as you don't post a direct, actionable threat against somebody — [00:16:43] Katherine Druckman: Murder you. [00:16:43] Dave Huseby: Yes, exactly — or post content that was obviously created by harming other people. Then we want to be able to know who that was. But as long as we are not making those kinds of threats or posting harmful content, we should be able to say whatever we want, and everybody should enjoy pseudonymity by default. So here's how this works — and I'm trying to sum up 20 years of general research and three years of very specific research into the solution; this is what the blog post is about.
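The "authentic data" idea Dave describes — a claim plus a tamper-evident provenance tag that anyone can check — can be sketched roughly like this. This is a toy Python model, not Cryptid's actual protocol: the issuer name is hypothetical, and the HMAC tag is a stdlib stand-in for a real digital signature (real systems would use asymmetric signatures such as Ed25519, so verifiers need only the issuer's public key).

```python
import hashlib
import hmac
import json

# Hypothetical issuer key, for illustration only. In a real system this
# would be the issuer's private signing key; verification would use the
# matching public key instead of a shared secret.
ISSUER_KEY = b"demo-bank-issuer-key"

def issue_attestation(claim: dict) -> dict:
    """Issuer (e.g. a bank or DMV) packages a claim with a provenance tag."""
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "issuer": "DemoBank", "tag": tag}

def verify_attestation(att: dict) -> bool:
    """Check the claim came from the issuer and has not been modified."""
    payload = json.dumps(att["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["tag"])

att = issue_attestation({"subject": "user-123", "verified": True})
assert verify_attestation(att)        # authentic and unmodified
att["claim"]["subject"] = "user-456"  # tamper with the data...
assert not verify_attestation(att)    # ...and verification fails
```

The key property Dave is pointing at: the relying party never has to trust the user handing over the data, only the issuer who created it.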
If you think about what Twitter actually needs to know, they don't need to know who you are at all. All Twitter needs to know is that some institution Twitter trusts knows who you are, and that you give Twitter some piece of data — it could be a big random number or whatever, and I'll explain that in a second — that allows them, at any point in the future, to cooperate with law enforcement if you should ever post something illegal. So I go to Twitter and say: you don't need to know who I am, and I can prove to you, using cryptography, that, say, Inclusive — a compliance-as-a-service company that does KYC — knows who I am. And by the way, should law enforcement ever come with a warrant and need to know who I am because I did something illegal on Twitter, here's a little piece of data you can give them; it walks back to Inclusive, and then I will be unmasked. So this is like perfect Fourth Amendment privacy. It's pseudonymity by default. It would allow people to be on Twitter and say whatever the heck they want, as long as they don't do anything actually illegal in the narrowest, most libertarian interpretation of the First Amendment. One last detail: that piece of data you give to Twitter has to be verifiably encrypted. This is a different kind of encryption — encryption that comes with a proof of who can decrypt it. So Twitter gets it and can independently verify that that piece of data can be decrypted by Bank of America or Inclusive or whoever. What this is, is a clever application of cryptography to align incentives in the game-theory sense — kind of like how Bitcoin is a game-theory hack. I'm going to take a quick aside here: something I think people don't realize about Bitcoin is that Bitcoin didn't actually invent any new technology.
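The "view token" handshake Dave describes can be sketched as a data flow. Everything here is illustrative: the provider name is made up, the "encryption" is a trivial placeholder, and the hash binding merely stands in for real verifiable encryption, which would encrypt the identity to the provider's public key and attach a zero-knowledge proof that the provider (and only the provider) can decrypt it.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class ViewToken:
    ciphertext: bytes   # identity, encrypted to the KYC provider
    provider_id: str    # e.g. "DemoKYC" -- hypothetical name
    binding: str        # proof tying the ciphertext to that provider

def make_view_token(identity: bytes, provider_id: str) -> ViewToken:
    # Placeholder "encryption" (XOR) so the example is self-contained;
    # a real system would use the provider's public encryption key.
    ciphertext = bytes(b ^ 0x5A for b in identity)
    binding = hashlib.sha256(ciphertext + provider_id.encode()).hexdigest()
    return ViewToken(ciphertext, provider_id, binding)

def platform_accepts(token: ViewToken, trusted: set) -> bool:
    """Twitter-side check: the provider is on the trusted list and the
    binding proof actually matches the ciphertext it was issued for."""
    if token.provider_id not in trusted:
        return False
    expected = hashlib.sha256(
        token.ciphertext + token.provider_id.encode()).hexdigest()
    return expected == token.binding

tok = make_view_token(b"Dave", "DemoKYC")
assert platform_accepts(tok, trusted={"DemoKYC"})
assert not platform_accepts(tok, trusted={"OtherKYC"})
```

The platform stores only the opaque token; only the KYC provider, under a warrant, can walk it back to a real identity.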
It's a clever rearrangement of existing cryptographic techniques and computer science to create a game-theory hack, so that everybody's incentives are aligned to do the right thing to make Bitcoin work. We're trying to achieve the same thing here, where Twitter doesn't actually know anything about the users — and that's on purpose, so that they don't have any real moderation capability. This would actually allow Twitter to fire their entire moderation staff, because their role now becomes much more that of a general communications tool, kind of like the phone system: they just take these verifiably encrypted things, verify that each person has been KYC'd by some company on their list, and each one of us gives them this verifiably encrypted piece of data. Should we ever do anything illegal, law enforcement can go and do their job. So it's sort of the American compromise between anonymity and having free speech while still having nice things. And it's the one thing nobody's talking about in all of these debates about the future of Twitter, because I don't think people understand that this technology actually exists. This is a problem people have been trying to solve ever since the internet existed — I can remember shitposting on Usenet, and then people tracking you down through your gateway and trying to find out who you were. This has been a problem since I started on the internet in the late eighties, early nineties, and it wasn't until just recently that we had the cryptography and the understanding to really make this work. And what we've built, we're open sourcing. Cryptid Technologies is a completely open source company. We're open sourcing all of the tools.
We're open sourcing all of the protocols. When we decided to make a company and go down this road, we wrote — and I think I've linked to them in the past — some articles that stated our purpose and our principles: the principles of user sovereignty. [00:21:56] Katherine Druckman: We've linked to them, yeah. [00:21:58] Dave Huseby: Yeah. So everything we've built is in line with that: open protocols, open file formats, strong open source encryption, always privacy by default, pseudonymity by default, consent-based revocation, consent-based power structures, that kind of stuff. So yeah, we're just now coming out of stealth, we're open sourcing all these tools — and what better way to introduce it to the world than to talk to Elon? So I wrote him a love letter. I hope he hears it. [00:22:28] Katherine Druckman: Right, coming back around to the intro, I guess. [00:22:31] Dave Huseby: Yeah, sorry, I'm a bit long-winded about this, and it's hard to shrink 20 years of trying to figure this out down into a general-audience explanation. But really, what it is: everybody would go to some KYC vendor and get their identity known — your name or whatever. They give you that data as authentic data, which means it has provable provenance that's independently verifiable. Then when you go to log on to Twitter, they actually send a challenge to you: hey, prove to me that you've been KYC'd by one of these companies — and they have a list that they approve. And you just say, yep, here it is: Bank of America knows who I am, the DMV of Nevada knows who I am, whatever. And then Twitter's like, hey, welcome — shitpost away. I think it would be really interesting for us to double down on our American tradition. And I just want someone to call my bluff.
I want Elon to call me tomorrow and just say, prove it, Dave. Because I would love to demo it — it is so cool. What we're doing is so cool; it blows my mind every day when I see what these new cryptographic techniques can do. [00:23:51] Katherine Druckman: So, I've had many opportunities to ask you questions about these things; Kyle has not. So I think I'd like to hand it to Kyle to see if he has any questions. Although I will say, the thing I agree with you most on here is that privacy is essential to maintaining democracy and all of those things. I think people tend to forget that, because with all of the changes and evolution in the way we communicate online, it's something that is easy to forget. But I do agree with it. Yeah. [00:24:29] Dave Huseby: I just want to push back on the evolution of our language here — it's just really stupid. People say, my political enemies are domestic terrorists. No, they are not; they just disagree with you. Political arguments in this country are now — what is the rule where eventually somebody mentions Hitler and it kills the thread? [00:24:54] Katherine Druckman: I can't remember — [00:24:55] Kyle Rankin: Godwin's law. [00:24:56] Dave Huseby: Yeah. So the new one now is that eventually they'll just call you a domestic terrorist. Disagreements online now tend to be, I'm going to report you to the FBI, I'm going to get you swatted, you're a domestic terrorist. This is ridiculous. [00:25:10] Katherine Druckman: Or the alternative — what do you call people who are far left? I don't know. Anyway, the cryptography, though. I wondered if —
[00:25:20] Kyle Rankin: Well, so a couple of questions came to mind, and I don't know how much you folks have already talked about this sort of thing in previous shows — so if I'm asking something that's already been covered, Katherine, feel free to veto it and we can move on to the next one. One of the first questions that came to my mind when I was listening to this was, I was wondering where you felt anonymity fits into this world. Because it sounds like you're maybe proposing that Twitter do away with anonymity in account creation — it sounds like you'd be asking for a stronger proof of identity than is currently being asked for to join that network. So I'm wondering, in your mind, where anonymity fits in. [00:26:13] Dave Huseby: Yeah. So in the context of Twitter, if you read between the lines — and actually you don't have to anymore; Elon Musk has been pretty explicit about it recently — the goal is to make transparency happen: things that are bots, that are not really humans, get marked as not real humans, and real humans get marked as real humans. And I think that level of transparency is important, simply because for many years we've seen false consensus created on Twitter through botnets. That's a real problem, because humans are social creatures, and people who don't think too deeply about certain topics — because they're busy with other things — default psychologically to, well, what does everybody else think? Oh, that seems to be the consensus; okay, I'm on board with that. And so if you can create false consensus on Twitter, it's a form of propaganda.
It's a way of manipulating people, because most of us just want to go along to get along — as I know I do on a lot of things I don't think too deeply about. And that can be exploited to change the outcomes of elections. I'm not going to make any direct accusations, but there's been a lot of evidence — look at the documentary The Great Hack; I don't know how accurate it is, but it suggests that Facebook was manipulated to change the outcomes of elections. So we want an anti-Sybil-attack mechanism built into Twitter. Sybil attacks are where I can have a hundred thousand accounts and make them all say the same thing, so anybody observing it will think a hundred thousand people believe that, even though it's actually one person. It's a signal amplification attack. So we want mechanisms against that. Now, where does anonymity or pseudonymity come into play? I think it protects people from doxing — well, not doxing, but canceling. I think it would open up the breadth of what's acceptable on Twitter. People might disagree with it, but you know what? This is the United States, and people say mean things. That's how it has always been. But there are some special cases outside of Twitter where actual anonymity is really important. I used to work tangentially with the SecureDrop project at the Freedom of the Press Foundation, and I was involved with the Tor Project a little bit when I was at Mozilla. That's a case where you have whistleblowers who want to be able to securely leak information in the interest of society, right?
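The Sybil attack Dave describes, and why tying accounts to verified-human credentials deflates it, can be sketched in a few lines. This is an illustrative toy, assuming each post carries an opaque credential identifier from the KYC step: a Sybil operator can mint many accounts but only holds one credential, so counting distinct credentials instead of raw accounts exposes the false consensus.

```python
from collections import defaultdict

# A Sybil operator runs 100,000 bot accounts behind one verified-human
# credential; one genuine user posts the same message independently.
posts = [
    {"account": f"bot{i}", "credential": "human-A", "text": "X is great"}
    for i in range(100_000)
] + [
    {"account": "alice", "credential": "human-B", "text": "X is great"},
]

# Group by message and count the *distinct humans* behind it,
# not the raw number of accounts amplifying it.
humans_behind = defaultdict(set)
for p in posts:
    humans_behind[p["text"]].add(p["credential"])

assert len(posts) == 100_001                    # raw accounts: looks like consensus
assert len(humans_behind["X is great"]) == 2    # distinct humans: just two
```

The raw account count is the "signal amplification" Dave mentions; the credential count is what the anti-Sybil mechanism would surface.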
That has always been one of the traditions in this country — we've always protected whistleblowers, or at least we try to. So being able to anonymously post — maybe Twitter's not the best place to do that, because you would be at most pseudonymous: to get on Twitter as a real person to leak something, there would be some other institution that knows who you are, and in some cases that's not ideal, especially in countries that don't have as robust a protection system as the United States does. You'd want to be able to dump this stuff anonymously. Look at all the leaks that Julian Assange got at WikiLeaks — that stuff is all given to them using SecureDrop. So there are times when we do want that, but I would not allow it to go straight to public consumption. If you want to leak data to, say, WikiLeaks, here's our SecureDrop, here's a way you can get it to us without revealing your identity — but that doesn't immediately go to public consumption. It doesn't get trending on Twitter, for instance. There's also — [00:30:31] Kyle Rankin: If you have more to say on that, please go ahead. No? I was just going to riff off of that and transition to the next thing. So it sounds like you're sort of proposing that there'd be stronger identity requirements to create a Twitter account, or otherwise some sort of — purgatory is not the right word — some sort of probation for accounts that aren't identified in some way. Maybe you can clarify that first before I follow up. [00:31:01] Dave Huseby: No, it's just that there would be a tiered system on Twitter. You can create an account all you want; it just won't be marked as a real human. It'll come off as a bot.
So you would be downgraded — or whatever — because if I look at an account like that, one that's not a real human, I'm going to assume it's some bot trying to sell me something, or it's propaganda, or somebody doing a signal amplification attack. And I'd be able to filter those out of my feed: I just want to see real humans, right? [00:31:31] Kyle Rankin: So, since you were at Mozilla, dealing with certificate issues at various points — I imagine, like most people who work on web browsers, you have some sort of CA-system background — it sounds almost like you're proposing something like the way we currently think of certificates: DV certs versus regular old certs versus no cert at all. You go to a website and you see, oh, it has no cert — well, I don't know. Oh, it has some kind of certificate — okay, cool, something happened. And then if they splurged, and they're not just using Let's Encrypt, and they went and bought that hundred-dollar cert or whatever and did the KYC — I guess that would be the blue check, where someone's not just getting the pseudonymous — anonymous — [00:32:31] Dave Huseby: Pseudonymous, not anonymous. [00:32:35] Kyle Rankin: Right, yeah — I can never say that word. So the middle tier would be, I have some sort of verification. But then the blue check — the reason people make such a big deal about that, I believe, is that the purpose of it would be to say, here's my actual identity: my first name, my last name, and, tied to that —
It's not just my first name and last name, but also my role, which has some significance for some reason. So it's not just that I happen to have this name; I have this name and I am the such-and-such at so-and-so, and that's been proven. So when I say something, it comes with the gravitas or authority of that role — at least that's the idea behind it. But yeah, it sounds like a three-tiered system, in a way, like the TLS system. [00:33:25] Dave Huseby: Yeah. So that brings up two really cool points. I'll start with the first, and the second is a great transition into the next part of what I want to talk about. So the way I see Twitter: if Elon calls me tomorrow and says, okay, Dave, fix this — here's what I would do with Twitter. Anybody can create an account, for any reason, any number of them. It starts off as a bot account; we just assume that. And there would be ways to filter based on this tiered system, so I could say I just want to see only real humans. Then, if I want to upgrade my account, I go and get KYC'd. And I assume Twitter would do a loss leader on this, because it costs money — I'm assuming they would say, use any one of our vendors, because they're already going to make a list of vendors they trust. So it's like, go to any of those; here's the link that gets you the free KYC process. What's important here is that Twitter doesn't do it. They're trusting a third party to do it for them. So there's a division of concerns, a division of labor, on purpose — this is like the principle of separating powers in our government. So then I'd go get KYC'd, and I'd come back to Twitter and say, I've been KYC'd; they know who I am.
And by the way, here's my verifiably encrypted — what we call a view token. So at some point that can be used to de-anonymize me. I would like to think that that is data Twitter doesn't ever keep online anywhere — it goes into offline storage somewhere, and the only time anyone ever goes back to that data is under judicial review. So, like, I did something illegal, a judge says, yes, here's a warrant; go to Twitter, get it, walk it back, figure out who they are, and charge them with the crime. Okay. So I come back to Twitter and say, yep, I've been KYC'd; here's my view token. And then immediately my account becomes a heartbeat account. Right? I have a heartbeat; I'm a human. Okay. And then I can make any number of those. I could make a thousand heartbeat accounts — or maybe there's some limit, maybe I can make 50, whatever, I don't know. Um, but they can have different levels of pseudonymity, right? So I could have a Mrs. Silence Dogood account, but I could also have a heartbeat blue-check account that says Dave, who's the CEO of Cryptid. That would be my real-identity account. That's the one that gets the blue check mark, because it's my role and my real name. But at the same time, I have any number of these heartbeat accounts where I can say whatever I want. You know, I could be one that is an elderly widow saying humorous things about the state of 18th-century America. So that's how I would do Twitter: I would make it a three-tiered system — bots, heartbeats, blue checks. Blue checks are your real public identity, where I would talk about my concerns and cryptography and all that stuff. And then on all the heartbeat accounts, I could speak my mind about politics, about whatever is going on with my local school board, that kind of stuff.
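The three-tier scheme Dave sketches here (bot by default, KYC'd "heartbeat" accounts, real-name blue checks) can be illustrated with a minimal reader-side filter. This is a sketch of the idea only; the `Tier` names and `filter_feed` function are illustrative, not anything Twitter actually implements.

```python
from enum import IntEnum

class Tier(IntEnum):
    BOT = 0         # default: no verification, assumed automated
    HEARTBEAT = 1   # KYC'd pseudonymous account: a real human, name unknown
    BLUE_CHECK = 2  # KYC'd real-name account tied to a public role

def filter_feed(posts, min_tier):
    """Keep only posts whose author meets the reader's chosen minimum tier."""
    return [p for p in posts if p["tier"] >= min_tier]

posts = [
    {"author": "spambot42",        "tier": Tier.BOT},
    {"author": "SilenceDogood",    "tier": Tier.HEARTBEAT},
    {"author": "dave_cryptid_ceo", "tier": Tier.BLUE_CHECK},
]

# "I just want to see real humans": drop everything below HEARTBEAT.
humans_only = filter_feed(posts, Tier.HEARTBEAT)
```

The point of the design is that filtering is the reader's choice, not a platform-wide ban: bot accounts still exist, they just don't reach readers who opted out of them.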
So, do you have any questions about that before I move on to the transition? [00:36:53] Kyle Rankin: Um, sort of. My next question — and we can table this if you're going to cover it — would be about the fact that Twitter is not a national service but an international [00:37:11] Dave Huseby: um, uh, network. [00:37:14] Kyle Rankin: In thinking about how this would be implemented internationally: certain governments are fans of having important services be state-run. So I could see certain governments having their citizens use KYC services that are state-run. And in that case, it's true that Twitter wouldn't know who you are without going through the extra step of asking — but what if the KYC vendor and the government are one and the same? That's my question. Could you maybe talk through that [00:37:58] Dave Huseby: threat model? Really good question. That's a really great question. This is a problem that we don't seek to solve, but it does exist in the world — as you touched on with certificate authority systems. Um, this is primarily a Western debate, because non-Western countries that use state-run services like that do so to censor and to enforce their dictatorship or whatever, right? Um, what I'm proposing is: let's just apply the American tradition to Twitter. We're going to set up a pretty hardcore line in the sand. But I recognize that a lot of Western countries — for instance, Germany, which issues digital ID cards with key pairs and such tied directly to your identity — might have some debate about this. Now, the way the certificate authority system has dealt with this — well, not always, but recently — is a global transparency system. So when credentials are issued or mis-issued, it becomes obvious, right?
It's a radical-transparency system. So I don't know exactly how you would do it in other Western countries. If you trust your government — say you're a Swedish citizen and you're okay with the Swedish government KYC'ing you — they do that anyway for your medical benefits and education and all that, I'm sure. I mean, one of the easiest KYC solutions here in the United States would be the DMV, right? They give us all driver's licenses. They identify all of us — they KYC all of us. And no, I don't think anybody would really care if whatever state you live in was your KYC source for being on Twitter, because we have Fourth Amendment protections: it would have to be under judicial review to de-anonymize you. Um, other countries obviously have different legal traditions — well, some religious traditions too, but mainly legal traditions — which don't have as strong protections for individual privacy in criminal proceedings. So I don't really have a comment on that; I'm American. I think what this does is give them a tool set to apply whatever the legal tradition in their country is. So if Twitter were to operate in the UK or in Norway, they have different laws around free speech and different laws around how criminal investigations happen. This would allow Twitter to operate in those countries according to their law. And I'm ignoring, obviously, the concerns about using VPNs and IP masking and so on — like, oh, I'm in the UK, but I'm really not in the UK because I'm hitting it from Argentina or whatever. We're just going to ignore that; I'm going to paper over that, because I don't think Twitter is in a place to try to solve it. Actually, I think at this point, if this toolkit went into place, Twitter could just say: look, we are a communications platform.
You know, uh, maybe part of the KYC process is a way to identify where someone is — the nation, the legal jurisdiction or something. I don't know; maybe that's part of the disclosure. I would argue not. I would argue that in the United States, all I have to do is prove that I'm a human. And that might be the dead giveaway that I'm an American, because everywhere else would have to require, you know, "I'm a UK citizen, so I'm under these laws," versus the default position that all the Americans enjoy. That's how I would look at it. It's a difficult problem, and I think what we really need to do is just acknowledge that it's difficult. What we really need is a set of tools that at least get us to the point where we have privacy by default and pseudonymity by default. Once you're there, then in each legal jurisdiction you can walk it back from there, right? So what this does is allow us to have absolute privacy, absolute pseudonymity, by default. In the American context, that's perfectly aligned with our Constitution; everywhere else, the disclosure of that view token or whatever could include other things — like "I'm a UK citizen" or whatever — for Twitter to operate there. And this would be up to negotiation between Twitter's leadership and those other countries. Uh, I actually like the idea that we would export our American values to as many places in the world as we can. This one in particular — there's a lot to be said about our country, good and bad, but I think this one is truly exceptional, and we would do well to try to encourage its adoption elsewhere. Whether that would happen, I don't care. I just don't think we should feel bad about our view on free speech and the First Amendment and the Fourth Amendment. Everybody shames us for it, and I'm not going to feel bad about it anymore. I think ours is an enlightened view.
[00:43:12] Katherine Druckman: I think my concern with that explanation is that it seems to run the risk of creating kind of a digital caste system. You have the free world and you have the not-free world, and you have huge parts of the world that would just not be able — or, for the most part, willing — to participate in a system that authenticates them as humans. So you have legitimate humans from certain countries around the world who may or may not trust their government to issue such an ID, and then you have, you know, probably the majority of the rest of the world that does not. I'm glad you asked that question, because I think that's a pretty serious consideration that I can see being very problematic. [00:44:02] Dave Huseby: Well, that's the web today. [00:44:05] Katherine Druckman: And I wouldn't want to be authenticated. [00:44:08] Dave Huseby: Yeah. I mean, that's the web today. That's why the Tor Project exists. But we're talking about Twitter, and what Twitter can do — what the leadership of Twitter can do, what the engineers at Twitter can do. They are a legal corporation that needs to follow the laws. Okay. Now, the Tor Project is an edgy project that provides tools for anybody, anywhere, and they don't make a distinction. I don't believe Twitter can operate like that. I think we need to acknowledge that other countries exist, other legal structures exist, and that Twitter should operate under that assumption. This is just a toolkit that would allow them to do so. And — in somewhat of a selfish position — I'm an American, and Twitter's an American company. I think it should have American ideals baked into it. I really do. I personally want to be able to go on Twitter and actually have an honest political debate.
I want to read Twitter and hear an honest political debate. I want to hear what people really think when they're not afraid of losing their jobs, or of being protested, or of being attacked personally for their opinions. Right? What's in my head, I should be able to share with you — this is my opinion, and you can take it, or dismiss me and ignore me. But I think it's improper for people to hear my opinions and then try to get me fired, or get me swatted, or attack my kids, or get me canceled. Anyway, I think that's a destructive thing for our society. It destroys our society. [00:45:48] Katherine Druckman: Isn't that a separate issue, though? Couldn't you make — isn't pure, true anonymity better in that scenario? Like, why do you necessarily need [00:45:59] Dave Huseby: the, [00:46:00] Katherine Druckman: yeah — I mean, I get where you're going. I just wonder, if you're truly out there putting out ideas that you don't want to be associated with yourself, why they need to [00:46:12] Dave Huseby: come [00:46:13] Katherine Druckman: from a verified human. [00:46:16] Dave Huseby: Because, um, there are things you can do. The goal here is we want nice things, right? And nice things come from a balance of laws that are applied equally to everybody and designed through some kind of consensus, plus the ability to protest those laws and have them changed. Right? Nice things fall in that gap — in that balance. I think what's wrong with this whole debate right now is everybody's saying, you know, you can't authenticate everybody, because then I won't be able to come on and run my botnet and create false consensus and do all those things. Right? And then everybody — [00:46:59] Katherine Druckman: does. I get that, right.
[00:47:02] Kyle Rankin: And [00:47:03] Dave Huseby: so I think, for us, if we really wanted to double down on our American tradition, we would expect people to prove that they're human and mark them as such, so that each one of us gets to decide to what level we want to be exposed to what information on the internet — or at least on Twitter. Right? If I don't want anything from bots, I can just say, no bots, please. If I want to see only blue check marks, it's only blue check marks, please. Right? I want to see people who are using their real names. But I do want that middle ground, where there are real humans who aren't falsely creating consensus but are able to speak their minds, because there are no real-world consequences — it's not tied to the real world. I think people should be able to speak their [00:47:49] Kyle Rankin: minds. [00:47:49] Katherine Druckman: I like the idea of — let's say, in the Twitter example — a blue check mark for anybody who wants one. I like that idea in general. Anyway, sorry, Kyle, you have a question. [00:47:59] Kyle Rankin: Oh, well, I had a question, and maybe my question will take a minute to ask, but I'll set the stage by saying I'm going to ask about revocability of your pseudo-identity. At least two reasons came to mind. The first: when I think about identity on the internet, I keep going back to a section in Snowden's autobiography, Permanent Record — which, I think, is in many ways what led to the title. He talks about coming of age on the internet in the 1990s, being a teenager or young adult talking with other people on the internet in the late nineties. One of the things that commonly happens when you're coming of age, when you're becoming an adult, is that you're starting to form your opinions about things. Maybe they're very fervently felt, but they're not completely thought through — usually they're half-baked, and it takes a while into adulthood before they start getting fully baked. Anyway, what he found valuable back then was the fact that he could post under a pseudonym on a forum and try things out: I'm going to be a hardcore libertarian today, or I'm going to be a communist tomorrow in a different account — try those suits on and see which one fits. But more important than trying the suit on, throwing those ideas out there, seeing how they're refuted, and then changing his mind, was the fact that he could take that suit off and throw it away when he needed to. So that's the first reason I would ask about revocability of this identity. Because if I have a lot of pseudo-identities on, say, Twitter — identities that ultimately could link back to me, but don't without extraordinary circumstances — I might want to revoke one of them because I don't like how that suit fits anymore. Say I compartmentalized all of my hardcore communist ideology in this one account and was trying that on for size. Then later on I decide: you know what, I don't believe that anymore. I tried it on, people got into an honest political debate with me and convinced me otherwise, and I don't want to be associated with those thoughts for the rest of my life as part of my permanent record, because they're not really my ideas anymore. I was trying them on for size, and I realized I need the freedom to change my mind. So, all of that to say — that's the first part of why I'd ask. And I'll remember the next one later.
Um, so how would you handle revocability of this link? Because — well, no, I'll tie it together. So: how would you handle revocability of the link? And here's the other reason I think it's important. One is to have the freedom to change your mind without your previous thoughts being associated with you forever — and then having to explain, ten years later, that you changed your mind because you're more enlightened. We even see now how, for example, many founding fathers changed their minds about a lot of opinions over the course of founding the country; for some people that's sufficient, for others it's not. The other thing, going back to the CA system, is the notion of trust agility: if a certificate authority has been proven to be untrustworthy, you want the ability to revoke trust in that entity. In the case of certificate authorities, often you don't trust them because either their security is so poor that they could issue certificates that seem legitimate but were actually obtained by someone who hacked in, or their verification was so lax that it's very easy to get something that seems authentic, that they had vouched for. So for both of those reasons, I was wondering if you'd talk about revocability — either so you have the freedom to change your mind, or for when the party holding your identity in escrow is [00:52:17] Dave Huseby: found to be untrustworthy. Yeah, so this is cool. I could write a PhD thesis on all of this, but let me see if I can answer some of these really quickly and get to the meat of it. The first question was: what's my road to redemption, right?
Like, there's a religious principle that everybody should always have an option to redeem themselves — to change their minds, to get right with the world, pay their debt, whatever — and to be respected in that: I'm no longer that person; that was an indiscretion of youth; I'm a different person now. Um, that is easy to do with this system. And I'm going to start talking a little bit about technicals here; I'll try to keep it as light as I can. Like I said, we hadn't solved this problem up until this point, and it took several key innovations — reusing, or misusing, existing cryptographic techniques — to get fully decentralized (meaning no central control, no silos, nothing), independently verifiable provenance on any data. What that means is that if I want to create any kind of data — KYC data, any file like a photograph, anything like that — it needs to have an associated log, a provenance log. And the first thing you do when you create a provenance log is create a key pair. Think of the log as a mini blockchain, although there's no consensus, no network or anything — it's literally just a file of linked records. And the reason I call it a provenance log — it's different from, say, a web server's log file — is that each record contains a cryptographic link to the previous one. So it's more like a blockchain than a log on your computer. Um, and it has crypto puzzles in it, so that the next event — the next item added to the log — has to solve a previous crypto puzzle. That way you can project security into the future. And then we use this — well, it's not new, it's a fairly old technique, but it's becoming popular now — called STROBE, which is a cryptographic framework.
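The hash-linked "provenance log" Dave describes can be sketched in a few lines. This is a minimal illustration only, assuming SHA-256 as the link function; the real design he outlines also includes signatures, key rotation, and the forward-looking crypto puzzles, which are omitted here.

```python
import hashlib
import json

def _digest(record: dict) -> str:
    """Canonical SHA-256 digest of one log record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(log: list, payload: dict) -> None:
    """Each new record commits to the digest of the previous one."""
    prev = _digest(log[-1]) if log else "0" * 64
    log.append({"prev": prev, "payload": payload})

def verify(log: list) -> bool:
    """Replay the chain: tampering with any record breaks every later link."""
    prev = "0" * 64
    for rec in log:
        if rec["prev"] != prev:
            return False
        prev = _digest(rec)
    return True

log = []
append(log, {"event": "create", "pubkey": "k1"})
append(log, {"event": "rotate", "pubkey": "k2"})
```

Because each record embeds the hash of its predecessor, anyone holding the file can independently check its integrity, with no network or consensus involved — which is the "mini blockchain without a blockchain" property.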
It's an approach where you're constantly feeding the new data back into the old data. It uses a sponge construction, so you can add some data to a file, and then you can get what's called a PRF output — a pseudorandom value that's basically the hash of all the data up to that point. Then you can add more and get another value out. What that does is enforce order: the order in which data was added, and what data was added, become verifiable. More importantly, it lets you checkpoint the state of a file as it's constructed, and that's what lets you create a provenance log. I create a key pair — here's the first record — and then I get, basically, the hash of that state. Then I want to rotate my keys, so I add another record that rotates my keys using the crypto puzzles, and I get a new state. If I take a photo and I want to link that provenance log to the photo, I mix the photo into the provenance log — there are different ways to do that. Essentially what this provides is key history. And more importantly, the use of cryptography makes me what we call a controller: I control it, I possess the keys, I know all the secrets, I'm the only one who can update it. So I control that; I own that. Now, to get to revocation — what we really needed was to have those logs, and then a way to bless them the way the certificate authority system does. How do you turn a provenance log into the functional equivalent of an X.509 certificate? And how do you do it in a very decentralized way — meaning there's not one company, you don't have to use a certain storage mechanism, you don't have to use any particular blockchain or service — decentralized like email?
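The absorb-then-checkpoint behavior of the sponge construction can be imitated with a running hash. To be clear, this is a toy stand-in: real STROBE uses a Keccak duplex sponge, not SHA-256 chaining, and the `RunningState` class here is purely illustrative.

```python
import hashlib

class RunningState:
    """Toy stand-in for a sponge/STROBE transcript: absorb data, then
    squeeze out a checkpoint. Order matters — absorbing the same items
    in a different order yields a different checkpoint."""

    def __init__(self):
        self._state = b"\x00" * 32

    def absorb(self, data: bytes) -> None:
        # Fold the new data into everything absorbed so far.
        self._state = hashlib.sha256(self._state + data).digest()

    def checkpoint(self) -> str:
        # The "PRF output": a digest over all data up to this point.
        return self._state.hex()

a, b = RunningState(), RunningState()
a.absorb(b"create key pair"); a.absorb(b"rotate keys")
b.absorb(b"rotate keys");     b.absorb(b"create key pair")
```

Here `a` and `b` absorbed the same two records in opposite orders, so their checkpoints differ — which is exactly the "order enforces what was added" property the provenance log relies on.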
Um, what it really takes is a root of trust. In the certificate authority system, that's the root certificate authorities — members of the CA/Browser Forum. And you called it correctly: I do have a background in the certificate authority system. But instead of having this list of organizations that we all have to decide to trust, we now have this thing called a blockchain. So what we designed is a system that doesn't require any one of them. It doesn't require a blockchain, first of all — so this isn't necessarily a blockchain solution, and it's not a cryptocurrency solution. It differs completely from NFTs and web3: there are no cryptocurrencies, no fees, no minting, none of that. But blockchains are very handy for this, because they can replace the entire root certificate system. Bitcoin specifically can, because of its size and the fact that it's so resilient to attack, among other things. Um, one of the things people don't realize about Bitcoin — currency aside, let's just ignore all of that — is that it's a really, really handy way to have a clock that everybody agrees on. The Bitcoin network is a clock that ticks roughly every ten minutes, and each tick is unpredictable: the hash, the identifier for that tick in time, is somewhat random — almost impossible to predict. If you could predict it, you could mine every block, basically. The other cool thing is that we can attach arbitrary data to each tick, each point in time. That allows us all to agree on the ordering of events, and that's why the currency works: I paid you, you paid Katherine — it's all recorded on the blockchain, in order. Um, and a Bitcoin transaction has exactly 80 bytes where we can put arbitrary data.
So we had to solve: how do you get potentially billions or trillions — 10^18 — off-chain updates to these provenance logs, each creating a new state, recorded in Bitcoin in a way that everybody can verify? And we looked for ways to do this. Microsoft's ION system for identity uses what's called Sidetree, a form of Merkle tree; it sums up the state and puts it into a Bitcoin transaction every so often. That's one way of doing it, but it doesn't scale as well as our approach does. Um, one of my co-founders was going back through old cryptography and found these things called cryptographic accumulators, and then extended the use of them. And this is what I meant by misuse — sort of a hack. I don't want to say it's anything less than good cryptography and math; it's just that these weren't originally designed for this, but they solve this problem. What they allow us to do is take those new states. I have a provenance log for my identity, you have one for yours. I have one for each of my photos as I take them; you have them for every one of your mechanical calculators, tracking ownership. As I change the states — I rotate keys, or I edit a photo or whatever — it creates a new state in the provenance log, and it creates that PRF, that STROBE sponge output, that's basically the hash of that state. We put those into these accumulators, and the accumulators are only 48 bytes long. Then there's a 32-byte hash of the public key associated with the accumulator. Together that's 80 bytes, and we put that into the Bitcoin blockchain. What this effectively does is give a single Bitcoin transaction effectively unlimited capacity for these off-chain records. It doesn't create cryptocurrency tokens.
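The byte arithmetic above works out exactly: a 48-byte accumulator plus a 32-byte key hash fills the 80 bytes of arbitrary data a standard Bitcoin OP_RETURN output carries. A minimal sketch of the packing, assuming those sizes as stated in the conversation (the `op_return_payload` helper is hypothetical):

```python
import hashlib

def op_return_payload(accumulator: bytes, pubkey: bytes) -> bytes:
    """Pack a 48-byte accumulator plus the 32-byte SHA-256 hash of its
    associated public key into the 80 bytes of data a standard Bitcoin
    OP_RETURN output allows."""
    if len(accumulator) != 48:
        raise ValueError("accumulator must be 48 bytes")
    payload = accumulator + hashlib.sha256(pubkey).digest()
    assert len(payload) == 80  # 48 + 32: fits exactly
    return payload

payload = op_return_payload(b"\x01" * 48, b"example-public-key")
```

Because the accumulator is constant-size no matter how many provenance-log states it summarizes, one 80-byte anchor can stand in for an arbitrary number of off-chain updates — which is the scaling claim being made here.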
Like you get on Ethereum — and it's not about trading Bitcoin, actually. It's just that we can have these off-chain records that are fully decentralized. I have mine, you have yours, Katherine has hers. There's no central storage, no central coordination. But as I live my life and change the state of all the provenance of me and my data — and Katherine does, and you do, Kyle — we're all hitting these anchoring services (which, by the way, we're going to open source, so you can run your own), and they record in Bitcoin. Or they can record in Ethereum. Or, if it's inside a corporation — say, access badges to get into the building — it can be a SQL database or an LDAP server or something. It just has to have public storage and an ordering of events. In the public, global context, Bitcoin is the most convenient way to do this, and it was actually originally designed to do this — although, again, it can work with any blockchain or any storage mechanism, as long as there's a clock and storage that's public and readable in the quote-unquote domain of trust. Right? So, to get to revocation: what the accumulators give you is essentially a set proof. Given an accumulator, you cannot know what numbers went into calculating it. But if I give you an accumulator and my provenance log, you can replay the provenance log out to a certain state, get the hash of that state, and see that the accumulator has my value in it. Okay? What that tells you is that my provenance log existed in that state at the time of that Bitcoin transaction. So my identity existed; the file — the photo I took — existed; and I'm the first person to assert ownership rights over that photo. So it is actually mine; I'm the one who took it. Right. And those set proofs can be used to do the opposite.
So let's say you are — let's pick a random company... well, I won't pick on any specific company. Let's just say you're a credit rating agency. Okay. And you want to publish everybody's credit reports to each person and say: we don't actually want to store your personal information, because that's kind of like toxic waste, right? So we're going to do the credit check on you, produce a report, and give it to you as authentic data. What I need to do as that company is this: if I find out that the credit report is now false in some way — you skipped out on all your bills, or you declared bankruptcy, and your credit score went down — I need to be able to revoke it in a way that anybody you would give your credit history to would know. The way I do that is, when I gave you your credit history, I took the hash of its provenance log and put it into an accumulator that is a non-revocation set. If it's included in there, then your credential — your credit history — is still valid, still authentic; I haven't revoked it. But one of the cool things about these accumulators is that I can do batch updates. So if you and 10,000 other people out there did something today that messed up your credit, I batch them all up, remove everybody's credit history from that accumulator, and publish a new one. Then I anchor that update in my provenance log — in, say, Bitcoin. Your credit history has an associated provenance log, so when you try to get a loan or rent an apartment, the person asking for your credit check will want to verify that it's authentic. Well, the provenance log says: this was issued by Equifax, or Experian, or one of them; it was issued and anchored here; and the revocation set is here, right?
And they would be able to immediately go and check whether the credit history is still in that set — that's part of the verification process. So what this allows me to do as the credit rating agency is give you your credit history, and as soon as it changes, I just revoke it, and I can re-issue it to you automatically. I mean, we have apps, right? I could just push out a new one that is in a new set, while your old one has been removed from the old set. So you cannot use your old one, but your new one is valid — it's in a non-revocation set that I just published. So what this allows me to do as a company is push data out into the world, protect my brand, protect the integrity of my services and the data I create — and should anything about that data change, I can go and revoke it and signal to the whole world: I no longer stand behind this. Okay. Now, here's how this works in the context of Twitter. If Twitter suddenly became like Secure Scuttlebutt — where everything associated with an identity is linked to the previous things, and you have a flow of information all linked to a cryptographic key pair — I would be able to revoke an identity, because it's mine. I could remove it. And as long as I didn't link that identity to any of my other identities, there would be no way for anybody to link them themselves. And there are techniques for that. I can do cryptographic tricks — like what's called a commitment, where I publish some data that doesn't reveal the link between this one identity and my real identity, or any other of my identities, but should I ever want to prove it was me in the future, I can reveal the secret in that commitment. I'll produce a pre-image, or the secret key or whatever, and it will prove cryptographically that I have been that person all along.
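The commitment trick at the end — publish now, optionally reveal later — has a standard minimal form: a hash commitment. This sketch assumes SHA-256 and a random blinding value; it's one simple instantiation of the general idea Dave describes, and the function names and the example link string are illustrative.

```python
import hashlib
import secrets

def commit(link: bytes) -> tuple[bytes, bytes]:
    """Hash commitment: publish `c` now, keep `r` secret.
    While `r` stays private, `c` reveals nothing useful about `link`."""
    r = secrets.token_bytes(32)  # random blinding factor (hiding)
    c = hashlib.sha256(r + link).digest()
    return c, r

def open_commitment(c: bytes, r: bytes, link: bytes) -> bool:
    """Later, revealing (r, link) proves `c` committed to `link` all along."""
    return hashlib.sha256(r + link).digest() == c

claim = b"SilenceDogood-is-controlled-by-Dave"
c, r = commit(claim)   # publish c today, say in a provenance log
# Years later, to claim the pseudonym, reveal r and the claim:
assert open_commitment(c, r, claim)
```

Nobody can work backward from `c` to the claim without `r` (hiding), and the committer can't later open `c` to a different claim without finding a hash collision (binding) — which is exactly the "prove it was me, but only if I choose to" property.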
It's interesting, what you brought up about the nineties. So I've been an open source developer forever, but one of the problems I face right now in my career is that I can't prove it. I used to commit code changes all the time anonymously, under false identities. I always have, because cryptography has always been really interesting to me, and I've always loved the idea of being secret — nobody knows who I am. And so all of my open source contributions, over the 25 years I've been participating in projects, I've done under false identities — random emails, false names. In fact, I still have a bunch of them today that I still use on occasion. Um, and the problem is, I didn't have any way of tracking key history like we do in provenance logs, and I had no way of making these commitments to link any of those identities. And I've lost keys — I've had laptops die, I've lost old keys. I'm sure you've lost GPG keychains and don't have those keys anymore. So I can't ever produce a proof that those contributions are mine. I claim it, and people just have to trust me, but there's no way for me to prove that those contributions to the kernel or whatever were actually me. I don't have that old data anymore. And that's why this provenance log — that's why I say I've been working on this for 20 years. It's: how do I contribute anonymously and still, in the future, be able to prove to a prospective employer or an investor that, no, I really have been doing this for this long? I can produce a cryptographic proof that that was me — I am a kernel developer, I am this, I am that — without actually revealing which one of those contributors I actually was.
So anyway, the key thing that we've innovated on, that solves problems nobody else has solved, is this replication-at-scale thing, using accumulators and these tiny proofs. The best digital identity systems in the world right now only go up to, I dunno, a couple of hundred thousand, a million identities, and Twitter has what, 200, 300 million active daily users, maybe 500 million? What we are doing is the only system, I think, that could do it, right? We can issue revocable credentials at 10 million per second per CPU, and that was on a four-year-old laptop. And the revocation sets are so small and the decentralization is so great that it's the only thing that could easily go to global scale. Like, we've done the back-of-the-napkin math, and of the 8 billion people on the planet, everybody could have a billion of these things and the revocation data would be a fraction of what everybody else's is. You know, the best existing technology is these things called bit-vector revocation sets, and if you gave a digital ID to everybody on the planet, it's something like 78 terabytes of data that everybody would have to have access to. Using our technique, it shrinks down to like three to five gigabytes, something you could easily have on your phone. Everybody could have a copy. So, anyway, what were you gonna say, Kyle? [01:10:18] Kyle Rankin: Oh, well, what I was wondering is, it seems to go back to the Nevada DMV example. 
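The bit-vector revocation sets Dave compares against can be sketched directly: one bit per issued credential, set to 1 on revocation. The sketch makes the scaling problem visible — the published set grows linearly with the number of credentials ever issued, which is what accumulator-based schemes avoid. Class and method names are illustrative, not from any particular standard.

```python
class BitVectorRevocation:
    """Classic bit-vector revocation registry: one bit per credential.

    Credential i is identified by its issuance index; bit i set to 1
    means revoked. Verifiers must download the whole vector, so its
    size is capacity / 8 bytes regardless of how few are revoked.
    """
    def __init__(self, capacity: int):
        self.bits = bytearray((capacity + 7) // 8)

    def revoke(self, index: int):
        self.bits[index // 8] |= 1 << (index % 8)

    def is_revoked(self, index: int) -> bool:
        return bool(self.bits[index // 8] & (1 << (index % 8)))

reg = BitVectorRevocation(1_000_000)  # a million issued credentials
reg.revoke(42)
assert reg.is_revoked(42)
assert not reg.is_revoked(43)
# The verifier's download grows with issuance, not with revocations:
assert len(reg.bits) == 125_000  # 1M credentials -> 125 KB even if 1 is revoked
```

At a million credentials that's a tolerable 125 KB; at billions of credentials per person it is not, which is the gap the accumulator approach claims to close.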
Um, so if I'm creating an identity and I have the Nevada DMV vouch for that identity, the purpose of doing this in the Twitter example, at least in that use case, is that if my pseudo-identity does something naughty on Twitter, then the authorities can come, with all proper jurisprudence and with a warrant, and go to Twitter, who then can go to the Nevada DMV and get my identity. [01:11:02] Dave Huseby: Yeah, well, Twitter doesn't do that. They just hand over that token thing, right, the verifiably encrypted thing. They hand it over to law enforcement and it identifies where it came from. It says, oh, I came from the Nevada DMV. So then they go to the Nevada DMV. It's important that Twitter gets out of the business of enforcing speech. That's really the point. [01:11:22] Kyle Rankin: Sure. So in that case, remind me the name you're assigning to the folks that are verifying identity, just so I can use the right terminology. [01:11:35] Dave Huseby: Oh, the KYC vendors. Like, know your customer, I'm saying. [01:11:39] Kyle Rankin: Do you have a name you're applying to that, or can I just... [01:11:42] Dave Huseby: No, they're just anybody who does a legal process of identifying you. There are company agreements, there are organizations. [01:11:48] Kyle Rankin: So I'll just call them escrow agencies, because right now they're acting as identity escrow. And the reason I say that is because, just like any escrow agency, they are holding onto something that they have control over, that it sounds like you don't necessarily have complete control over for this to work. But also the Twitters of the world that would be using this can't see it at all. They just trust this third party, and because they trust that third party, they know that that information is retrievable from the third party. 
That's the thing, is that it's in their hands, it's under their escrow. So that's why I'm asking about replication: because one property of this is that if I'm someone who's doing something naughty on Twitter and I don't want to be identified, and I have this level of verification through this third party, they do have the ability to unmask me, because they have that data. And so I'm wondering whether replication in your case requires that that third party also be trustworthy, and if there's some mechanism for that escrow agency, as it were, to then say: we have erased the link between your real identity and [01:13:13] Dave Huseby: your pseudo-identity. Yes. Yeah. So, um, I think it's important to point out, getting back to game theory and incentives again, that the Twitters of the world get to choose which identity providers, or you know, KYC providers, they trust. Okay. So, you know, the CA/Browser Forum has this problem, right? The certificate authority system has this problem. There have been state-run telecoms, not to pick on any one in particular, I won't identify them, but I think we all know who they are, state-run telecom agencies that mis-issued TLS certificates for well-known sites like Google, Facebook, whatever, so they could man-in-the-middle their citizens. Okay. So Twitter would want to carefully vet these KYC vendors, and it's in their control to decide which ones they trust and which ones they don't trust. And that's their role in this. So they get to choose which ones they're going to trust, they present the list to all of us, and all of us get to choose which one we're going to use. The revocation piece, you know, keeping things unlinked... 
Um, you know, I could get into the technicals on this one, but what you're really doing... [01:14:52] Kyle Rankin: I'm just wondering how, from a high level, it would work. Let's assume that all the math behind it enforces this and trust that you're doing it correctly. But if I'm a person that wants to revoke my identity, do I need to trust that my escrow agency [01:15:08] Dave Huseby: deleted it? [01:15:09] Kyle Rankin: ...or not? And then, do I have the ability to game that? Say I did something intentionally illegal and then revoke my identity, for instance, [01:15:20] Dave Huseby: before the hammer comes down. Yeah. So you've got to remember that there are several things here. The KYC vendor verifies who I am and gives me verified data, you know, my identity as authentic data. Okay. Using the authentic data, I then prove to Twitter that I have been KYC'd, and I can create as many accounts as I want. Now, there's game theory here too. Like, maybe we don't want people making a hundred thousand accounts that are heartbeat accounts, right? Because then what's the difference between those and bot accounts? So maybe the first account is free, or the first five are free, the next five cost you $50 each, and the next five cost you $5,000 each, right? There could be graduated costs to having lots of these identities. But each one of those, if they get the heartbeat, is linked to a cryptographic proof that you have been KYC'd, but not necessarily to the KYC credential itself. So Twitter doesn't link them, and more importantly, the KYC vendor can't link them either. They're fully under my control. But to use an account on Twitter, I have to give them a piece of data that will reveal my KYC data at the vendor. Okay. 
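The escrow flow described here can be sketched at the data-flow level. This is a deliberately simplified model: the "verifiably encrypted blob" is stood in for by an opaque random reference token that only the issuing vendor can resolve, and all names (`KYCVendor`, `Platform`, the handles) are hypothetical. The point is who holds what: the platform stores only an opaque token plus the vendor's name, never the identity itself.

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class KYCVendor:
    """Holds the only link between escrow tokens and real identities."""
    name: str
    _escrow: dict = field(default_factory=dict)

    def issue_token(self, real_identity: str) -> bytes:
        # Stand-in for the verifiably encrypted blob: an opaque random
        # reference that only this vendor can resolve back to a person.
        token = secrets.token_bytes(16)
        self._escrow[token] = real_identity
        return token

    def unmask(self, token: bytes, warrant: bool) -> str:
        if not warrant:
            raise PermissionError("court order required")
        return self._escrow[token]

@dataclass
class Platform:
    """Twitter-like service: stores only (opaque token, vendor name)."""
    accounts: dict = field(default_factory=dict)

    def register(self, handle: str, token: bytes, vendor: KYCVendor):
        self.accounts[handle] = (token, vendor.name)

vendor = KYCVendor("Nevada DMV")
twitter = Platform()

token = vendor.issue_token("Alice Example")
twitter.register("@hedgehog", token, vendor)

# Law-enforcement flow: the platform hands over the opaque token, which
# names its issuer; the vendor resolves it. The platform never sees the
# identity, so it is out of the business of knowing who anyone is.
tok, vendor_name = twitter.accounts["@hedgehog"]
assert vendor_name == "Nevada DMV"
assert vendor.unmask(tok, warrant=True) == "Alice Example"
```

In the real scheme the token would be verifiably encrypted, so the platform can check it is well-formed without a round trip to the vendor; the custody split is the same.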
And the way I've always seen this is that these KYC companies are actually going to be, I think, fiduciaries, really, where they are there to act in my best interest, but they are also bound by law to support law enforcement and all that. So there's a give and take, kind of like your attorney, right? I think this might be a thing a lot of lawyers get into, because you have attorney-client privilege, but that can be broken under certain circumstances. Right. So we have existing systems for this. I think it's important to point out that the KYC vendor who gave me my KYC data doesn't know about all of the accounts I created associated with that data, and Twitter doesn't know all the different accounts either, because the verifiably encrypted data I give them is presented differently each time. It's a technique in proof presentation where they're unlinkable across time and space. So I can interact with the service multiple times using the same authentic data as the backing for my access to it, but Twitter itself cannot link my presentations. They can't be like: oh, that's that person, that's the same person, that's the same person. And so Twitter doesn't know that those accounts are linked. Which actually raises an interesting question: how would you enforce heartbeats, keeping people from having a hundred thousand heartbeats? Oh, I know how. So one of our other inventions is a limited-use credential. It's like a punch card: a thing you can get and only use so many times, and you can do it in a way that is not linkable. You can just prove you still have punches. And if you've used it five times, then it's invalidated, right? And so there's some game theory here we can do. 
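The punch-card idea can be approximated with a simple sketch. A big hedge up front: the real limited-use credential proves "punches remain" in zero knowledge from a single credential; this toy stands that in with a batch of independent one-time tokens, which enforces the same use limit but gets unlinkability only if the issuer does not record which batch went to whom. Names are hypothetical.

```python
import secrets

class PunchCard:
    """Limited-use credential sketch: a batch of N one-time tokens.

    Each punch is an independent random token, so individual punches
    don't reveal which card (or holder) they came from. Once all N
    are spent, the card is exhausted and no further use is possible.
    """
    def __init__(self, punches: int):
        self.unspent = {secrets.token_hex(16) for _ in range(punches)}
        self.spent = set()

    def punch(self) -> str:
        """Spend one punch; raises once the card is used up."""
        if not self.unspent:
            raise ValueError("card exhausted")
        t = self.unspent.pop()
        self.spent.add(t)
        return t

card = PunchCard(5)       # say the first five accounts are free
for _ in range(5):
    card.punch()          # one account created per punch
try:
    card.punch()          # a sixth attempt fails: the card is invalidated
except ValueError:
    pass
```

The graduated-cost idea from the transcript then maps to issuing further punch cards at increasing prices.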
Or some technology we can use to limit the number of accounts you could create with a heartbeat. It's not fully baked; I mean, it's 99% baked, there are a couple of corner cases we'd have to work out. But like I said, this is a narrow use case of what we call the authentic data economy. This is how we're going to do authentication, I think, for everything in the future. It reverses the arrow in e-commerce, and what I mean by that is we're no longer sending data to code. Like, I'm never giving my data to Amazon. Instead, Amazon sends me verifiable computations that I execute privately over my private data to produce cryptographic proofs. So sharing is obsolete; that's the coolest part here. We've scaled the issuance of authentic data and revocation of that authentic data to, you know, eight, ten orders of magnitude bigger than anything that exists in the world today. And that allows us to touch basically every part of the economy. We can do NFTs without a cryptocurrency. This essentially automates third-party licensing of intellectual property. This makes every person who takes a photo or records audio or films a movie into a media company, because you immediately anchor your creative provenance log for every photo, every video, everything. And the abstraction of anchoring using an accumulator, turning the transaction throughput rate of, say, Bitcoin effectively infinite for all of this, means that anchoring is basically free, because we can amortize billions and billions of state updates into a single 80 bytes in a single Bitcoin transaction. So asserting your rights over your digital data and your digital life and everything like that becomes a ubiquitous thing that is free. And that's our distinction from, say, what the web3 people are trying to do right now: in my opinion, they're building new silos. These are new Googles, new Facebooks, right? 
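The amortization idea — folding arbitrarily many state updates into one small on-chain commitment — can be illustrated with a Merkle root, the simplest accumulator-like construction (the transcript's actual scheme uses a cryptographic accumulator, which additionally supports compact membership proofs; this sketch only shows the amortization).

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold any number of state updates into one 32-byte digest."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd tail
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# A batch of provenance-log updates amortized into a single digest,
# small enough to fit in one Bitcoin OP_RETURN output regardless of
# how many updates it commits to:
updates = [f"state-update-{i}".encode() for i in range(1_000)]
root = merkle_root(updates)
assert len(root) == 32
```

One transaction per batch means the cost per update approaches zero as the batch grows, which is the sense in which anchoring becomes "basically free."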
Portability of data linked to a certain cryptocurrency or blockchain is zero. Like, I can't start on one and move to another one, or if I can, I have to use some cross-chain atomic swap or something. And I just don't like the idea that I'm going to have to pay fees for every little thing, and I don't believe they're going to get scalability high enough to make those fees cheap enough that it's essentially free for everybody. So we took a totally different approach. I mean, my time at Hyperledger taught me that blockchains are horribly inefficient for anything on layer one. And I think that Bitcoin and other cryptocurrencies and other blockchains make sense for money, where you need double-spend protection and distributed consensus. But everything else, where you don't need double-spend protection, where you have positive assertion of digital rights, you can do off chain, and we're essentially an infinite layer-two solution. If you want to use blockchains, you can do it off chain using a provenance log anchored in Bitcoin or Ethereum or whatever, right? And now what we're doing is just playing around with: well, what can we do with this? And we have verticals we're building out. Well, we're an open-source company, and we're very inspired by GitLab. So, you know, GitLab open sourced everything, and then they built an enterprise solution around it that they sell. And that's what we're doing too. We're showing the world how to think about authentic data, how to manage their own authenticity and identity and all that. And then we're going to be building out verticals for different industries. We're looking at financial services right now, because one of the cool things about this is it automates regulatory compliance, financial and privacy regulation, because companies don't have to collect any personal information anymore. 
It's like, I don't need to know who you are. I just need to know that there is a company that knows who you are, so if I ever need to work with law enforcement, or there's an audit or a regulatory action or whatever, as a company I'm a good citizen and can participate. Right. Um, so anyway, this whole thing about Twitter is really interesting. I wrote the blog post just trying to change the conversation; I felt like people were missing it. Like, we now have tools that radically change things. And maybe what I said here isn't the right way to go about it on Twitter, but I think it is. I personally do. And I would love to challenge Elon. Call me! [01:23:07] Kyle Rankin: Um, [01:23:08] Katherine Druckman: At least you started the conversation. [01:23:11] Dave Huseby: Yeah. We're doing our best to make this well known, right? Like I said, we just came out of stealth a couple of weeks ago. We've open sourced a whole bunch of our libraries, we're going to open source even more over the coming weeks, we're going to start doing some demos, and we're going to stand up an API so people can play with it. So just look for that. I'm hoping to come on FLOSS Weekly and point everybody at all the repos and stuff like that. Oh, one last thing. I'm supposed to say one last thing, because it was on my list of to-dos for this. We did open source, about four or five months ago, a new API security approach. It's called Oberon, and it's now out in the wild; there are companies using it to secure their APIs. It uses this kind of technology. It doesn't require the provenance logs, but it does use the presentation of credentials and the set proofs. It is a substantial upgrade to API security from what exists in the world, and there's an obvious migration path from, say, OpenID or whatever. 
Um, essentially what it is, is companies switch to a capability token system. So instead of using a username and password, you can hand out capabilities, and it uses cryptographic blinding techniques so that on issuance the holders have to commit to a blinded secret, like a fingerprint or password or whatever. So it's client-side multifactor, without using authenticator apps or text messages or anything. And the servers that verify these tokens only have public keys, so even if they're taken over or hacked, they can't be coerced into mis-issuing capabilities. And the issuance can be done using a distributed or decentralized multisignature system, so an attacker would have to compromise a majority, or whatever the threshold is, of issuing servers, which can be located all around the world and isolated from each other. They would have to take over a bunch of servers to mis-issue credentials. And it's privacy preserving: what happens now is, instead of telling the service who I am, it just tells them that I have a valid token for access. And it's presented in an anonymous way so that I'm not linkable across interactions, something like that. So, just to point out that we are truly an open company: it is a substantial increase in security for APIs, and we just gave it away. And at the last Internet Identity Workshop, just a couple of weeks ago, some companies came and presented their rollout and adoption of it, to great fanfare. So look for more from us coming soon. Cool. That's it. Thank you guys for listening to me for an hour. [01:26:17] Katherine Druckman: Yeah. Any further questions we'll have to save for next time, so I can still have time to edit tonight. [01:26:22] Dave Huseby: Yeah, I do want to thank, on air, Kyle for the wonderful laptops I've been using for a long time. Thank you, Kyle. And thank Todd for me. 
[01:26:31] Kyle Rankin: Thank you very much. I appreciate it. [01:26:34] Dave Huseby: Yeah, I'm a fan. I'm a Purism fan. Totally. [01:26:37] Katherine Druckman: Everyone should be. Somebody's got to do that work. So, yeah. Well, thank you both for being here. Thank you especially to Kyle for coming in pretty much at the last minute there and asking some really good questions that I probably would not have thought of. So yeah, I really appreciate it, and until next time, I'm sure we'll talk about this stuff again, because it [01:27:02] Kyle Rankin: seems to keep coming up. [01:27:04] Katherine Druckman: Yeah. Well, we'll see what happens with Twitter. I'm kind of extra anxious now. [01:27:08] Dave Huseby: Everybody go @ Elon Musk on Twitter and be like, you should talk to Dave. I can fix all his [01:27:14] Katherine Druckman: problems, all of his problems, all of his political... well, I don't know. [01:27:22] Dave Huseby: I've brought the challenge. You know, if nothing ever comes of this, I want to build a Twitter just to demo it. And don't tell him, but I don't use any social media at all. [01:27:36] Katherine Druckman: That's what makes this whole conversation even kind of funny. [01:27:40] Dave Huseby: Because it never adhered to my parameters for interaction, right? Like, I have rules for relationships with other people and computer systems, and Twitter and all of the social media systems are like: yeah, we don't recognize your rules at all. We don't even come close to them. And so this is my chance. [01:28:00] Kyle Rankin: In fact, why not just Mastodon? [01:28:03] Dave Huseby: It's federated identity. [01:28:05] Katherine Druckman: That... nope. Nope. [01:28:08] Dave Huseby: Nope. Big data stuff. I got really excited about the Bluesky stuff, but then they're going with federated identity. 
That's not real identity. It's not, no, it doesn't have the right properties for me. So... but we do have a Twitter account now for the company, and I'll be tweeting through that, believe it or not. [01:28:26] Kyle Rankin: Funny. [01:28:27] Katherine Druckman: Yeah. Okay. Well, until you launch it, Dave... okay, y'all, I think that about wraps it up.