[00:00:00] Katherine Druckman: Hey everyone, welcome back to Reality 2.0. I'm Katherine Druckman. Doc Searls and I are talking to Adrian Gropper. He is an engineer, a physician, and a serial entrepreneur working with medical devices, and he is currently the CTO of the organization Patient Privacy Rights. So, yeah, the ultimate Renaissance human, perhaps — we're lucky to have him on the show. We're going to talk about a few things: I think we'll get into human rights and personal agency as they relate to technology, we'll probably talk about health data privacy — that's a safe bet — and maybe, if there's time, we can get into GitHub Copilot, which is a whole can of worms we will get to. But anyway, thank you very much for joining us on the call, sharing everything, and getting into the weeds on some really difficult issues.

[00:00:56] Adrian Gropper: Thank you very much to you both for inviting me. And I really appreciate Doc's help and encouragement through this — over a decade now, I guess. This is very important to me, and I'm happy to be here.

[00:01:13] Katherine Druckman: So how did you go from engineering to medicine to data privacy? How did that happen?

[00:01:22] Adrian Gropper: I was at MIT doing mechanical engineering, but I was only interested in the computer work and the lab work; I wasn't interested in the basic science stuff. My professor, my advisor, got me a job at the Harvard School of Public Health, where they were instrumenting research experiments in pulmonary physiology. It was the very early days of applied computing — of computing that would be embedded, what we now call embedded computers — and I was basically embedding early DEC minicomputers, PDP-8s and PDP-11s, into a physiology experiment. I was there for about three years, most of my undergraduate years, at that lab. And I realized that the only people who controlled their destiny were the doctors. So it was obvious that if I wanted to do what I wanted to do — which I still do to this day; I'm basically unemployable as a result — I had to have the MD. But I thought I was going to do research, so I went to medical school with no intention of practicing medicine. I came back after internship because I wanted to have a license to practice, even though I wasn't going to practice. And I was picked up by a consummate entrepreneur who adopted me and made me director of biomedical research at a small public company in Cambridge. I developed my first product there; it was very successful, and it was a lot of fun.

[00:03:21] Katherine Druckman: That actually is weirdly appealing — to have the knowledge of a physician and never actually have to practice.

[00:03:28] Adrian Gropper: Well, it actually relates directly to the second part of your question: how I ended up being a volunteer advocate. I ended up, as a result, with a very unusual education, because I was always putting stuff through the FDA for the first time. So I understand corporate regulation —
centralized government regulation — as well as anybody, because this was the early days of computers, when people were trying to figure out how to use them in various ways. That was my specialty; it still is, I guess. But I was also living amongst physicians and had trained with them, so I understood firsthand, from that perspective, what it meant to have decentralized regulation: where people prescribe off-label, and where people are credentialed not by the federal government but by their professional board or their local hospital. So a lot of what I'm going to be talking about today is a direct result of this very unusual background, where I understand both aspects of the regulatory space for computer and network technology. I just decided that I was tired of selling things to hospitals and wanted to do something more directly focused on doctors and patients, as I got to — not retirement age, but the point where I could afford to do it. Let's put it that way: put my own money in instead of taking money from investors. Anyway, that's how I got to do this.

[00:05:27] Doc Searls: I just want to let everybody know that Adrian is one of my favorite people and one of my favorite thinkers and doers in technology. In addition to being a genuine polymath, as he just explained, he sees what really needs to be done with healthcare — and not just healthcare, but technology in general — from the patient's side, from the individual's side, from the doctor's side, looking at systemic problems from an angle where the systems themselves can't solve them. And he knows an awful lot about standards and all kinds of stuff. He's been around ProjectVRM, which I started in 2006 at Harvard's Berkman Center — now the Berkman Klein Center — and he's been participating ever since. We've got a few very active members; Adrian is one of those, always contributing solid, good stuff. That's on the public record, if anybody feels like looking at it. I've just been eager to have him on the show. So here we are.

[00:06:33] Katherine Druckman: So tell us a little more about the work you're doing right now in health data, because that's obviously the best starting point, but it's also deeply interesting and not something we've ever talked about before, except in passing. We all have opinions about privacy and our own health data, but we've never talked to anybody who has the deep level of experience that you have.

[00:06:56] Adrian Gropper: I'm shifting from privacy to human rights as a framing for what I do, and for how I suggest things be engineered — as an engineer — by standards groups, regulators, and others. This is relatively recent; I'd say it's about a year in, after having spent quite a bit of time trying to convince people that privacy was important. But privacy suffers from a terrible problem, which is that, in general, it puts the burden of consent — the burden of doing stuff — onto the individual. The individual gets overwhelmed and often accepts things that they really wouldn't if they had a meaningful choice. So the alternative to consent and privacy is human rights.
As my partner at Patient Privacy Rights, Michael Stokes, says: you cannot consent away your human rights. Once I realized that there was potentially a better way to think through regulatory and technology practice from this human rights perspective, that resulted in the past year's worth of work, as I've tried to figure out how to explain this more broadly as an advocate. So that's number one: shifting the conversation to human rights, and away from privacy and fair information practices and other things that are based in privacy.

Now for the problem — because it's always nice to have an anecdote or a problem. The problem I'd like to start with is: how do we regulate these global platforms, like Google or Facebook or Twitter, or even Apple when it comes to their App Store? We all understand that the technology has advanced to the point where these private corporations, often run by just one person, have outsize influence on elections and on human life in general. And there is a call from the platforms themselves — in the case of Facebook — to "regulate us." So what I want to talk about initially is a framing for what it would mean to regulate the technology from the perspective of human rights. In this sense, I'm in good company: Doc recently hosted Shoshana Zuboff on his other show, and Professor Zuboff made the point over and over again — Doc could probably paraphrase it much better than me — that fundamentally, to deal with surveillance capitalism and the problems she is so articulate about explaining, we need regulation; this was not going to be something that technologists or engineers like us would necessarily solve. Is that a fair one-sentence summary?

[00:10:45] Doc Searls: Oh yeah, that's where she's coming from. I'm going to pause for a second, because I really like your framing — you can't negotiate away human rights — just to be clear on consent. Consent ought to go two ways; it only goes one way. So far, all we have is that you have to consent, at every single website and every single service, to give away access to your data and to your life. So when we're talking about consent, we're talking about a one-way exchange. It's essentially coerced. Is that a fair characterization of the way we understand consent?

[00:11:26] Adrian Gropper: Very much. And actually — since I said I was going to use healthcare examples throughout, because people are familiar with healthcare — take the HIPAA privacy notice, which is basically a joke. It's a physician asking that, before they treat you, you sign away your privacy to that physician practice. And if you don't sign it, you can go find another physician, or another emergency room, or another hospital. So this is the shrink-wrap software license gone mad. You obviously can't walk away from the government, but you also can't walk away from the ER, and you often can't walk away from even a medical practice, et cetera.
So yes, the asymmetry of power between the individual and the institutions is what Doc and I are talking about, and it's why I'm trying to explain to people that there is an alternative. Over the last few months I've made a lot of progress, which I want to share with your audience — and hopefully we'll get some feedback and questions from you as to: okay, if we have this regulatory problem, how do we regulate these platforms? Many of them are based on surveillance capitalism. In the case of Apple, it's clearly not based on surveillance capitalism; it's based on convenience — convenience, and staying secure and private by staying within the walled garden of Apple and Apple's App Store. If we are going to do what Shoshana Zuboff says we have to do in order to preserve democracy — this is not quite solving global warming, as she puts it, but it's close — and individual freedoms, then we have to give the regulators some meaningful opportunity to interact with technology businesses and the other things they control. And this is what I am trying to formulate, in the following way — not just me; as I say, my partner Michael Stokes at Patient Privacy Rights has been very helpful, and this comes out of interacting with Doc and others over the years.

So here's what I'm proposing on the human rights side. It has actually been taken up and published at the IETF as a draft — version 10; I can send a link by email later, since we're not recording links here — on freedom of association and assembly as a human right. The Universal Declaration of Human Rights doesn't talk about delegation, doesn't talk about authorization directly in the sense of OAuth standards and things like that. But it does talk explicitly about the freedom of association. So what I'm doing is using this right — and within the context of the IETF the draft is very effective and very short, about 30 pages — as it interacts with technical protocols for the internet. I'm basically saying that from this we can start to look at these platform issues, the problem of regulating platforms, as a violation of the freedom of association and assembly. In other words: if you're being surveilled, you don't have freedom of assembly. If you're being forced to use Google to sign into something, to search for something, and then to transact — a transaction that is recorded, like buying something or posting something — all within a single business, a single entity, then, as I'm trying to develop and explain, that is forced; it is a loss of freedom of association and assembly, and it should be regulated out a priori. In other words: don't ask me for consent. You just don't get to build things that combine the functions of authentication, search, and authorization — or storage, depending on how you want to think about it — into one entity. So let me pause there and make sure we understand what I've said so far.
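What Gropper is describing is an architectural claim: authentication, directories/search, and authorization work as three separable roles, each chosen independently by the user. A minimal sketch of that separation in Python — every class, method, and identifier below is invented for illustration and comes from no actual standard:

```python
# Three concerns as three independently chosen parties.
# All names here are hypothetical, for illustration only.

class Authenticator:
    """Proves who you are; learns nothing about what you search or buy."""
    def issue_assertion(self, user_key: str) -> dict:
        return {"subject": user_key}  # no search or purchase context attached

class Directory:
    """Search and discovery; requires no authentication at all."""
    def search(self, query: str) -> list[str]:
        return [f"result-for:{query}"]  # the query carries no user identity

class AuthorizationAgent:
    """The user's chosen delegate (doctor, spouse, co-op) that approves a
    transaction only when one is actually happening."""
    def authorize(self, assertion: dict, resource: str) -> bool:
        return bool(assertion) and bool(resource)

# Because these are three separate parties, no single entity ever sees
# sign-in + search history + completed transactions together.
who = Authenticator().issue_assertion("did:example:alice")
hits = Directory().search("blood test lab near me")
approved = AuthorizationAgent().authorize(who, hits[0])
```

The platform problem, in this framing, is simply that one company plays all three roles at once.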
[00:16:55] Katherine Druckman: Yeah, no, I think it's great. I did actually read Shoshana Zuboff's book — I don't think we mentioned it by name: The Age of Surveillance Capitalism. I will definitely link to it in the show notes, and we have mentioned it before, but it's an excellent read. It is both disturbing and enlightening, and probably a lot of other things. But I think the ideas you're approaching here are things we've talked about before, and I suspect Doc is on board.

[00:17:34] Doc Searls: Yeah, I am — the question is how far on board. One of the blurbs on the back of at least the first edition of The Age of Surveillance Capitalism was from me.

[00:17:45] Katherine Druckman: Yeah, I thought I remembered you were on there.

[00:17:46] Doc Searls: It's on the back there. I tried to give one good enough that even though I'm not that notable, they'd use it anyway. So, a question for me: given the power asymmetry between the industries of healthcare — taking healthcare alone, which in the US, unlike most of the rest of the world, is a gigantic business-to-business insurance industry — where the pharmas, the HMOs, all of these gigantic entities are lined up on one side and we're on the other, I'm wondering how we make this happen. How do we get what we want, regulatorily speaking?

[00:18:30] Adrian Gropper: Right. I think we do that in two ways. Number one, we figure out how to label things properly — Zuboff certainly figured out some of that — and, at the next level down, we make sure that the way we use language resonates with people who vote. We need to change, for example, the language of consent and privacy to the language of human rights and freedom of association, by virtue of not accepting surveillance by default. That is one thing. The second thing we have to do, in parallel, is look at how regulatory capture works. For that you need to, as you put it, understand the sausage-making of standards: what's the difference between ISO and W3C and IETF and IEEE, not to mention lesser groups — in the governance sense, in the standards-making sense — and make sure that, as the standards sausage is being made, you recognize what regulatory capture looks like. Because if the standards are not around, the regulators cannot regulate appropriately. That's what's happening when the regulators wring their hands over whether Elon Musk will change the way Twitter, as a platform, deals with, say, Trump; or when the European regulators wring their hands trying to figure out whether the Apple App Store needs to look more like Google's Play Store — because Apple shouldn't have the power to control the apps on your phone, and they don't control the apps on your computer, at least not completely, et cetera. So these are the two dimensions in which I'm trying to answer your question: the framing, and regulatory capture through appropriate standards. And that is why my current project, which is very much still in draft stage, is to issue a pull request
to the IETF GNAP work. GNAP stands for Grant Negotiation and Authorization Protocol; it's the successor to OAuth. The rise of these platforms, like Google — and Apple, indirectly — is due to an unintended consequence of the way OAuth and OpenID Connect were designed. I don't think they were intended to work out this way, but they basically enabled surveillance capitalism on the scale that we have. And lastly, now that I've introduced OAuth and GNAP, and am reading ahead to what I'm trying to do in GNAP and at the IETF in general, I'll relate it to my work with HIE of One in the health record space. But let me pause for a second and make sure we're okay so far, before I go back to answer your healthcare-related question.

[00:22:29] Doc Searls: Yeah. So, just so people are clear on what OAuth is: that's "log in with Facebook" and "log in with Google" and —

[00:22:38] Adrian Gropper: No, that's OpenID Connect.

[00:22:41] Doc Searls: Oh — well, either way, I thought it was OAuth.

[00:22:44] Adrian Gropper: It is. It's always under the covers, under the guise of OpenID Connect. Furthermore, the OpenID Foundation is separate: OAuth is an IETF standard, while OpenID Connect comes from the OpenID Foundation, another standards body.

[00:23:03] Doc Searls: So a little bit of history here might be interesting, which is that the original intent with OpenID and with OAuth — and you can explain how those fit together — was actually personal autonomy, in a way: an easier way to sign on and relate to entities. And it ended up with us basically making them our proxies, which is not what we wanted in the first place. But I like the idea of coming up with incorruptible standards. Is that what you're trying to do, basically? Does that exist?

[00:23:45] Adrian Gropper: That is exactly it. I'm trying to come up with a way of clearly labeling the separation of concerns, and capturing that separation of concerns in standards. I've identified three separate concerns. One of them is authentication: sign in with Google, sign in with Apple, sign in with Facebook. The second concern is what I would call useful directories, which are things that are aggregated by default — that's what makes search possible. We all need access to directories, we all need to do searches; but, notably, we shouldn't be authenticated when we are doing a search. We shouldn't be forced to be, when we're fishing around — even when we're not fishing around looking for, say, how to terminate a pregnancy. So don't ask me to authenticate before doing a Google search. And promote things like Tor, which does that explicitly — even though it's designed more or less for journalism, it goes a long way toward having the human right, the lack of surveillance, built into the technology by default. And then the third separate concern is when you are going to instantiate some kind of transaction as a result of that search. It should be of no concern whatsoever to the laboratory you chose to do a blood test with, or to the place where you chose to post or read a particular tweet, how you got there.
Right — so in other words, you might be using Amazon to actually purchase the product, but the vendor — or Amazon itself, to the extent that the vendor has delegated the logistics to Amazon — should not know what you searched in order to get to that product. And when you searched, you shouldn't have had to be authenticated the way you need to be when you actually purchase the product. Unless, of course, you can purchase it for cash by default, which is even better, and use a drop — a remailer — to get the package, so that you have a middleman introduced. That's kind of the way Sign in with Apple works, where they create and manage a bidirectional email address for you, so that you have a separate email address for each entity you've signed into.

So what I'm saying — and I formulated this over the last year or so — is this proposal: we design the standards so that these three concerns are dealt with completely separately. The net result of doing a good job on the standards, and I'll stop after saying this, is that the right thing to do becomes the convenient thing to do. Let me explain what the problem is in reality, what led to these platforms and the surveillance-capitalism network effects we've seen: people — either because they have no power over the vendors and the institutions, or simply out of convenience — give up the digital exhaust that Zuboff talks about, and that becomes the basis of surveillance capitalism. So we have to design the standards — and this is the work in progress, the work I'm trying to influence specifically at the IETF — so that it is just as convenient to do the right thing, which is to have a separate choice of how you sign in. In my HIE of One demonstration project, called Trustee, we use Sign-In with Ethereum, just as an example of a way to sign in with something that doesn't result in any useful tracking, and yet keeps you accountable and allows you to have a reputation — using the Ethereum blockchain in this particular case. You don't have to do this with blockchains; there are lots of other ways to do it. It just happens that the libraries and the standards are available there. Then you should be able, again, to do your searches, access directories, and post things to directories voluntarily, without being authenticated, by default. And when you complete the transaction — which is when you are, in effect, running the GNAP protocol in order to get authorization for something — GNAP should, by design, not make the mistake that OAuth and OpenID Connect made. It should be arbitrarily convenient for you to use your own authorization server, your own agent — your own doctor, your own spouse or partner — as your delegate, if they happen to be in a fiduciary relationship with you, as opposed to a broker or platform relationship, where you are the product instead of the customer. So you have to have agency at that stage in order to make this convenient.
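For listeners who haven't seen GNAP on the wire: a client asks for authorization by posting a grant request to an authorization server — and the point being made here is that this server can be one the user chose, not one the platform imposes. A minimal sketch in Python, using field names from the GNAP core-protocol draft as of this recording; the endpoint, key, and resource values are placeholders, and the request signing that real GNAP requires is omitted for brevity:

```python
import requests

# Hypothetical endpoint: the user's OWN authorization server (their
# "trustee"), not one operated by the platform or the resource server.
AS_ENDPOINT = "https://trustee.example/gnap"

grant_request = {
    # What the client wants to do, expressed as rights, per the GNAP draft.
    "access_token": {
        "access": [{
            "type": "health-record",   # illustrative resource type
            "actions": ["read"],
            "locations": ["https://lab.example/results"],
        }]
    },
    # The client identifies itself by key, not by a platform account.
    "client": {"key": {"proof": "httpsig",
                       "jwk": {"kty": "OKP", "crv": "Ed25519",
                               "x": "base64url-public-key-placeholder"}}},
    # How the user, or their delegate, can be asked to approve.
    "interact": {"start": ["redirect"]},
}

resp = requests.post(AS_ENDPOINT, json=grant_request, timeout=10)
print(resp.json())  # typically a continuation handle and an interaction URL
```

Nothing in the request ties the grant to the client's search history or sign-in provider, which is the separation being argued for.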
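Sign-In with Ethereum itself is mechanically small: the site presents a structured text message, the user signs it with a key they control, and the verifier recovers the signing address. A sketch using the eth-account library — the message text is abbreviated, and a real verifier would also check the domain, nonce, and expiry fields:

```python
from eth_account import Account
from eth_account.messages import encode_defunct

def verify_siwe(message_text: str, signature: str, claimed_address: str) -> bool:
    """Recover the signer of an EIP-191 personal-sign message and compare it
    to the address the user claims. The 'identity provider' here is just a
    key pair the user controls - no platform account involved."""
    signable = encode_defunct(text=message_text)
    recovered = Account.recover_message(signable, signature=signature)
    return recovered.lower() == claimed_address.lower()

# Illustrative EIP-4361-style message (fields abbreviated):
message = (
    "trustee.example wants you to sign in with your Ethereum account:\n"
    "0xAbC...123\n\nURI: https://trustee.example\nNonce: 8f2b1c\n"
)
```

Reusing the same address builds reputation; rotating addresses avoids tracking. Either way, no third party learns where else the key has been used.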
So, again, to recap: to make the right thing the convenient thing, we have to be able to sign in with something, but that something shouldn't surveil us in any way whatsoever — other than the obvious one, that we want to have a reputation, such as when we reuse an Ethereum address as a public key. Then, obviously, we want to make sure that our directories and our searches are always, by design and by default, anonymous. That's the easy thing to regulate, once you decide you want to regulate for it, and it doesn't become inconvenient, because there's no need to be authenticated when you do those things. And then, when you switch over into the mode where authorization is required, that should be done with full help from accessible agents, so that you don't have to be bothered with consent policies and with figuring out which software you can trust — because other people, who are more expert than you, are going to validate your user agents and examine the privacy policies of the parties you are authorizing.

[00:31:27] Katherine Druckman: I have a question, though, and it goes back a bit to your point about reframing the conversation to be about human rights rather than privacy. Not to be overly political — which is my disclaimer right before I'm about to get political — but if the goal is to change minds, reach a broader audience, and make change, I question that reframing a little. As I see it, privacy and individual agency and freedom are among the few things that people in this country — people from different political ideologies — agree on. Whereas it seems to me that when you frame things as human rights, it's only the more progressive factions that even care. To me, individualism is something you find everywhere. For example, you mentioned Trump earlier: right now we're recording at a time when we're learning about an FBI search of a Trump property, and suddenly you see the far right screaming about defunding the police. It's one of these things where suddenly they care about their personal agency and their individual rights. Maybe I'm not giving the argument enough benefit of the doubt, but my concern is that not enough people care.

[00:33:09] Adrian Gropper: Okay. Let me say that you might well be right. I've spent a lot of time trying to figure out how to explain privacy to people before jumping to human rights — and more or less failing. And I agree with you: Patient Privacy Rights is totally nonpartisan and has benefited greatly from exactly what you said. But I want to point out that the right of free association and assembly is probably even clearer to the right wing of politics than to the left, and it is as bipartisan an issue as privacy. On the left we see things like the right to unionize — in other words, to be represented, to have an agent that you more or less choose. Obviously there are problems with that choice; you may or may not have a choice as to whether to join the union.
Again, that's an asymmetry of power, in a sense. And certainly on the right, people very much value the right to associate without being surveilled, whether by the FBI or anybody else. So I hope I'm right — let me put it that way — but I think they're both bipartisan framings. The big difference is technical. In the case of privacy, people hide behind consent. For instance, I participate in the Uniform Law Commission as an invited expert, and I've watched different state commissioners trying to figure out how to make state privacy laws that would not be as extreme as California's, or GDPR if you're a fan of that. One of the things it boiled down to is: what are people's expectations with respect to data processing? In other words, can we come up with a norm for what kind of secondary use is acceptable for the data you give to a merchant when you authorize a transaction, or the data you give to Apple when you sign in with Apple as a convenience move, or sign in with Google? What's your expectation, and can we make it convenient for the merchants and the customers to do the convenient thing, by understanding their expectations, so that their explicit consent is not needed — in other words, so you don't have to check those stupid boxes on websites that nobody reads? So what I'm saying is that the important distinction, in my view, is not political — the politics are completely bipartisan, in my opinion. The important distinction is who is burdened by the consent component that we associate with privacy. If the standards work the way I suggest they have to work, from the human rights perspective, then you can delegate that burden to somebody you choose and trust, whether that's your doctor or your spouse or your union. This ability to delegate is the critical technical thing here. It's not political, and it's black and white: if you don't have delegation, then the convenient thing to do is, as Doc said earlier, to sign on the dotted line, to open the shrink-wrap package, to show your passport when all you needed was to prove you were over 18, et cetera.

[00:37:39] Katherine Druckman: Right. I hope you're right. I absolutely do.

[00:37:42] Doc Searls: A quick question — if I'm here; I've been here and gone. If people are delegating — and I love the way you put it — what's the business there? Are you delegating to the doctors? Is there a new class of fiduciary?

[00:38:05] Adrian Gropper: Yes. We see it, for example, in the European open banking regulations — I don't want to get too deep into the weeds here — in something called PSD2, the Payment Services Directive. The regulators come to the banks — think of the banks as the resource servers, the service providers; holding and securing your money is the service — and say: you have to allow the individual to delegate to the payment processor, typically a credit card company, of their choice.
In other words, you as the service provider cannot control who the person delegates to. The person gets to choose their credit card company independent of the bank, and you can't do anything about it. Now, in reality you have to go a little further, because in banking it obviously makes sense for there to be regulations about who can call themselves a credit card company. But in the general case of, say, health records, you don't necessarily want to create regulations for who counts as an authorized patient group. If I decide to join 5,000 people with sickle cell anemia in California and have them control how my health records are used — by clinicians, by hospitals, by emergency rooms, because otherwise I won't get the treatment I need when I have a sickle cell crisis — there isn't an obvious reason to regulate that particular tiny nonprofit, which is basically a cooperative of patients, the way we want to regulate a credit card company. On the other hand, if the delegation is to a defense lawyer or a physician, then we have a different kind of fiduciary, where there is a benefit in giving me the choice of fiduciary but also in regulating the fiduciary: practice medicine without a license and they'll throw you in jail; practice law without a license and the courts just won't recognize you. So — did I answer your question, Doc?

[00:40:44] Doc Searls: Yeah, it does. So, have you covered HIE of One yet? Can you tell us where that fits in this?

[00:40:54] Adrian Gropper: Yes — because I've now explained the relationship between regulatory capture of standards and the ability of regulators to mandate things like open banking practices, or the ability of physicians to prescribe a particular medication regardless of what jurisdiction they're in or who their employer is. So I'm trying to influence standards, and I'm not funded to hire developers — I do pay one high school student to help out. All I can do, in order to participate in the standards process, is try to build an example implementation of the standards that would enable this delegation, agency, and human rights perspective. That project is called Trustee. The group itself is called HIE of One; it's a list of about 50 people, of which six or seven or eight are the consistent inner core. It's an informal group that I basically control, and I registered the trademark Trustee. Because one of the elements of being able to have this kind of agency is that the stuff has to be open source — just as, for doctors to practice medicine without being regulated by the FDA, they have to be using open medical textbooks and open research papers. You can't have black-box medicine and then expect doctors to be regulated as professionals; they would then have to be regulated as technicians, the way we regulate, say, artificial intelligence for decision support in medicine — where you pretend the FDA can do this.
If you're going to have some kind of formal, centralized regulation, it becomes a cover for being able to sell black boxes, because they're "regulated." We can get into that later, again with healthcare as the most prominent example.

[00:43:31] Doc Searls: A couple of quick questions. One: Trustee — you registered that spelled how?

[00:43:37] Adrian Gropper: Just the way you would spell trustee.

[00:43:40] Doc Searls: Okay, with two Es. The second: explain what an HIE is. You have an HIE of One — so what is an HIE?

[00:43:48] Adrian Gropper: An HIE is a health information exchange, and HIE of One is a platform for accessing health records. There are probably 60 health information exchanges in the United States, mostly state-based. There is no federal system yet, though one called TEFCA is in the works, and there are a couple of national networks that try to bridge across these exchanges. A health information exchange is simply a way of connecting one authorized user to a health record in another institution. The problem with these aggregators of authorization, if you want to think of them that way, is that they're inaccessible to the patient. They have no responsibility whatsoever to follow the patient's policies in how they share information. In other words, they're just data brokers, and the patients may or may not even know which exchanges they're part of. Just like any other data broker, they're hidden from view — but they serve this purpose. Now, we know about one class of data brokers that is regulated: credit bureaus. We know what a pain it is to deal with credit bureaus, but at least, by regulation, we know there are three of them, and you have certain rights to disclosure and inspection of credit bureaus as data brokers. We have states like Vermont that have tried to regulate hidden data brokers — to bring them out into the sunshine — and the federal privacy regulation being talked about, which I think has actually passed committee in the House now, also follows this practice of trying to bring data brokers, or health information exchanges, into the sunshine.

So HIE of One simply means that the health information exchange's policies are my policies, and only my policies — my individual policies for how my health data is used, wherever it is: at this hospital, at that doctor's office, at that lab, in that wearable that's tracking my periods, in my apps, whatever. And if I want to run my own health information exchange, because I don't want to share those policies with anybody, then I don't have to. The health record systems — the hospitals, the labs, Medicare records for insurance, apps on my phone like Apple Health, which is a very notable one — all of these would have to act the way banks do under open banking and PSD2. In other words, I should be able to bring my own health information exchange agent, with my own policies, and give it access to all of these things that are otherwise run by data brokers. And the Trustee is the code — the demonstration code that assembles various standards in progress, per the advocacy I was talking about before, and shows in that context how that could be done.
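As a toy illustration of "my policies and only my policies": the decision logic lives in an agent the patient chose, so the rules never need to be disclosed to the hospital or an exchange. The policy format and every name below are invented for this sketch — they are not taken from HIE of One, Trustee, or any standard:

```python
from dataclasses import dataclass, field

@dataclass
class PatientPolicy:
    """Rules written by the patient (or their chosen community); the
    patient's own agent enforces them, not a broker's terms of service."""
    allowed_purposes: set[str] = field(default_factory=lambda: {"treatment"})
    blocked_requesters: set[str] = field(default_factory=set)

def decide(policy: PatientPolicy, requester: str, purpose: str) -> bool:
    """Runs inside the patient's agent; the hospital only sees the verdict."""
    if requester in policy.blocked_requesters:
        return False
    return purpose in policy.allowed_purposes

# Example: share with an ER for treatment, never with a marketing broker.
policy = PatientPolicy(blocked_requesters={"adtech.example"})
print(decide(policy, "er.hospital.example", "treatment"))  # True
print(decide(policy, "adtech.example", "marketing"))       # False
```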
The idea is to use the standards to allow individuals, or groups of patients — like the sickle cell patients I mentioned earlier — to decide they want to run a Trustee: to decide, as a community, on a set of policies, the way you would decide to join a church or a gang or a club or a co-op, in order to have them represent you and make it convenient for you to follow those policies. So that's where HIE of One comes from, and that's what a Trustee is: in effect, a legal agent that is able to deal on your behalf with the powerful institutions — the hospitals and insurance companies and labs.

[00:48:24] Katherine Druckman: So, before we really got rolling here, we talked a little about GitHub Copilot, and I wondered if you wanted to get into that.

[00:48:37] Adrian Gropper: Very much so. Just for background —

[00:48:42] Katherine Druckman: Yeah, we should probably tell people what Copilot is.

[00:48:45] Adrian Gropper: What Copilot is. GitHub is a platform hosting — I don't know what fraction of — the world's open source code. And remember, I mentioned earlier how open source is essential from a regulatory perspective: you cannot regulate professionals like lawyers and doctors unless the tools they're using are open source, because they have to be able to take responsibility for examining those tools, deciding the tools are what they claim to be, and even modifying them, as professionals. GitHub serves this function for the vast majority not just of open source code but also of standards — I think all of the standards groups I participate in use GitHub for their work in progress before they publish. And GitHub does the three things I mentioned earlier. They authenticate you: you can sign in with GitHub and manage your reputation relative to that sign-in. They allow for search, so you can find issues and repositories whether you're signed in or not — so they're an aggregator, a data broker if you will. And they provide the essential service of storing the actual transactions: the code that you write, and the issues that you raise, under your authenticated identity. And this is all well and good. It becomes a platform, it has the network effect that, like Facebook and Google, takes over the world more or less, and everything is fine — until they introduce a piece of AI that goes and takes the code GitHub has indexed and uses it to write new code.

[00:50:56] Katherine Druckman: And that code could be any code — assuming it's just open repositories, that can be any license. And it's worse than that: even if we don't have the numbers on how much open source software is hosted there, it's a lot.

[00:51:20] Adrian Gropper: But here's the problem. Once you bring AI into the picture, it's not just a question of whether the license terms got lost along the way.
When you mention private repositories — and we have GPL-like free licenses, and Apache-like permissive ones — even if the AI tried to stay within fair use by forcing the new code it generates to be licensed appropriately (and we're talking about copyright law here), it would fail. Here's the reason it would fail. Suppose you're a creative person: you write some code and figure out a better algorithm, or you write a journal article, a research paper — something covered by copyright. If somebody copies the thing outright, that's not fair use, unless it's parody or the like. But if somebody, because they have AI, figures out what was intelligent, what was novel about the algorithm you instantiated in that licensed, copyrighted code — or what was novel about the framing of an idea you created and want to protect — the AI can extract what's interesting, what the approach of your algorithm is, and then write code that doesn't share a single matching byte with the original journal article, or the original piece of software under its free software or Apache license. There's never going to be attribution. There can't be attribution. And in healthcare, I would claim, this basically destroys medicine and destroys science, for the following reason: once you lose attribution — whether in math or physics or medicine — the incentive for people to get licensed and trained, to get their PhDs, to dedicate years and years in order to be able to sell their services as a doctor at a huge premium over being a technician, is lost. You're basically forcing everybody to act as if they were using a proprietary licensed good — something that's not open source, something that's not education. You're shifting the whole of medical science, evidence-based medicine, and the regulation of physicians as professionals into the regulation of corporate black boxes — like the Tyrell Corporation in Blade Runner, literally. So GitHub Copilot is really the camel's nose under the tent, because all of a sudden we're allowing a corporate-owned AI — no matter how they license it, even if Microsoft honors the private repositories and only uses the public ones on GitHub — no matter what it does, we are now allowing a black box that is not open source. We're introducing that into the regulatory process, into the scientific process, into the process of medicine. And that's an unmitigated disaster for society, because we've lost the balance that protects innovation and protects human agency, in balance with FDA regulation and things like that.

[00:55:32] Katherine Druckman: I'm glad you brought it up, especially in the context of medical or academic innovation, because that's a lens through which I don't think a lot of people are looking right now. Most of the conversation I see — though I'm a little biased, because I'm more plugged into open source communities —
the conversations I'm seeing are all about licensing, which is an obvious conversation to have. And to be fair, before I open my mouth too much: I haven't spent enough time with this to have formed a solid opinion, although I completely understand the controversy — that's as far as I would go. But to view it through that lens is a significant one, and this is another area where I really hope that people who are listening will have some opinions, weigh in, and reach out to us.

[00:56:35] Adrian Gropper: Just to — sorry, Doc, I'll be very quick. What I'm saying is: when you think about these authorization issues, like the human-rights-versus-consent stuff I'm talking about, we have to stop thinking in terms of sharing information, because you don't have to share information. I could go into great detail about how large academic hospitals avoid sharing the information, because they're just sharing the insights — the models, the education itself, if you want to think of it that way — that result from it. What these largest of large academic medical centers are doing, because they have millions and millions of health records, is turning the health records into intellectual property that is secret, and then licensing that secret intellectual property to a corporation. That way the health records never leave the institution. It's called federated learning. The AI is already being used to extract the insight from the millions of health records, and instead of turning those insights into open medicine — a publication in a journal, or a textbook, or a lecture — they are selling that education, that insight, to a private corporation that then markets it as a black-box AI. For you folks rooted in open source and open source practices, this is an existential issue. And that's what I'm trying to demonstrate with HIE of One, with the Trustee project: the level where the learning happens, where the inferences are developed, needs to be at your individual agent's level — my Trustee — or at the community I choose, if I choose to share my records, authorization-wise, with that cooperative, so that it can participate in federated learning for the benefit of the community and keep everything open source. Because that's in the community's interest: reducing costs and improving the rate at which science progresses. I'm sorry to interrupt you, Doc, but that is the essential issue here — around HIE of One, around Trustee, around the regulation of medical devices and access to health records.
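Mechanically, federated learning means the model travels to the data and only parameter updates travel back; the records stay put. A minimal sketch of federated averaging with NumPy — the three "hospitals," their synthetic data, and the linear model are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a site's OWN records.
    Only the updated weights leave the building - never X or y."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Three hospitals' private datasets (synthetic stand-ins).
hospitals = [(rng.normal(size=(100, 3)), rng.normal(size=100)) for _ in range(3)]

weights = np.zeros(3)  # the shared global model
for _ in range(50):
    # Each site trains locally; the coordinator only ever sees weights.
    updates = [local_update(weights, X, y) for X, y in hospitals]
    weights = np.mean(updates, axis=0)  # federated averaging

print(weights)  # the extracted "insight"
```

Gropper's objection is not to this mechanism but to who holds the averaged weights at the end: a corporation selling a secret model, versus the patients' own agent or cooperative publishing an open one.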
[00:59:23] Doc Searls: So I'm wondering — I'm going to put this in a broad way, then narrow it down to where we are now. The broad way: there has been this tug of war between personal, and for that matter delegated, agency on one side, and machine control — control by the entities behind the machines — on the other. On the VRM list, and I think on this show last week as well, we dealt with how our cars, for example, are less and less ours. Kyle Rankin talked about this at some length last week: we're being spied on all the time, and more and more of what it takes to drive a car is being handed off to machines and machine intelligence within the car. And yet, at the same time — I was reading something about this recently — the best system for optimizing traffic is the human beings behind the wheel. It goes wrong in a lot of ways, but for the most part it's kind of an amazing thing; it's one of the few ways that we work well in herds. And there are standards in that too, as you pointed out. I think everything you've been saying, should it succeed, moves us toward a world where our individual rights are more fully respected, and where we actually get systems that work better, because the right intelligence is in the right place, under the right control, by the right people and systems, with an understanding of what the limits of each of those are — within a framework that may be complex in the middle but is simple in concept. And I'm wondering — you're very peripatetic in how you approach this. We've known each other since, I think, '06, so 15 or 16 years, and we've both been on this quest, in many cases with locked arms, and it's a long slog. And yet both of us — all of us, Katherine too — are engaged; there's enough optimism here that we want to keep moving forward. But where are we with this right now, in the forward motion of it, if there is any?

[01:01:56] Adrian Gropper: Well, I think we're at an inflection point, because of this metaverse framing. We see the metaverse getting traction with a broad range of people — maybe more privileged than not, but still — and people have realized, maybe for the first time, that they have no idea how to regulate a situation where my agent is indistinguishable from me, almost by design. What is a metaverse? It's a place where your avatar — I've tried to coin a phrase: in the metaverse, nobody knows you're an agent. Instead of "on the internet, nobody knows you're a dog": nobody knows you're a bot, if you don't want to use the term agent. And we see this in the debate over the value of Twitter to Elon Musk. His argument, pretty much the entirety of it, is that he wanted to know how many bots there are on Twitter. And I would basically say it doesn't matter, because Twitter is now just the metaverse, version 0.1 if you want to think of it that way. It already has all these AI algorithms deciding what you see and what you get notified about. Does it really matter to the value of Twitter as a platform whether it's 5% bots or 25% bots? I would claim no. Once you adopt a metaverse framing, they're all going to be bots; it's just that some of the bots are going to be controlled by me — and in the HIE of One project we make it very explicit what that means — and some of the bots are going to be controlled by a community that I choose. And that is actually the base case.
Because, again, it's too much to ask people to control their own bots; it requires too much sophistication — it's like asking them to practice medicine. And we are now at this inflection point, Doc, where, once we start to understand that we can own our own inference engines, and what the importance of open source AI is, it's not optional. We really have to double down on the fact that secret AI — whether it's owned by the Tyrell Corporation or anybody else, or GM — just cannot be allowed to exist. It's an existential danger, not just to the scientific method and to democracy, but, I think, to humanity as a whole. So even though it might delay the so-called benefits of AI — and I don't deny that AI will become a better doctor than the doctor in many respects within, let's say, five to ten years — we cannot be lured into allowing black-box AI to run our cars or our healthcare.

[01:05:59] Katherine Druckman: That's good — that was fantastic, kind of perfectly worded. And slightly optimistic, which I appreciated.

[01:06:08] Adrian Gropper: It actually is optimistic. In other words, I believe in science, I believe in medicine, and I believe in those being ways that humans are augmented by AI. So what I would advocate for — and I haven't figured out how to convince my doctor friends of this — is that AI should be licensed by the same professional agencies that license the doctor. Just as we allow engineers to bring a slide rule — or, nowadays, a calculator — when they take their engineering exam: you may not allow them to bring a web browser, but you would certainly allow a sophisticated calculator, where twenty years ago "sophisticated" meant it does algebra. So we need to license the AI the way we license professions. And if the AI happens to be better at passing those tests than the doctor, that's okay, because the doctors will own it. The doctors will be the people who train the AI and decide which aspects of federated learning to use — and the same would apply to lawyers and other disciplines. As long as the AI is not a black box, we know how to regulate it.

[01:07:36] Katherine Druckman: Yeah — I am pro doing-things-out-in-the-open, as we all know.

[01:07:44] Doc Searls: So it's the black-box avoidance project — which, unfortunately, I think is everything. But I think it actually helps that Zuckerberg and Musk, and whoever's running Twitter at this point, look silly at a certain level. I know Facebook is spending 40, 50 billion on the metaverse and their AI, and almost everything they've had to show for it so far has drawn an awful lot of "are you kidding me?" But I think an important piece of this, Adrian, is what you were saying about where medicine is going to go. Speaking personally, I have itchy spots on my body right now that I've had three different doctors — three organizations — look at, and it's kind of, you know, I don't know.
And I have a feeling there is an answer, and there's probably a better way of arriving at it than getting considered opinions from three people who each give me 20 minutes or less. That may not even be a good example, but it's an interesting one, because there's so much about us that is mysterious and is going to need a lot of intelligence brought to bear on it — more than we get from our best professionals, and that's not a knock on them; we have a lot of science now, and there's a lot we can apply. One of the issues with medicine, especially here in the US — in my case, I'm in five or six different medical systems, in Boston, New York, Santa Barbara, Los Angeles, and Bloomington, Indiana, and they're divided up by specialties: the cardiology here is not the ophthalmology there is not the gastroenterology somewhere else. An awful lot of it is just judgment calls, and I think we can improve on that. And maybe part of it — maybe this is the subject of future conversations — is how you start to succeed in hacking these standards, with the IETF and the other organizations, and with GNAP. I hope it works. We have to make it work, I guess — that's the basic thing.

[01:10:52] Katherine Druckman: I think the goal with these episodes is always to give the audience a lot to think and talk about, and hopefully they will talk at us about it and we'll get some feedback. But yeah, this has been fantastic. We're glad you were able to join us, even through a few technical difficulties.

[01:11:11] Doc Searls: Yeah — all at my end.

[01:11:13] Adrian Gropper: Well, if you have a way for your audience to ask me questions — a chat or whatever attached to the podcast — I'd be happy to engage, because it helps me immensely to refine this. I look up to people like Doc, who are able to explain these things, and write, in ways that I'm really pretty bad at.

[01:11:43] Doc Searls: Oh, you're very good, actually.

[01:11:46] Katherine Druckman: Well, there are a few ways to get in touch with us. You can find us on Twitter: @reality2cast. We're also in the fediverse, at reality2cast@linuxrocks.online, I believe — Mastodon servers everywhere. You can find us, to a limited extent, on LinkedIn and Facebook, but Twitter's probably the big one, and Mastodon. And we have a newsletter — you can comment on the newsletter, you can comment on the podcast page, and there's a contact form. You can email us — all of those things.