Athena: 00:00:03 Have you been zombified by algorithms? Welcome to the Zombified podcast, your source for fresh brains. Zombified is a production of Arizona State University and the Zombie Apocalypse Medicine Alliance. I am your host, Athena Aktipis. I'm a professor at ASU in the Psychology Department and I am also the proud chair of the Zombie Apocalypse Medicine Alliance. Dave : 00:00:30 And I am your co-host, Dave Lundberg-Kenrick, the media outreach program manager at ASU and brain enthusiast. Athena : 00:00:37 Yes. We love brains. We can't get enough. Dave : 00:00:42 No! So today we have Ed Finn telling us something that I find really terrifying, [laughter] Athena: 00:00:48 It is really scary. Dave: 00:00:48 I think this is like our scariest episode. [Athena agrees] Dave : 00:00:54 So do you want to tell us a bit about it? Athena : 00:00:56 Well, yeah, so uh, Ed Finn, he is an author and he's also the founding director of this center we have at ASU called the Center for Science and the Imagination. And I have to say, I totally have like center name envy, like I want, like a center that's the Center for Science and the Imagination because that just is like the awesomest center ever. Dave : 00:01:17 It is, but I'm thinking it sounds so happy, and one, like it sounds like a ride you'd go on at Universal Studios. [laughter] Athena : 00:01:21 [laughter] Dave : 00:01:23 But then when we talked to him, like this one freaked me out. Athena : 00:01:27 Right, it's actually more like a creepy haunted house ride. Dave : 00:01:30 [Dave agrees] [laughter] Athena : 00:01:34 So Ed talks to us about algorithms, what algorithms are and what algorithms want. Do you know what it is that they want, Dave? Dave : 00:01:43 Um, I mean they want to control us apparently. Athena: 00:01:46 Yes. They want our brains. They eat brains! Like, algorithms, they're brain eaters. Dave: 00:01:54 Yeah. And they're, they're insatiable. [laughter] Athena : 00:01:59 So terrifying. Okay. So, uh, Ed talks to us about how our relationship with technology has really been changing, right. And how algorithms are a huge part of that. And this I think really brings into question like, you know, how autonomous are we and what have we been giving up as we've been accepting technology? And like, is it actually okay? Or should we be worrying about whether we're actually achieving our goals or whether we're getting hijacked? Dave : 00:02:28 And it's, I think it's really interesting. I also think when he, he also talks a bit about whether or not these entities can ever truly have a consciousness and a conscience. And I think it's just really fascinating. Athena: 00:02:41 Yeah. So hold onto your brains for this wild ride with Ed Finn. Dave : 00:02:46 If you can. INTRO MUSIC: 00:02:49 [Psychological by Lemi] Athena : 00:03:25 So I'm inclined to kind of just start chatting and then when Dave comes in, we'll just be like "finally Dave's back." He's probably in the bathroom or something or maybe he's eating cookies. Yesterday we had a podcast and I had some cookies out and he just like ate them all. Ed : 00:03:42 [laughter] Now you've learned not to bring cookies out. Athena : 00:03:44 During the podcast, and it was a podcast about diet and food. Ed : 00:03:47 That's really funny. Athena: 00:03:51 It was fun. So, well anyway, you'll meet Dave when, when he comes in, I guess. He's uh, he helps with marketing and research and his dad is a professor a few doors down. He's about to retire, but he grew up with an evolutionary psychologist for a dad.
So you can imagine what that does to you. Ed : 00:04:09 Yeah. How is he? How is he doing? Is he okay? Athena : 00:04:12 I don't know. We'll ask him when he comes in. [laughter] So, um, so Ed, thank you so much for being on Zombified. Um, so can you maybe, uh we have all this background noise because we're waiting for Dave, but I think we can get started. Just like talk a little bit about, um, do you know about the podcast a little bit? I mean, you were at the ZAMM meeting. Oh, here's Dave! Yay! [Dave and Ed exchange introductions] Athena : 00:04:40 Hey, you want to close that door so we got a little less noise there? Excellent. Dave: 00:04:44 Um, okay, sorry, I just need to organize some family stuff? Athena: 00:04:49 Got your text on. I was just telling Ed that I hid the cookies today. [laughter] Dave : 00:04:56 You know, I, I also ate a big lunch today, so I don't think it will be an issue. Athena: 00:05:02 So, um, so yes, welcome to Zombified. So Zombified is all about things that control us and influence us. Um, but it's also about zombies. Well, we do just talk about zombies and, seeing that you're an expert on both monsters and control and you also work on things relating to technology and AI and other monstrous things that control us. It would be totally awesome to have you on the show. So thank you so much for coming. Ed : 00:05:32 Thank you for inviting me. I'm not sure if I came of my own free will, but I'm glad to be here. [laughter] Athena : 00:05:35 We are delighted to have you here. [laughter] Dave : 00:05:40 So, I have a question. Could you tell us your background? Ed : 00:05:43 Yes. Um, that's an interesting question because I feel like, uh, I did a lot of strange things that only now in hindsight make sense in the things I do now. Dave : 00:05:55 Or even, I guess, an overview of what you do now. Ed : 00:05:58 Okay. That's another complicated question. [laughter] I'll start. I'll start with the background. So my academic training is in literature. I have a PhD in Contemporary American Literature. Uh, before grad school I worked in journalism and I wrote about all sorts of different stuff, some, some technology and science writing, but also politics and pop culture and various things. And I have worked a little bit in the technology industry and I've always had an interest in the way, the intersection between computation and culture. And so those are some of the things that I brought to my current role. Uh, and I've also been a lifelong science fiction fan. And somehow all of those things contributed to my starting this strange new venture at ASU called the Center for Science and Imagination, which is an effort to change the relationship that we collectively have with the future. To invite people to feel a sense of agency and responsibility towards the future. That this is a challenge we all face and we're all making choices every day that are going to determine the world we live in. Even if we pretend that we're not thinking about it or if somebody else is going to take care of it or we don't have any power, we do. And we're making those choices all the time. So, uh, somehow between stories and, uh, journalism and, uh, all of the strange ways in which technology changes our relationship to time and narrative and one another. Uh, here I am. Dave : 00:07:34 Yeah. So this whole issue of our autonomy and not being just zombified by the things going on around us, being more deliberate about our decisions. 
I mean you're, you're part of the movement to reduce the global burden of zombification, it sounds like. Ed : 00:07:49 I think I'm, uh, yes, I'm, I'm anti-zombification, or maybe more optimistically pro de-zombification. [laughter] Athena: 00:07:59 Excellent. Excellent. So you kind of gave us a great overview of the, the areas that you study. Is there anything like at the moment that you're particularly excited about in your work? Ed : 00:08:10 Yeah, there are a few things that I'm really excited about. Uh, one of them is Frankenstein, which, uh, is a great lens for talking about some of this stuff. Uh, and Frankenstein has turned into the project that just won't die, mysteriously, because we started it six years ago and I feel like we've been celebrating the bicentennial of Frankenstein for at least four years now. So I'm hoping it would be, you know, maybe excited is not the right word anymore, but I still really think it's an important and wonderful lens. And the other thing that I'm really interested in is AI and the ways in which, I guess you might say, not AI in a purely technical sense, but AI, artificial intelligence as a cultural phenomenon and the ways that we deal with the idea of AI and machine intelligence. Dave : 00:08:58 Could you, could you tell me what the Frankenstein thing is that you were talking about? Ed : 00:09:05 Yes, so I co-direct the Frankenstein bicentennial project here at Arizona State University with a guy named Dave Guston and we have a few different tentacles to that project. That's right. For those of you listening at home, Athena's pointing to our beautiful annotated edition, a beautiful, it's a beautiful sounding book. It's Frankenstein: Annotated for Scientists, Engineers, and Creators of All Kinds. So we took Mary Shelley's novel and we added a bunch of essays and annotations from experts. I think you wrote one, right Athena? Athena : 00:09:44 Yeah, I wrote one about microchimerism and how we are actually made of the parts of many different bodies. So we're all a monster. Dave : 00:09:53 Cool! Ed : 00:09:53 Not, not exactly, uh, surprising that Athena would write an entry like that. [laughter] But uh, that that's uh, what makes this edition really awesome I think is that it brings in a lot of different perspectives from, uh, scientists, engineers, researchers in the social sciences, science fiction writers, artists, all sorts of people who wrote annotations or essays exploring how this book is not just a historical artifact, but actually is still really relevant and interesting. The central questions that the novel poses are really important for us right now as we are actually making all of these new kinds of life in synthetic biology in terms of AI robotics. And so that's, uh, that's one of the big pieces of our Frankenstein work. I'm also running a big grant, around Frankenstein and informal STEM education. So asking these same questions through a big network of science centers and museums around the country. And we created a really fun new alternate reality game where you can become an intern in Victoria Frankenstein's lab and play through your own little ethical dilemma. Uh, and, uh, we also took the whole book project and put it online for free. So you don't even have to buy this book, though of course you should. Um, you don't have to read it, but you should buy it. [laughter] But you don't have to buy it. You can find it, the whole thing for free, including actually a bunch of annotations that are not in the print edition at frankenbook.org. 
And that is supposed to be a collaborative reading and annotation project. So you can also add your own questions and comments and see some really cool animated videos we made. Athena : 00:11:35 But you should buy the book anyway because it's pretty, and it sounds really nice when you flip the pages. Dave : 00:11:39 It actually is a really nice looking book. If somebody did want to buy the book, would they, where would they look for it? Ed : 00:11:44 It's available at your favorite bookseller. Uh, it is, uh, published by MIT Press, so you could get it from them, you can get it from, uh, any your, your favorite, um, digital behemoth bookseller. [laughter] Whatever, whatever reanimates your corpse. Athena : 00:12:03 Cool. Awesome. So I want to ask about monsters and this whole issue of like autonomy and control and like what really is the notion of the monster and how does it intersect with autonomy and like, is it, is a monster about something that is developing autonomy that's now different from the creator or what's the relationship there? Ed: 00:12:27 That's a really good question. And I think there are a few ways to approach it. So one of them is we, we, we call things 'the monster' when we are, when we feel threatened and we don't know how to divide, how to handle this question of autonomy. So we call things monstrous when they are somehow troubling the boundary between what we recognize as familiar and what we recognize as dangerous. Uh, and if you think about, I'm not really an expert in this, but there's a rich history of legal categories of monsters in Europe. And, uh, in medieval and Renaissance Europe where there are different categories for people who might have mental disabilities or people who had physical [clears throat] uh, you know, physical challenges or just sort of, um, different, different bodies than sort of what was expected. Um, and so those are all efforts to, to, to put this, these, this uncomfortable feeling, this problem into a box, right? Say, "Oh, well we're going to give you a label. You are now officially a monster." And there are different categories of monsters and some monsters, you know, can vote and some monsters can't, things like that. Uh, some monsters can own property and some can't. So that's one level of it. And another level of monstrosity is when that, that sense of being threatened gets more extreme. And we say, "oh no, this is, you know, this, this is now officially an other, this isn't, this is an enemy. This is an identity that is threatening us in some way." Maybe it's an existential threat, a monster, like, um, you know, uh, the Terminator, right? Skynet, uh, or maybe it's a more intimate monster, like the creature in Frankenstein where it's not necessarily about an existential threat to the human species, at least not at first, but it's somehow threatening to our sense of identity and our sense of self. Athena : 00:14:25 Yeah. So for these monsters that kind of arise from us that, you know, we initiate, whether it's, you know, like the monster in Frankenstein or whether it's current issues that we kind of have in biotechnology where we're creating things that may be, you know, evolving or growing of their own accord, you know, where is the point where they become the point where they start being out of our control or not predictable or, you know, is that, is that part of what makes them monstrous? Ed : 00:15:00 I think it's maybe the unpredictability is actually even scarier to us than the notion of control.
It's when things start to surprise us. Because there are lots of creatures, entities that we create that we grant different levels of autonomy to. Um, and we don't think of them as monstrous until they start to surprise us in different ways. So if you think about a, most of the airplanes we see flying in the skies are flying themselves. The pilots are not controlling the airplane most of the time. Maybe for the most sensitive parts, you know, taking off and landing, the pilot might be manually controlling it. But we've automated a huge amount of that for safety reasons. We've automated most of the air airplane, uh, travel experience and nobody thinks of the airplane as a monster, right? Even when things malfunction. Because even when it's unpredictable or, uh, something goes wrong, we don't find that surprising in a sort of existential way. Um, I wouldn't be surprised if people describe totally independent autonomous vehicles as monsters at some point in the future. You know, it's one thing when the Uber runs over and kills a pedestrian, uh, which happened in Tempe, uh, about a year ago, year and a half ago. Um, I think that it was early enough that we, nobody was really accusing the car of monstrosity then. But that was also because we, there were still so many constraints on the vehicle and all of the, the discussion or most of the discussion I think that I saw at least was about the fact that the, the human in the car was not paying attention. Right? And I think there probably were some people who accused that human of being a monster for, for that lack of oversight. Um, but once we grant these systems more autonomy, then it's the, the, not just the autonomy but the ways in which it's used that we didn't expect, the unexpected, the unpredictable, the surprising that really pushes us into the space of monstrosity. Dave: 00:17:07 What about, oh, have you seen like Azim Shariff's research on the cars that have to choose whether to kill the driver or the passenger? Ed : 00:17:17 Is this the MIT Trolley Project? That might be something related. Dave : 00:17:21 I think it's at UBC but... Athena : 00:17:23 Yeah, basically adapted the Trolley Problem to um, having people make a bunch of decisions about which direction should the self-driving car take. So yeah, in fact, I'm hoping that we'll have them on the show later. [laughter] Dave : 00:17:38 Yeah. The basic idea is like they had to decide if you see a car coming down the road and say there's a pedestrian in the, um, in the walkway, and the car can either hit the wall or hit the pedestrian, what should it do? Assuming it'll kill the driver if it hits the wall or if you might have two pedestrians or you have three pedestrians and then they would ask people things like, well, what if you're the driver of the car? And it would change people's decision making in really interesting ways that gives the cars sort of a morality. Athena : 00:18:10 Yeah. I mean the whole idea of having self-driving cars means that if you're going to give them algorithms for what they should do, there actually has to be some consideration of the moral components to that. Right. So it's this whole new dilemma that we haven't had to face previously. Ed : 00:18:28 Yeah, I've been thinking about that a lot recently in terms of values. That, really what you end up doing is any system that you design has a set of values embedded into it.
Even if you pretend that there, there's nothing moral about this and it's purely a pragmatic, you know, problem, solution, engineering deal. Uh, then, you know that you're still gonna end up encoding values. It might just be that your values are utilitarianism or efficiency, uh, and you, you know, that's it. And that's what your car is going to choose, uh, so maybe your car in that situation is going to make the choice that uses the least amount of gas. Right? So, you know, that's, that's a value decision and you've encoded it. And if you don't think about the value structure that you're going to use in designing your system, then that just means that you haven't had a chance to reflect on it consciously. It's still, there's still going to be some kind of a value structure embedded into the thing you've made. Athena: 00:19:27 It's like a Zombie value system. Ed : 00:19:28 Yeah, that's right. Uh, and a lot of systems actually end up with that. You know a part of the, the, the Zombie value system I'd say is that fundamental sort of will to power or will to life that uh, a lot of, uh, autonomous systems have as a fundamental directive, this notion of self preservation and self empowerment, or growth. And so you could see that very easily being, again, maybe, maybe a, an unwitting engineering team might say, well, we're going to design this car, this car is expensive. Our customers are paying a lot of money for the car, so the car is going to make choices to protect itself and not run into stuff. And so it may be that the car, the car will choose to run into the one soft human being that will damage it less than, you know, the, the lamp post for example, right? So these are, these, these become values questions. Athena : 00:20:23 And it's good to know that that's what we're doing when we're designing algorithms. And maybe it's a, it's a sort of novel issue for us, right? That, in our evolutionary history, we haven't really had to think about how we're encoding values and morals into systems because we haven't had technological systems of sufficient complexity that they would be actually enacting sort of moral decisions on a regular basis. So we kind of have to get outside of ourselves a little bit and say, "Hey, this is a new thing. It's a thing that we have to be actively doing as we're setting up these, these algorithms." Ed : 00:21:05 Yeah. What's interesting is that we are now- is the speed at which these decisions take place and the, the level to which they're decentralized and autonomous, that a car is actually a really powerful thing. You know, it's a dangerous high speed projectile weapon, just like an airplane can be a very powerful, dangerous, high speed projectile weapon. And so, uh, one of the frames that is useful, a lot of people have pointed to air traffic as a model for a really expensive, potentially dangerous thing that we've made pretty safe through the ways that we regulate it. And you can see some of that in cars, cars are sort of a blend between the air airline model. You know, we have, um, airbags and we have seat belts and all this stuff. But in other ways it's, it's part of these other models where everybody should have the freedom to drive and you know, there, there's um, sort of a blend. You could imagine a lot of things we could do to make driving in cars safer than it is today. Um, and this is something, you know, I point out when people obsess about just the safety side of, of autonomous vehicles.
Well, if we were optimizing for safety, you know, Phoenix, Arizona, it has the highest rate of pedestrian fatalities in the country, uh, on a per capita basis. Athena : 00:22:24 Really? That's scary. Ed : 00:22:25 Yeah, a lot of people, a lot of, a lot of people die because of cars now. We obviously don't think safety is that important [laughter] because we've chosen to pay attention to all this other stuff instead. So, uh, I think there's some truth to the, to the argument that overall autonomous vehicles will probably make things safer. But there are a lot of really important decisions that have to happen along the way. Athena: 00:22:49 Are there other technologies that are sort of enacting these moral values without us quite realizing it or things that are, that you see coming up in the near future where we'll have to be really grappling with this? Ed : 00:23:04 I think these technologies are proliferating all over the place. So the one way in which we've wrestled with this for a long time is, is in the construction of social and legal entities. If you think about legal history, that's the place more than anywhere else, I think, where people have wrestled with this question of values and value based design, but because we recognize how hard it is and how imperfect human beings are, a legal, a trial for something like a murder case can, can take a really long time. Um, a lot of these decisions that are made on an, on an algorithmic level take place over the, over the course of microseconds. And so that shift in scales is really profound. And one of the consequences is that humans are no longer actively involved in the actual decision making process. We're only involved in the design process now. So to answer your question, uh, finance, uh, who gets loans or doesn't get loans, which resumes get forwarded to an actual human being for review for a new job application, uh, these very pragmatic, uh, a lot of companies are using algorithms to do investing for them either on a sort of a day trading, you know, uh, very tactical scale, but also sometimes in a strategic sense. There are a lot of different arenas in which we're now giving systems power, uh, that have, power that has moral consequences to it. Athena : 00:24:37 So we're like outsourcing parts of our brain to silicon now. Right? And we don't know how much of that is actually consistent with our intentions and our values and our goals versus how much of it we've just been a little bit lazy about setting up and now has its own, like, autonomous Zombie-like thing that we're still using as part of our brains, right? To help us make decisions. Ed: 00:25:01 Yeah, so sometimes I think it's sort of like this cognitive whitewashing, or maybe, like, money laundering is a better, better phrase for it. You say like, well we're going to outsource this problem, this, this difficult moral problem to this machine and the machine is, is flawless and rational. And so when the machine gives us the results back, they're objective and good, right? And we don't have to think about it anymore. So that's how you, the difficult problem of how do you evaluate loan applications, right? You say, oh well we're going to build this algorithm and we'll let the machine decide and then it's not our moral problem anymore. We're trying to outsource the, the ethical quandary. Athena: 00:25:36 You just put Spock in charge. [Ed agrees] He will make the right call using his uber-rationality. Ed : 00:25:43 But algorithms are not like Spock, right?
They're based on, they're designed by flawed humans and they are based on flawed data. And the objectivity that we ascribe to them is just a myth. It's a story that we tell ourselves to make ourselves feel better because it's easier and we are lazy. [laughter] Dave : 00:26:01 I mean, I often get paranoid when I, when I get the robot, like it's, if I have to call in and I want to talk to a person and I talk to the robot, it makes me hate the company almost instantly because I feel like I always think the robot is designed to stall me, to deflect. Like I actually think it's designed without my best interests in mind, you know? Uh, whereas a person, when I'm talking to a person, even if I know they have to follow a script, I always do feel like, well, at least there's somebody there who has a conscience. So I do- Athena : 00:26:37 And not a Zombie. [laughter] Dave : 00:26:39 Yeah! I do feel like I always, as, as a consumer, I always, whenever I talk to one of those robots it drives me, it drives me insane. I hate those things. Ed : 00:26:51 I've got to say, I feel sad in both situations because I feel like I'm talking to a Zombie in both situations. [laughter] And what makes me sad- I agree. I think that the company does not have your best interest at heart. Um, now there's a huge variation, right? Sometimes you talk to someone on the phone and the connection is poor and they have very limited information and they're working from a script and they, they often don't have a lot of power to actually do anything to help you. And sometimes they also don't have any real motivation to help you. Right. So they clearly do not care about your problem and, and sometimes those interactions feel extremely zombified. Right? And so that makes me sad because we've taken, you know, perfectly good, thoughtful human beings and turned them into zombies. And what does that say about this global, you know, economic system that we've constructed for ourselves? Um, the robot is also problematic, right? Because then you become the Zombie in a way because the robot only has a certain amount of variance or bandwidth in terms of the kinds of inputs that it can accept. And so you probably find yourself using special language or grammatical phrasings or even a special tone of voice or repeating yourself to try and make yourself understood by the robot. Right? So you become the Zombie. Athena : 00:28:11 It's so hard to speak naturally when you're talking to a robot. Dave : 00:28:17 Yeah. Oh well I've just never had one that successfully helped me, that's part of the problem. So I've, I've, if they were beneficial, I think I might not have this, but I'm like, I've become like one of those guys in Star Wars where it's like your droids aren't allowed in here. [laughter] Ed : 00:28:34 These aren't the droids you're looking for? Dave : 00:28:37 I was thinking of the guy in the Cantina where he's like "your droids can't come in here!" It's like, oh I get where he's coming from. [laughter] He's had to call the cable company one time too many. Ed : 00:28:48 I feel that way about all of the microphones and sensors proliferating. You know, all of these home, smart home devices are everywhere. [Athena agrees] I have, I'm very skeptical of why you would want to have all of these sensors in your house listening to you all the time. [Dave agrees] Athena : 00:29:05 What do they do with all of that data? Dave : 00:29:08 Give you better ads, ads that are more designed to manipulate you.
Ed : 00:29:14 I think there's a lot of truth to that. I think that often they are used to market to you. They are, they are surveilling you, theoretically with your knowledge, right? That you've accepted, you've agreed to this so that you can say, "okay, big five tech company, turn on the lights," instead of walking five feet across the room to turn them on for yourself. Now I do, I have to throw in a caveat here, which is that we, I'm having a wonderful time, you know, uh, blaming all of this thoughtless outsourcing of cognition to technological systems, but um, we should recognize how little we know about how our biological cognitive systems work and how bad we are at assessing things like risk and how bad we are at morality. Right? You know, so the trolley problem, as you said, why should it matter whether you're in the car or not or whether your aunt Edna is, is one of the passengers or not? Why the, that our ideals of moral behavior and the pragmatic reality when you actually say no, you are going to be one of these people, right? Uh, our ideals and our behaviors, there's a huge gulf between them. And so... Athena: 00:30:25 Not to mention all of the biases and heuristics just in decision making in general. You know, we're not good Bayesian statisticians, but any, you know, quantitative person will tell you, you should be using Bayesian statistics to do your analyses and make your decisions. Ed : 00:30:41 That's right. This is like economist saying like, well, we're all rational creatures, right? We're all rational actors in the marketplace. It turns out we're totally not. Dave : 00:30:51 Yeah. And more and more like, my sort of fear of AI is not that it will become self aware, but that it will be used to serve the sort of greedy sociopathic ends of people who could care less about me and the people I care about and that, you know, um, essentially sort of us being enslaved by robots that work for a few select maniacs essentially. Athena : 00:31:22 So that's your version of the Zombie apocalypse? We're the zombies? [laughter] Dave : 00:31:26 Yeah. I mean, that is my sort of AI fear. And I don't know if that's common among people with AI fears, but my, my AI fear is no longer just the robots themselves trying to serve themselves. It's sort of this realization that how people really don't always care about each other. And so this could give a few people who don't care about anyone else an immense amount of power. Ed : 00:31:51 I hate to break it to you, but I think we're already living in that future. [Dave agrees] I think the most interesting thing, the most compelling line I've heard about AI is that whenever this AI revolution comes, there will be no obvious change and we will have no idea. We'll just become like the gut bacteria living in the belly of this super organism, right? [laughter] We won't recognize what's going on. Everything's going to be fine, we're going to keep tweeting our tweets, you know. And so a corollary to that is that, uh, it's a mistake to focus on the hardware of AI and instead to think about this notion of autonomous decision making systems and structures. And if you think about it that way, we've been constructing different kinds of artificial intelligence for centuries. We call them things like corporations. 
And when AI, I think we're already at a point where this sort of machine based intelligence and what you might think of as corporate, corporate intelligence- Athena : 00:32:58 Can you tell us a little bit more about what you mean by corporations as artificial intelligence? Ed : 00:33:04 Well, if you think about, uh, an entity like Google or even something older like, um, the East India Company, these are structures that were designed by human beings, but that have a, a sense of will, a sense of purpose and maybe a will to power, will to life that extends beyond any individual human that's in them. Right? And that's actually the whole point is that the corporation is not supposed to be just a fiefdom of one human or the, the tool, the implement of, of one human. And the way we've, the way we think about corporations now with this public shareholder model, uh, that's very much the case. There's this, this philosophy that the corporation should have its own independent agency, uh, and through, uh, Citizens United, I think that was the Supreme Court case, we've now granted, we continue to grant more legal status and personhood to corporations, which recognizes I think this interesting independent category of agency that they have. Athena : 00:34:08 So if they're a, you know, independent autonomous agent, would you say that they are more like humans or more like, you know, monsters? Ed : 00:34:20 I think they're, they're not human. That's the, their, their interests, their forms of memory and agency are, are very inhuman. And so I think they are more like monsters if you want to, if that's the, if that's the bifurcation, right? Is human, human or monster, then I'd say they're closer to monsters. Athena : 00:34:39 Are they zombies then or are they a different kind of monster? Dave : 00:34:42 I mean I, oh, I could certainly see how there's parallels between this sort of corporate like grow, grow, grow, like get more money essentially with that sort of Zombie single track eat, eat, eat sort of philosophy. Right. And I think that seems like that may be part of what the problem is. The sort of lack of- like, do you feel like corporations can have a conscience? Ed : 00:35:13 That is a really good question. Athena : 00:35:15 Cause if they're a Zombie or not really comes down to like the mindlessness or conscience part of it. Dave : 00:35:21 Versus that single focus. [Athena agrees] Ed : 00:35:24 I don't know. I suppose you could make an evolutionary argument that we are evolving more complex forms of corporation and that maybe AI is the moment where corporations will develop a form of consciousness. Corporations have nervous systems, they have email and they have streams of information flowing back and forth. [laughter] They have antibodies, they have pseudopods, they have um, reproductive organs. Athena : 00:35:52 What are their antibodies? Ed : 00:35:53 Uh, like the security people and patent law and you know, um, lawyers. Dave : 00:36:02 And what are their reproductive organs? Ed : 00:36:04 Um, IPOs. And um, I think there are different kinds, you know, cause I think advertising is like pollen. Um, you know, so, so I think the interesting question is whether corporations have something beyond what you might think of as the like, simpler life form, "oh, it's just about reproduction, self preservation," you know, preserving the genetic heritage or if it's about some higher form of mission. Um, I feel like we're getting into a quasi-mystical discussion of information now, and life.
[laughter] Life is the organization of information. Um, but uh, yeah, I mean I don't know that, I think that zombies to me, zombies exist at that level of, you know, the sort of will to, will to life and not really anything else. And I think it's an interesting open question whether corporations exist solely to metastasize or whether they have, you know, they all have, um, like mission statements. Right? And to what extent the mission statement is actually, uh, a real objective. Athena : 00:37:15 What about the corporations that want your brains? That want to take over your attention, manipulate your behavior? Ed : 00:37:24 They do. Well, that's why I'm wondering if they're not the zombies, if they're the Zombie factories. Athena : 00:37:29 The zombification agents? Ed : 00:37:30 Yeah, yeah. The zombifiers. Dave: 00:37:33 I mean it is, and it also does have parallels because it's like when the Zombie bites you and then you become a Zombie. So the corporations, the sort of tech companies, I think their main purpose is to sort of get people's attention and then adjust their behavior. Athena : 00:37:48 They make you want to re-post it to your social network or whatever. [Dave agrees] "This new suitcase is so exciting, I can plug in my USB directly to it!" [laughter] Ed : 00:37:59 It's kind of a, it's like time share zombification, right? Because they- or I, it's a, it's a, it's a, it's a viral model because you can't kill the host. Uh, and you don't want the host, the host to lose their job because then you can't harvest all of their surplus income. [laughter] Dave : 00:38:20 Interesting. [laughter] Athena : 00:38:23 All right. So you want to keep your host alive as long as possible and disseminating. [Ed agrees] And then while you're extracting the surplus that you can while keeping them alive. Ed : 00:38:35 Right. Oh, and, but then again, this gets into this fascinating question is, is it more like a virus or is it more like, um, like ants, uh, farming aphids, you know, where you, where it's like a slightly more long-term thinking structured organization model. Um, because then you can get, you know, the kids involved too, you know, multiple generations of people. [Athena and Dave agree] Athena : 00:38:59 Yeah. It's more like long term cultivation of our attention and our future children's attention and their children's attention, zombifying enough of the brain so that we can still keep functioning in normal society, while... Dave : 00:39:19 I mean that's an interesting sort of, cause I've noticed there's this sort of business model of like it's like Star Wars and Nintendo and Lego. All have this thing where they've tapped into nostalgia in this way that's like managed to like, keep their business going among our generation. [Athena agrees] Which, I think may be a thing that corporations are probably realizing that this sort of like generational model probably is pretty effective. Athena : 00:39:46 Well cause then it's like bonding time with your kids cause it's Nintendo and like I remember playing Super Mario Brothers when, you know I was seven. [Dave agrees] So it's quality family time! Dave : 00:40:01 But that, that does seem like that requires instilling pleasant memories. You know, cause I think there were probably a lot of competitors for those guys that nobody remembers or they don't remember fondly because they might've just, it's all this crap and so...
Ed : 00:40:20 This is, you know, the, the production of memory and the production of history is just as important as, um, shaping the future and defining what, you know, what the choices are for your next car. It's also about what, what is official remembered history from the past. And that's why we have such tangled and intimate relationships with these companies because they're storing all of our photos and all of our messages and a lot of this, this, uh, outsourced nostalgia and very intimate personal history that we've entrusted to the cloud. And so that's, uh, those, those are high stakes, you know, and people talk about if they lose access to that for one reason or another, how, uh, difficult that can be, how much it feels like losing a part of yourself. Dave: 00:41:14 Have you guys ever tried to quit Facebook? Did we talk about this? Athena : 00:41:17 I don't think we've talked about this, so I deleted it from my phone. But I sometimes go on it on my computer. Dave : 00:41:24 So if you try to deactivate your account, they really, they're like, they show you all these pictures of like you and your friends and they're like, you know, "Bill will really miss you!" And like, it's, it's really interesting to see the process. Athena : 00:41:39 Did you? Did you deactivate? Dave : 00:41:41 I did, but then they somehow got me back. So I've now been reactivated but... Athena : 00:41:45 You've been reanimated? [laughter] Dave : 00:41:50 But it is really interesting how they sort of like, that was the first time when I saw I was like, oh wow. They really are paying attention. Um, and uh, they want to live more than anything. They do not want you to stop. Okay. So, uh, but I guess that's probably, I'll bet you there's a lot of companies like that, I don't know. Ed : 00:42:13 I think there's a, a power law though, and very few companies have a tremendous amount of your data and power over you because of these long term relationships. Uh, and in part because the power law is a network law all, you know, Bill, it's going to be harder to stay in touch with Bill if you're not on Facebook. And because everybody, for better or for worse, mostly for worse is still on Facebook. Right? And so it's harder to leave because of that. Uh, one of the essays in our Frankenstein book is by Cory Doctorow, uh, science fiction writer and activist, and he is a self described Facebook Vegan and he's not on Facebook, but he says there's a high social cost and he doesn't know about things that are going on in his daughter's school because he's not on there. Athena : 00:42:57 What does Facebook Vegan mean? Ed : 00:42:59 He doesn't, that he doesn't do Facebook or Instagram or any of the Facebook things. He just, yeah, he's not on them at all. Athena : 00:43:04 No, it's a, it's a cost. I mean, people use it as a default channel of communication. Dave : 00:43:12 Yeah. Actually that's part of why I went back because I would miss out on things like, like board game group, like get together sort of things. [Athena agrees] Ed : 00:43:19 The really important stuff. [laughter] Athena : 00:43:23 But yeah, kids' school stuff. I mean, like my son's class, like they went on a class trip and they were posting on Facebook and that was the only way for you to see what was going on on the class trip. [Dave agrees] So it's really, you know, it's become part of maybe our collective nervous system, right? It's a channel where communication is happening and if you're not tapped in, then you're out of the loop. 
So, so what are the alternatives to this? Like sort of looking to the future, even if it's speculating about what we might have? Like is there a way, either as individuals or as a society to... Dave : 00:44:06 That is like, could we survive without like, I mean, I suppose we could survive without Facebook but when you were talking about just all of the sort of corporations. I feel like that's so all-consuming. It's like, could somebody really- Athena : 00:44:21 Well and they give us a lot of things that we really want. [Dave and Ed agree] Right away. Ed : 00:44:27 You know, we're, we're the ones who voted for this, right? With our wallets and our choices. So, uh, yeah, it's, it's not about, um, some, you know, evil cabal of people who've taken over the world. We've all bought into this environment that we live in. So I do think there are alternatives, though. Uh, there's nothing inherent in the technical construction of the systems or the way the Internet works that says that all of these platforms have to funnel huge amounts of money and attention to a tiny elite in Silicon Valley. You know, it doesn't have to be organized that way. It could be organized in different ways. And the history of the Internet has been this transition from an extremely decentralized, non-economic communications network into something that's becoming increasingly corporate, increasingly, uh, financially motivated, uh, with more gatekeepers and more, uh, you know, uh, uh, cheater based access. Athena : 00:45:30 Some of the irony of that is that initially, right, the Internet was supposed to be not part of sort of the market driven thing. So it was like everything should be free. But then ironically, that made it so that in order to offer these free things, they had to sell ads, which then started this whole race for our attention, which then started this whole harvesting of our data in order to better manipulate our attention. So somehow we went from like being outside of the economic or outside of a really market driven model for the Internet and information sharing to one that is like hyper driven by economic interests. Ed : 00:46:12 Yeah. And I think that is, uh, on a, on a fundamental level, what's really challenging about that is the way it hooks into deeper parts of cognition. We are now talking about the mindshare of lots of individual people and platforms like Amazon's Mechanical Turk are literally systems that are using humans as processing units to perform tasks and making it feel like you're interacting with a server bank, except you're interacting with thousands of people who are doing some task because it's cheaper and more efficient to get these people to do it than it would be to get a computer to do it. So we're, that's a great example of zombification. Athena : 00:46:54 And, Oh yeah. And that brings up maybe an even more important question, which is, you know, what should we be doing with all this human brain power, right? Like all this power that's being used, um, to do all of these sort of menial tasks or is being used, um, you know, to, uh, because there are advertising companies that are taking up the bandwidth, so that we'll buy stuff. Like what's the opportunity cost of all this human brain power being used in that way? Ed: 00:47:22 I mean, and it's vast. Uh, I think that it's a vast opportunity cost. You know, I remember when Angry Birds was a thing several years ago, the staggering billions of hours that people were pouring into Angry Birds.
Uh, and you can imagine a lot of things you could do with that. I mean, on a- imagine if there was a social network like Facebook that was just organized around building empathy and human connection, which is allegedly part of Facebook's model, except that their business model is not empathy and human connection, it's selling ads and getting, collecting your data. But what if that was the main point, you know? Uh, and maybe you had to pay a subscription fee or maybe there was some other way in which it got paid for. Um, then what would that even look like? You know, would that mean people would go out and help each other out with neighborhood projects or you would spend more time engaging with one another? Would it reduce the amount of siloed sort of spirals of extremism in political discourse that we see in this country right now? Uh, one of the ways you could build up, build empathy is having a constructive conversation with somebody with differing political beliefs. So, you know, it all sounds kind of utopian right now, but, uh, the internet was supposed to be utopia and you know, it was, the way it was designed and the thing we bought into was also supposed to be this utopian space where anything is possible and anything still is possible. The structures, uh, of the, the architecture that we've built now is not the way it has to be at all. Athena: 00:48:54 So if you could design a new architecture, what would it be? Would it be this sort of alternative to Facebook where the business model is somehow actually about empathy and connection? [laughter] Ed : 00:49:06 Yeah, I would think about empathy, cooperation. Um, and so, so for me, a big keyword is imagination. So how could we use a platform like this to cultivate imagination? And I, I bring that up because I think empathy is one form of imagination and is about being able to conceive of the lives of other people and imagine how they are feeling or what they are experiencing. Because we're all individual islands of consciousness, right? And one of the reasons that we have such a collective hunger for the Internet is that we crave that connection in that sense. Athena : 00:49:39 We want brains! [laughter] Ed : 00:49:39 We want brains! Totally. And so if we started with those ideas as a cornerstone, it would be really interesting to see what kinds of- because you see people using systems like Facebook for collective action, you know, whether it's the Arab Spring and Tahrir Square or maybe in a more scary way, uh, the sort of, uh, I don't know if pogrom is the right word, these, these, um, organized systemic assaults against different, uh, minority communities in places like Myanmar. Um, you know, people organize themselves in ways that are not driven primarily by economics to do things that they feel are important. Um, it's just that a lot of the time we, we don't seem to be, the system is not orchestrated towards surfacing our better virtues. Athena : 00:50:33 Yeah. So, so we need like an alternative, maybe we call it "love brains?" [laughter] Would you guys join? Ed: 00:50:42 [Dave and Ed agree] Athena : 00:50:42 All right! Ed : 00:50:43 That sounds, that sounds really warm and inviting. I definitely, I definitely want my kids on "love brains." Dave : 00:50:50 But I guess it would need to be subscription based, right? Because otherwise... Athena : 00:50:55 Yeah. Are there other, are there other alternatives? You know, so we could have, you know, ads or we could pay money or are there other ways that we could set it up?
Ed : 00:51:03 I think you should do it, I think you should do it through generosity. I think you should be able to, you could pay money or you could do good deeds and that could be your participation model. Athena : 00:51:14 Hmm, so how would you, like put that together? Ed : 00:51:17 Oh, you could use the honor system, you know, uh, or, uh, for small stuff. And then for bigger stuff, maybe other community members have to thank you because you helped them move into their new apartment or whatever. Athena : 00:51:30 Hmm. So somehow figure out how to leverage people's cooperation to keep, but how are we going to pay for the technology side of it? Ed : 00:51:43 Well, I don't know. You could, uh, you could, you could have it be distributed and actually use the technology that people already own. Our smartphones these days are pretty sophisticated processors. Um, you could have something that runs like a desktop client. Do you remember the protein folding game? This is a screensaver that you could download that would run as a screensaver, but it would also contribute some of your processing time on your computer towards these protein folding problems, these big computational problems. So there are a lot of models out there for using distributed computation to do some of this work. Um, or maybe you could, yeah, you could, you could fund it in other ways too. Dave: 00:52:24 I mean Wikipedia is a pretty good, I think, model of a decentralized, uh, sort of volunteer based thing that's really grown to be- now I guess it's sort of being taken over by some of the content marketing zombies of the internet. Athena : 00:52:43 But, you know, if we think of it from first principles, a system that is actually based on connecting people and providing a platform for people to interact, that's really a public good. And traditionally we fund things that are public goods through taxation or you know, some higher level institutional funding. So I mean is there any possibility of this actually being something that would be funded possibly even at a government level through maybe, you know, grants that would allow the development of systems that would be free to users or um, maybe, I dunno, highly discounted or there'd be a sliding scale or something like that? Ed : 00:53:30 I mean, I think it's worth a shot, this is starting to sound like a grant proposal. [laughter] Athena: 00:53:34 And you're on it! Dave : 00:53:40 It is interesting that there's not something like that that sort of brings people, I mean, I guess Facebook is sort of trying to monopolize that, but uh- Ed : 00:53:48 Yeah, because they want the, they in a very straightforward business sense, see anyone who's not using Facebook and using something else as a problem. Right? So of course they're just like, they're going to play every card they can to get you to not leave Facebook. [Dave agrees] I think you would probably see other kinds of resistance to something like this. Athena : 00:54:09 Well, I think we've had a great chat about the potential positive future, but I do want to give a chance for you to tell us anything we should be watching out for or worried about in terms of technology and algorithms and zombification in the, in the near future or even the far future that maybe we want to get ahead of. Ed: 00:54:30 Well, I think we've been talking about both sides of it. I think that there's, there's a lot to be worried about in the business as usual approach or the unreflective approach. 
So I wrote this book called What Algorithms Want and one of my- Athena : 00:54:46 Buy the book! Ed : 00:54:47 It doesn't sound quite as nice as the Frankenstein book, but it's, it's uh, it's very nice. [laughter] Um, and, uh, one of my core arguments there is that it's not necessarily bad to be making these agreements with these companies. You know, I get a lot out of my relationship with Google, but you need to start by understanding the contract that you're agreeing to and what it is that you're trading. The services are not free. You're giving up a bunch of stuff in exchange for these things that you're getting. And I think most people are not really aware of that and they don't have any sense of, even if they know the terms, you know, okay, so they're going to take my data, but what does that really mean? What are they going to do with it? And so having that sense of awareness about what some of these agreements are requires a foundational layer of algorithmic literacy. And I don't think that everybody needs to know how to program, but you need to have some understanding of how these systems work and what some of the moral hazards are. Like some of the stuff we've been talking about. So I think that's really important. And right now I feel like we're not really moving in that direction. We keep sliding in the other direction, which is this sort of a captive model where your, you know, more, you're outsourcing more and more of your life to one of a small set of companies and you really don't know anymore what you've given up. Right? Or- Athena : 00:56:17 So you're like a Zombie, but you don't know it. Ed: 00:56:19 Yeah! You're an unwitting Zombie. Yeah. Athena: 00:56:21 So how do we wrest back our autonomy? Ed : 00:56:25 Well, I think it's, it's, uh, by starting to make these choices, it's by, um, it's by complaining and being participatory in some of the systems that we're using. Uh, but sometimes that's not all that effective and it's, it's more effective to build something new that's different. Um, and it's also, I think, effective to, uh, start communicating about this stuff. You know, building local conversation and talking to your friends and family and talking through the things you're worried about. Um, a lot of what we're talking about here is this notion of norms and what are the norms around our behavior and our participation in different digital services. So if we change the culture of what, what is normative and what is okay or not okay, that's how you start to shift some of these conversations. Um, and then I think another crucial piece of this, which is really connected to that, is that the people who are building most of these systems do not represent the global population of users. And we need more diversity in the, in the room when we're building these systems because a huge number of problems emerge because some entitled white bros in Silicon Valley made a bunch of design choices, um, often very well-intentioned and they thought they were saving the world. Um, sometimes just being kind of jerks. But in both cases, you know, probably not thinking it through and not considering a number of use cases that if there had been a person of color or somebody from a different socioeconomic background, etcetera, etcetera, uh, that whole conversation might've turned out differently before the really tragic mistake occurred.
Dave : 00:58:06 Like what sort of like, I'm just trying to think through for if anybody who's listening works at one of those companies, like what sort of practical steps, I guess... cause if they're a hiring manager, they could change who they hire, but if they're working in some other capacity, is there anything else they could do to sort of help make their company that they work for more sort of beneficial for society? Ed: 00:58:33 Yeah. So I'd say there are, um, there are a couple of things. So first of all, hiring is really important and we've been, you know, collectively having this conversation about this diversity question for a long time now and not as much has changed as you might hope at a lot of these companies. So that continues to need to happen. And you don't have to be a hiring manager to have a voice and say, "hey, you know, I look around this room and almost everybody here is a white male," you know, to point that out, um, to start having those conversations. Um, then there's- Athena : 00:59:10 It's actually true in this room, right now. Dave: 00:59:12 Yeah, I was just thinking about almost everybody here. Ed : 00:59:14 Yeah, almost everyone here is a white male. Yeah. So, um, and so I think that's, you know, it's a challenge that we face in the things we do in the projects that we do. We try really hard to think about diversity. Um, and it's hard, especially when you're engaging in fields like technology, you know, because there's already this imbalance and there are a lot of people who look like me in the field, you know, it's harder to, and then, and the people who are, say, People of Color, women in technology or other related fields like that, they're, they may be more oversubscribed. They may be more likely to say no to your invitation because they already have other things going on. Uh, and so you, Athena : 00:59:56 They're already on every committee that needs a token person of their type. Ed : 01:00:01 Exactly. Right, said Athena, not at all referring to her own self. [laughter] Uh, so, uh, so you, you, it, it takes genuine effort and work to, uh, to address these questions of equity and diversity. And you can't actually, if you, if you approach it as the token problem then you've already, you're already doing it wrong, you know, because if you're like, okay, so I'm going to invite my friends and the people I think are cool and then I'm going to save this one spot on my panel for the Unicorn who is the female person of color who's also like got this very particular unique academic specialty so that they can check all of the boxes for me. And that, you know, that those, those unicorns, unicorns are hard to find. You know? Athena : 01:00:46 And you're already setting it up? Ed : 01:00:47 You're setting it up for failure. Athena : 01:00:47 So there's the ingroup and the outgroup, and that's not what you want to do. You want to create a group that is inherently more inclusive. Ed : 01:00:55 Yeah. So, so to get back to your question, um, there's also a kind of, um, diversity, which is just diversity of thought that one can practice as an individual, which is to start to read, listen to, talk to people who are not like you. And don't just start with your friends and your ingroup, but actually reach out to somebody who might have a completely different perspective on something. Um, and if you're designing a product, you know, you don't necessarily have to formally hire someone to get feedback from them.
You could talk to somebody if you're trying to design something and say, like, you know, "hey, I'm wrestling with this issue and I'm trying to get a diversity of perspectives on it." Um, and I suspect that if more people did that, you know, things would be better. So there's the individual level, there's the sort of organization-wide level, and then there's the sort of smaller group norm setting, where you can try to shift behavior by performing the behavior, be the change you want to see in the world. Right? And so performing different kinds of behavioral changes yourself. Um, which is a positive thing. You know, I think it's more uncomfortable to call out other people if they're acting in ways that might be, you know, um, making others feel unwelcome or perpetuating stereotypes or whatever. Um, there's that, you know, the sort of trying to be an enforcer, which is, I think, harder and more uncomfortable. Um, but I think there's a lot to be said for just, you know, trying to be the best moral person you can, uh, recognizing that we all, you know, have a lot on our plates. But, um, yeah, valuing that, right? [Athena agrees] If you start by valuing that, um, you'll make mistakes. We all make mistakes, but you know, yeah, that's a good place to start. Athena : 01:02:39 Yeah. And if you're just surrounding yourself with people who are really similar to you and just repeat the same stuff back to you, then you're like a collective Zombie, right? Cause it's just the same thing echoing around and you're not really, you know, taking in new information and operating on it. Which, I don't know, from the perspective of, like, what kind of algorithm is not a Zombie? I mean, maybe it's the one that is bringing in new information and recalibrating based on that information and changing decision rules. Dave : 01:03:10 So I'm gonna ask this sort of question cause I'm, I'm just trying to figure out how, because we were talking about this sort of problem of corporations being these sort of conscious-less, growth-minded entities, right? And then the, the solution we're talking about is diversification of the workplace, which seems like it's a valid concern, but I'm trying to understand if that, if diversifying the workplace alone would solve that issue of the corporation being- Athena : 01:03:52 Yeah, and to the extent that the algorithms are just sort of, like, running amok on, by themselves, right? I mean, they're evolving. So there is this kind of runaway fast Zombie kind of a process going on there. [laughter] Ed : 01:04:05 I think there's a stakeholder question. One of the benefits of more diversity is you have more, you have voices from a broader, um, spectrum or cross section of society. And one of the challenges is we have a lot of small, very powerful groups of people designing things that they envision as being the same for everyone across the planet. Right? These sort of one-size-fits-all solutions. And so if you have more diversity in the beginning, I think you help with some of that. Um, I think that, uh, I do think that there is a possibility for growth in the corporate space, too. You know, a lot of corporations are now taking sustainability more seriously. Um, and that's become a part of this sort of corporate consciousness in a way that it wasn't before. Sometimes it seems like it's just, you know, greenwashing, sometimes it seems more deeply real.
Um, but I think that, you know, these systems can change too, because ultimately they are made up primarily of people, and if the people change, then the system can change. Athena : 01:05:09 Can I push you on that for a second? I mean, how much are they made of people versus are they made of algorithms? Ed: 01:05:17 Well, I guess, still, up to now, I'd say that for most of the time there are still people. Um, there are people sometimes hiding behind the algorithm or wielding the algorithm as a, as a cover or a shield for their decisions. Um, there are a lot of algorithms, and there are a lot of people who have given algorithms more power, maybe, than they should. Um, but there are still people always structuring things. There are people, you know, hiding, there are people under the, inside the black box, right? There are a lot of people hiding and they're making choices. Sometimes they're sort of trapped in there, and something that has been marketed to you as this algorithmic, magical computer system actually has a bunch of poorly paid people somewhere doing all of this work. Uh, there's a really shocking article that came across my feed recently, thanks to some algorithm, about these, uh, traumatized moderators for Facebook working for some company here in Phoenix who spend their whole day watching all of this stuff that Facebook is going to ban: pornography and people getting murdered and everything that violates community guidelines. Can you imagine what a horrible job that would be? [Athena agrees] Um, yeah. And so, you know, that's happening. These are people hiding inside the black box, right? Athena: 01:06:30 Huh. All right, thank you so much for- Ed, thank you so much for being here with us today. Ed : 01:06:36 Thank you so much for having me. This was tremendous fun. OUTRO MUSIC: 01:06:40 [Psychological by Lemi] Athena : 01:06:51 Thank you to the Department of Psychology and to ASU in general for supporting Zombified. Also the Interdisciplinary Cooperation Initiative is a big part of why Zombified is possible, so thank you to the President's Office for your support. Hi Michael Crow! And thank you to the Lincoln Center for Applied Ethics for always supporting everything zombie, and to the Aktipis Lab, otherwise known as the Cooperation and Conflict Lab, for all of your help with this podcast. I appreciate you guys so much and I love your brains. Thank you to the Zombie Apocalypse Medicine Alliance for supporting this podcast. If you are looking for us on social media, you can find us on Twitter and Instagram at Zombified Pod, and on Patreon we are Zombified. Our website is zombified.org. Thanks to the amazing Tal Rom who does our sound and to Neil Smith who does all of our illustrations for the podcast; their brains help make this podcast possible and help make it what it is. Athena: 01:09:09 So thank you guys so much and I am so grateful just in general for everybody's generosity with their brains. Um, it's amazing to have the privilege and opportunity to work on this podcast with so many amazing people and amazing guests. And I had a really, really fun time sharing brains with Ed Finn and Dave this episode. Um, and so I wanted to share a little bit more of my brain, like I do at the end of every episode. So I share something after the credits, whether it's like a story or a connection to my work or some sort of speculation. Um, and so today I want to talk a little bit about brains and algorithms.
So I think there's this really interesting kind of feedback loop where it's like, algorithms want brains, but then brains also want algorithms. It's like the computational power is wanting more computational power. Athena: 01:10:20 Brains want help with, you know, doing the computation, and then the algorithms want to actually use our brains too. So it's all computation, and in a way, you know, we could kind of think about algorithms as a sort of brain too, cause, you know, they are, they're processing information. So, you know, maybe it's all brains wanting brains. So algorithms are, are just another kind of brain regardless of what they're made of. The fact that they're, you know, not made of neurons and wet stuff, maybe that doesn't matter so much, at least from the perspective of information processing. All right, so let's get back to brains wanting brains, algorithms wanting brains, brains wanting algorithms. Um, I think some of this ties in really interestingly with the episode with Mark Flinn a few episodes ago, cause we talked about how humans have kind of evolved to need each other's brains and to want each other's brains and to use each other's brains. Athena: 01:11:30 Uh, and so I think we really are a, you know, species that just does that, that like uses computational power that is not just sitting inside our skulls, but uses computational power outside of our skulls as well, whether it's other brains or, um, machines of various sorts that can take in information and process it. So here's the thing though: not all brains and not all algorithms actually have the same interests, right? So when it comes to figuring out how to share your brains effectively, you want to have good boundaries with the brains and with the algorithms that don't share your interests, or at least don't share the particular goals you're trying to accomplish at that moment. So, you know, it might be that some brains and algorithms have aligned interests with regard to accomplishing a certain goal. Like, you know, for example, maybe you want to, uh, you know, find the best and most absorbent paper towel, and, um, an algorithm, um, on, you know, some sort of big tech store, um, also wants to help you find the most absorbent paper towel so that you can buy it. Athena: 01:12:56 So if you have aligned interests, then you can kind of outsource some of that cognition, right? But to the extent that interests are not aligned, then you open yourself up to exploitation by outsourcing that information processing. So I think this kind of brings up a really interesting and general point: we need to be able to simultaneously share our brains and get the benefits of sharing those brains, but be alert to the situations where our interests are not aligned, and then have boundaries with those brains and algorithms where those interests are not fully aligned, so that we don't get exploited. So with all of that, I want to thank you for letting yourself be zombified by this podcast for the last hour. Thanks for listening to Zombified, your source for fresh brains. OUTRO MUSIC: 01:14:07 [Psychological by Lemi]