PRE-ROLL: Businesses all over the world right now are trying to reinvent how they connect with the world. Whether a business is delivering packages, treating patients, or running a global customer support center, their customers need them to invent new ways to stay connected. Twilio is the platform that Fortune 500 companies and startups alike trust to build seamless communications experiences with phone calls, text messages, video calls, and more. Really, the only limit becomes your developers' imaginations. It's time to build. Visit twilio.com to learn more. JESSICA: Hello, good morning! Welcome to Episode 215 of Greater Than Code. I'm Jessica Kerr and I'm happy to be here with debuting panelist, Mando Escamilla! MANDO: Hi, Jessica, thanks, and I'm here with my friend, Rein Henrichs. REIN: Hi, Mando. I've actually known Mando for over a decade so it's really cool to be doing this with him. And also, with my friend Avdi, who I've also known for quite a while. AVDI: Hi, Rein, and I am very thrilled to be here with Abeba Birhane. Abeba Birhane is currently a Ph.D. candidate in cognitive science at University College Dublin in the School of Computer Science. She studies the dynamic and reciprocal relationships between emerging technologies, personhood, and society. Specifically, she explores how ubiquitous technologies, which are interwoven into our personal, social, political, and economic spheres, are shaping what it means to be a person. I just want to say, on a personal level, it is rare that I get the privilege of introducing someone who has legit changed my life. Abeba's article on "a person is a person through other persons" in Aeon completely baked my noodle and has changed how I think about identity and discovering oneself, and so I am very, very happy to be here talking to you. ABEBA: Oh my God, I have no words, Avdi. Thank you so much for such an amazing introduction.
The admiration is mutual and I love your perspective, I love your work, and I love how you look at things. AVDI: Oh, thank you so much. ABEBA: Yeah, so it's totally mutual. Thank you. JESSICA: For the benefit of the audience, Avdi, you caught the phrase from Abeba's article, "a person is a person through other persons." We'll provide a link to the article in the show notes. For now, Abeba, can you summarize it for us? ABEBA: That was written in 2017 and it took a while to write because I'm used to writing academic stuff, and communicating such complex ideas to people outside a narrow field proved somewhat difficult. The editor helped a lot and it was published 3 years ago, but it still remains one of my favorite works. Usually, I'm the type of person who would look at stuff I did a few years ago and feel a little embarrassed, or like, oh, I have moved beyond that, but I liked that article. A quick summary of that article is a kind of comparison of how what is known as the Cartesian, individualist way of thinking is so dominant in how we think about what cognition is, what personhood is. But also, even outside academia, in various spheres, you will see the downstream effects of Cartesian thinking, where people really think in individualistic terms. So that article was a kind of pushback against that individualistic notion, and the core argument is that even the very idea of thinking is a very communal, very interactional, and very relational endeavor. Even, say, for example, when you send a tweet, you are either responding to someone or you are sending it in anticipation of a response, so there is always an element of others' involvement. So in any area of life, any sphere of life you look at, we become who we are through our constant and dynamic exchanges, dialogues, and interactions with those around us, but also even with the physical infrastructure, the physical environment.
It, in a sense, plays a significant role in shaping the kind of person we continue to be, or that we are. I don't know if that describes the idea perfectly, because "we have become" has that connotation that we are finished. "Becoming" is much more suiting, I think, because we are constantly becoming, we are constantly changing, constantly adapting as we move through interactions and journeys with others. I give various examples, and actually, one of the really striking things is that when people read it, psychologists especially are very sensitive and say, "That's a kind of strawman argument, a caricature argument. Nobody commits to that Cartesian notion of the isolated self anymore." But you go in and look at how people actually study various aspects of cognition and you will find that this individualistic notion has actually seeped into people's research. One example is memory research, where the canonical, traditional way to study memory is to take someone out of their environment, place them in a lab, give them various words or numbers to memorize and regurgitate, and then ask them to recall them. But also, as a really striking point, I bring in the example of solitary confinement. Going back in history, at the conception of solitary confinement, the idea at the time was that it's reflection, logical coherence, and the ability to think, all the higher kind of faculties, that matter. So the thinking was that if someone has committed a crime, then the solution is to put them in isolation, away from social distractions, away from the distractions of what's happening around them, and in isolation they will reflect, they will repent their crimes, and they will come out cleansed, as if they are a new person; they will emerge anew. So that kind of thinking within the prison system, especially in solitary confinement at its conception, really shows you how fundamental Cartesian thinking was.
But of course, the research has shown that when you put people in solitary confinement, instead of reflecting and thinking and going through logical steps and finding their true self, what happens is that the longer people remain in solitary confinement and isolation from others, the more they suffer from physical, psychological, and mental problems. They suffer from delusions, they lose their sense of space and time, and in some extreme cases, they even lose the ability to differentiate where their body ends and other people begin. So eventually, they gradually lose their sense of self instead of finding themselves. It's really unfortunate, but a really stark way of illustrating how people in isolation are no longer people. JESSICA: Oh, wow. Isolating someone from all other people. If we hypothesized, with a real grasp of people finding themselves through other people, people being themselves through other people, we could not perform that experiment, because it would be unethical to isolate a person in order to see what happens when they're separated from other people. But it happens that our individualistic culture has performed that experiment for us and indeed, they lose their sense of self. ABEBA: Yes. Any scientific experiment like that where you isolate people is absolutely unethical, but the prison complex does it for you, unfortunately, which is really sad and horrifying. Authorities tend to treat solitary confinement as a kind of punishment, not as something that has a really deep mental and psychological impact, not as something that leads people into losing their sense of self. Which is unfortunate, because that's how solitary confinement should be viewed: as a really horrible way for someone to die while they're still alive.
MANDO: My philosophy training went as far as reading Camus and getting to the point where Camus told me that the only thing I had to decide first off was whether or not I wanted to commit suicide, so I just closed the book and started playing with computers instead. [laughs] So when you talk about Descartes and the individualistic perspective, I feel like I'm gathering it via just the context clues of what you're saying that maybe this Cartesian perspective is one of a strong individualistic existence, that we exist kind of, not maybe isolated, but like, you are you and I am me and we're separate. Is that right? Am I getting that right? Or is there more to it? ABEBA: Yeah, I think so. I've only been to the United States once, but my impression is that the culture within the US is, generally speaking, very much informed by that individualistic notion of: if you work hard enough, you can achieve, you can get whatever you want. Or if you are, say, for example, obese or overweight, then it's your fault. There is that element of thinking about the individual as if they exist on a no man's land, or I should say, on an island. It's as if we are kind of isolated entities that are not impacted by what's going on around us, or by our historical inheritances, or by what is socially perceived as the norm, or by various social and structural power inequalities. So that individualistic notion tends to put all these factors aside as if they don't have that much impact; even if their impact is acknowledged, it's seen as a secondary factor rather than a central factor that really plays a huge role in determining what you achieve, who you become, or what kind of things you end up doing. But these things, which are usually seen as contaminating factors within the Cartesian framework, I think are really central, and not just for me. Within the embodied cognitive science perspective, a lot of Black feminist scholars have also highlighted that these factors really are important.
If you are proposing any understanding of cognition, or of the person, scrutinizing and examining those factors often seen as contaminants in the Cartesian perspective is really central. MANDO: Correct me if I'm wrong, but am I remembering right that you had some longer form thoughts around pioneering Western expansion and stuff like that as being this myth that was created around this? I'm feeling these weird connections between these individualistic ideas or viewpoints and the way that some parts of the tech community still feel about meritocracy and pulling yourself up by your own bootstraps. Right? Am I close? Am I? AVDI: So Mando, you might be referring to a talk I gave—maybe you were there for that—that was exactly about the pioneer mythology in America and particularly, the pioneer mythology that was propagated by people like Laura Ingalls Wilder, which turns out to be almost entirely based on either fiction or misinterpretations of just how interdependent people really were in those days and how many of the failings, how many of the disasters came from people believing that they were independent. Actually, that talk was influenced by some things that Abeba wrote. There were some pieces in there about interdependence, I think, that were influenced by that. So I guess that explains why you're making that connection. MANDO: There it is, yeah. Abeba, the stuff that we've been talking about so far has been very – it's been about cognitive science and it's been about philosophy. I've seen that you have increasingly been commenting specifically on technology and artificial intelligence and stuff like that. Can you trace for us the connection between your philosophical and cognitive studies and your tech criticism? ABEBA: So basically, you are asking me to describe my thesis. MANDO: Just real quick! Just real quick! ABEBA: Yeah, just real quick. REIN: Just hoping [inaudible] is happy to defend it, hopefully.
ABEBA: Yeah, that's basically what I'm trying to do through my thesis and it's a long and challenging road. The short summary of it all is that again, when you summarize, you tend to caricature things, so what I'm going to give you is almost a strawman. But what I'm trying to do through my thesis is this: on the one hand, you have the embodied cognitive science perception or understanding of cognition, culture, and society, which is what we've been talking about, which is fundamentally interconnected, fundamentally codependent, dynamic, always changing, always becoming, never finished, not something you can pin down and finalize and predict, in a sense. So the inherent conclusion about cognition, personhood, and society, coming from an embodied cog sci, general systems thinking perspective, is that as complex adaptive systems, we are inherently non-determinable and ambiguous and always changing. You have that view on the one hand, which I'm trying to develop in the context of AI and machine learning in general. But on the other hand, you come and look at so much of the development, especially over the last few years, in AI and various machine learning models, where the fundamental assumption is that you can create a model, feed it these huge amounts of data, and you can track patterns and similarities and commonalities, and you somehow then understand or know human behavior or how society works. Then you use that data or that output to create predictive models, where you are constantly making predictions in really high-stakes, impactful areas, where you have machines determining who is a good hire, who should go to prison, who is a good babysitter, who is deserving of child welfare or general welfare. You have the grading algorithm that was introduced in the UK and in Ireland over the summer. So you have algorithms for all kinds of social phenomena that operate with the assumption that you can predict with some precision.
Between those two notions, I'm trying to argue, at least, that any prediction, whether long-term or short-term, of how people will behave, or how people will act, or who holds good characteristics, or how society moves, is fundamentally flawed. Not only scientifically, because theories and empirical findings from systems thinking say that these complex adaptive systems are inherently unpredictable, so it's difficult, it's impossible to predict scientifically. But also, on top of that, it's ethically a huge red flag to create systems and deploy them into the world to play these really crucial roles, determining these really crucial elements within society. So I'm trying to approach it both as scientifically problematic, fallacious, and as ethically flawed and harmful, especially to minoritized communities, to individuals and communities that are at the margins of society. Because when you create machine learning models using historical data, the huge amount of data that you use to create a machine learning system is usually treated as the ground truth. Whereas in reality, it really reflects historical injustices and historical inequalities, and instead of questioning those historical inequalities, what machine learning systems do is assume this is the ground truth, this is how society is, this is how things are, this is reality, and based on this, we can predict. Also, prediction itself is performative. By predicting something, it's almost a self-fulfilling prophecy. You are helping create a certain type of reality, and in that process, you are also depriving people of the potential to be the exception to the rule. Basically, to put it plainly, you are depriving people of the potential to be different from the stereotype.
So those are the kinds of questions I explore in my thesis and those are the kinds of connections I try to make between embodied cog sci theories and principles, systems thinking, and current machine learning systems, and the ethics and harmful outcomes due to machine prediction. JESSICA: So when we make machine learning models based on a bunch of data and then act on what they can say about what happened in the past, which is assumed to be what we also want to do in the future, then we're recreating the past, we're continuing the past, and we're depriving our whole culture of that continual becoming. REIN: Not only that, but we are perpetuating a particular perspective of the past. ABEBA: Yes. MANDO: Yeah. It's called reinforcement learning for a reason, right? That's what it's doing. [laughter] JESSICA: It's a lot more about reinforcement than learning. ABEBA: Yes. Except for some reason, reinforcement learning is presented as objective, apolitical, neutral. [laughter] ABEBA: Yeah, I agree. I agree. MANDO: I mean, there's subtlety to my raspberry thumbs-down, because it all depends on the data. If there was some sort of magical wand that one could wave that would create a dataset that was free of historical injustices and biases and things like that, then sure. JESSICA: I don't think so. ABEBA: Yeah, that's very problematic. For starters – JESSICA: I don't think bias is just historical – no, no, we have them right now! They're present. They're the future. There is no objective measure. Therefore, there is no dataset that can be correct in a system that's constantly changing. ABEBA: Yeah, exactly. I mean, it kind of assumes there is an observer-free reality that can be represented by data, but that is not the case. Reality is always observer-dependent. So any dataset will reflect the point of view of that observer, or how – MANDO: Yeah, it seems like it's assuming that there's some kind of – a lot of this stuff assumes there's some kind of platonic ideal of a person.
Like, if you're trying to measure a person, if you're trying to gauge their aptitude for a job, or for getting a home loan, or anything else, you're assuming there's some platonic ideal of them at the core of the universe, that they are somehow expressing through their various, through their – JESSICA: Their resume? MANDO: Their resume and other things, and if you can just find that platonic ideal of the person, then you'll know what they can do. But that's not it, is it? That ideal, that kernel, doesn't exist. We exist in relation to other people, and I see this with interviewing and gauging people for jobs a lot, where a lot of these systems seem geared toward trying to measure certain aspects of a person. You're not going to know how they're going to perform at that job because you haven't seen them in constellation with the other people who are going to be on their team. The way that they behave – JESSICA: And the way the job is going to change with them in it. MANDO: Exactly. That system is going to change. JESSICA: As if the person matters half as much as everything around them in that job anyway! MANDO: Right. REIN: Man, there's so much going on here. This is great. So we've got Simon's Ant, which is: the environment influences behavior. JESSICA: A Simon's Ant? REIN: Simon. Herbert Simon had a parable in the book that won him his Nobel Prize, which is of an ant. The idea of Simon's Ant is, if you watch an ant for a while in the jungle, or wherever ants are, and you look at what it does for a day: it walks over here, it walks over this log, it walks around this rock, it goes over there to talk to its confederates, it comes back, it looks for food, it comes back. If you placed a little tiny transponder on the ant and then you charted out in three dimensions the path the ant took through the jungle, and then you got rid of the jungle, that path would look very [inaudible].
[laughter] So Simon's question was: where does the complexity come from? And the answer – well, Simon got the answer sort of wrong. Simon's answer was that the complexity comes from the jungle, so you can just ignore it as being external to the behavior of the ant. John Hagelin said, "No, look, if you want to understand what the ant did, you can't throw away the jungle, because if the ant was in a different place, it would have behaved differently." And so, the metaphor here is that the environment you're in has a huge impact on behavior. JESSICA: Yeah. It is the relation, right? You can't throw away the ant – ABEBA: Yeah. REIN: So Simon said, "We treat the jungle as an externality," and Hagelin said, "No, we have to think about the ant as being embedded in its environment to understand its cognition." ABEBA: That is a brilliant point. That's connected to things that I've been thinking about, about abstraction and how, in a sense, when we are creating an archetype of a person from their patterns, as you were saying, we are trying to abstract out what we think is important. Which connects to what you are talking about, Rein: how we can't treat the environment as an externality. Putting away the jungle, which is an important factor, is putting away context, putting away the necessary contingencies, and treating the person as something that can be abstracted away. This is one of the problems. Abstraction has its own place. I know you've discussed this in a previous podcast with one of your guests, Professor Eugenia Cheng, I think, and one of the most beautiful descriptions that I have heard is from that episode: abstraction is really important, it helps you see the bigger picture. So I'm not dissing abstraction or advocating for throwing away all systems of abstraction.
But when it comes to understanding the person, or the ant's movement, what really matters is not the stuff we can abstract away, but the particularities: the idiosyncratic movements, the concrete interactions. That's really what makes us who we are individually, uniquely, rather than the commonalities that we have with other people that can be abstracted away and formalized and kind of datafied in a way that can be used to fuel machine learning models. REIN: Let me add in one other thing, which is that I think we've gotten the priority wrong in how we understand complex systems, or in the folk models of how people think about them. Because what people think is, when you have a system, what you do is you look for the components and you look for the interactions, and that's what you do to analyze the system. But in fact, being able to find components and interactions is what allows us to see a system in the first place. So seeing components and interactions comes first, not second. JESSICA: I liked Abeba's point: from Dr. Cheng's episode, the right abstractions can help us see the bigger picture; from the Simon's Ant example, other abstraction mechanisms can take away the bigger picture. ABEBA: Yeah, or in the case of the latter, maybe they are the wrong framework. JESSICA: Right. ABEBA: Because in a sense, when you are abstracting away my activities or my behavior, you are taking out, or separating, some kind of facts about me that can, in and of themselves, describe me independent of my interaction with others, independent of my dynamic communication with my environment. So you're putting away all the mundane or idiosyncratic movements or behaviors that cannot be put into patterns, or that cannot be seen independent of other factors. In an ideal world, in a reality where things are not interdependent and interconnected, in the mathematical world, as I think I described it, that is a really nice way of seeing things.
But in the reality that we exist in, where things are always becoming, where things are always contingent on other factors, where people always exist codependent and in co-interaction with others, then trying to put away that dependency, that co-being, is the same as psychologists studying memory by taking away the person, putting them in a lab, and trying to understand their mental capacity for memory. JESSICA: It's only memorization of the arbitrary. ABEBA: Yeah. It kind of gives you a neat – it sounds neat. It gives you an ideal way of understanding people and it gives you an illusion, into thinking that you can separate things and put people in a lab and really just focus on their mental capacities and memory. JESSICA: And we think that's what science is! We think science is about isolating all the other factors so that we can study just one, as if that weirdly separated concept of memorization has any relation to how we interact with it. ABEBA: Yes, exactly. Yes. So many people have said that really beautifully, and one of my favorite books on that is Order Out of Chaos by Ilya Prigogine and Isabelle Stengers, I think, where they remark on how science from its foundation is really built on separation, dichotomizing, cutting things. Even the etymology of science itself comes from a root word meaning to cut away, to separate things. So that is the canonical perception in thinking about science, which again really shows you how pervasive the Cartesian thinking of separating and individualizing and cutting things away is. But in reality, you can never separate, you can never cut between things. MANDO: And I feel like it leads to reductionist thinking when you regard cutting away as the ultimate form of knowing. JESSICA: Leads to? [laughter] It doesn't have far to go.
MANDO: I was thinking, and I think in an information economy, it feels like we tend to identify having "all the data" as knowing everything we need to know about something, and we constantly make that mistake; we think we have all the data on things because we have so much information. What I was thinking about is, if I were the ant and aliens were watching me and they were collecting data on all of my movements: well, sometimes I go into the kitchen to get a bagel and sometimes I go into the kitchen to have a conversation. But from the point of view of that dataset, it's just going into the kitchen. JESSICA: Or on a resume. On a resume, it's, oh, you might notice that someone has 10 years of Ruby and Heroku, but no AWS. Does that say that they don't like the cloud, or—well, okay, Heroku is also the cloud, I'm making this up—that they don't like AWS, that they don't like Java? Or does it say that in that environment, Ruby skills were needed and developed, and in that environment, in that particular thing that we're working on, Heroku was appropriate or just historically present? We make it about the person, and it's almost never that. The ant did not climb 3 or 30 feet today because it is a climbing ant of climbyness. There was food up a tree today! REIN: Yeah. There's a really great example of this phenomenon in cognitive science, which is the study, specifically, of confirmation bias. JESSICA: The thing about confirmation bias is once you learn about it, you see it everywhere. [laughter] REIN: The other thing about confirmation bias is that it was originally studied out of context. They gave these people, in a laboratory setting, a task to do, and they weren't familiar with the task. They didn't have any context to make sense of the task. In that scenario, they saw confirmation bias. But then another study gave them effectively the same task, but using a context they were familiar with.
The context was around checking ID to figure out whether someone's allowed to drink in a bar; it was a context that people were familiar with. When they gave the same task, the same logical problem, in a context that they were familiar with, the confirmation bias effect disappeared. JESSICA: Oh, because the thing to do with confirmation bias is not to eliminate the human. It's to recognize it and compensate for it, and in a real situation, we can often do that appropriately, consciously. REIN: So Gary Klein made a really good point, which is that you don't get grant money to study how biases work well. [laughter] ABEBA: That is very meta. MANDO: Abeba, one of the things that I think I see in some of what you talk about online is, it seems like you're saying sometimes that a side effect of all of this study into machine learning models is that if we think we can model everything we need to know in a machine learning model, then we flip that around as a mirror and say, "Then that is what humans are made of." Like, this machine learning model is also what a human is. Is that accurate? Like, is that something you have been saying? ABEBA: Yeah. I mean, it's obvious that many machine learning researchers don't really engage in the really high-level, critical, or almost metaphysical question of: is that what humans are, or is that what society is, or is that what reality is? You don't find questions like these in machine learning research. But when you look at what machine learning models are trying to do, you see the working assumption of what people are, what cognition is, what society is, and the assumption you can extrapolate is about what has existed, what has happened historically.
If you can gather enough data on that, the data captures reality, and then from that, as your baseline, you get the "ground truth"; as I keep saying, that's the term machine learning people use, often without any critical scrutiny, because it carries its own really huge baggage of assumptions, starting from the assumption that there exists some objective, observer-free world out there. So coming back to my point: if you can gather enough data to tell you how things are, how things have existed, and how people behave, then they go on to assume that you can create machine learning models to replicate, and to create a future that resembles, the past, or what exists, or what the data shows you exists. REIN: I'm just thinking about how many levels of wrong are involved in getting these machine learning results. So, take means testing. First of all, means testing is bullshit. It's not a legitimate approach: figuring out who gets welfare based on some criteria about whether they're the right sort of person versus whether they need it. So first of all, means testing is bullshit. Second of all, using machine learning to do means testing is bullshit. Third of all, using machine learning to get some single number that tells you how worthy a person is to receive aid is bullshit. Fourth of all, the results that you get from machine learning are garbage, but they're given this sort of imprimatur of authenticity, because people don't understand that machine learning is garbage. [laughter] Fifth of all, it's okay to do the wrong thing wrong, because then you generally don't succeed in doing the wrong thing. It's much worse to do the wrong thing right, because then you get wronger. But in this case, you're doing the wrong thing wrong in a way that makes you wronger, because people are interpreting these garbage results as being meaningful and then they're acting on them.
ABEBA: Seventh of all, I'm just going to carry on from where we started: seventh of all, people have a huge amount of faith, really blind faith, in machine learning models. People assume that because there is mathematics involved, they are apolitical and free from negative outcomes, and because it's not people but rather machines that are doing the sorting, the categorizing, or the prediction, then it must be good. REIN: Yeah, how can adding be wrong? AVDI: Oh, let me tell you. I had a couple of my kids do math this week. Adding can be super wrong. ABEBA: Yeah. So people can really lose their critical abilities when it comes to examining machine learning models. So that's the seventh. And the eighth of all, even a meta point, is the fact that many machine learning models start from a really problematic assumption. In the case of, say, for example, judging who deserves welfare, they come into this with the idea of playing gotcha: how are we going to catch people that are taking from welfare systems undeservedly? Not from a really positive or concerned point of view, that people who fall under the welfare system are often underserved, from economically deprived backgrounds. So the assumption is not how do we improve that, how do we change society, or how do we help those already vulnerable people? It's really built on how do we catch people that are taking welfare that shouldn't be, so. REIN: This is the welfare queen, the racist welfare queen trope, laundered through machine learning. ABEBA: Yeah. MANDO: Yeah. There's something that I've been bouncing around in my head a lot, especially with the recent election cycle here in the US. It seems as though there's this, I don't know, stratification of empathy in society, where you have these different levels of empathy for different groups of people based on how you relate to them, or how you know them, or whatever.
When you don't approach these kinds of problems with this generalized empathy towards the people whose behavior you're trying to model or affect or whatever, like you were saying — without that empathy, you don't focus on the potential positive outcomes that you're actually trying to get. You're focused on the negative outcomes that you perceive as reality and trying to reduce those. Rein, like you were saying, it's like the welfare queen trope: the government is willing to spend who knows how many millions upon millions of dollars to build these machine learning projects to reduce potential abuse of the system, rather than spending that money trying to make the overall outcome better. I don't know, to me, it seems like – REIN: I think that's exactly right. Yes. Let me quote Russ Ackoff here: "You can't get something you want by getting rid of something you don't want." MANDO: Yeah. [laughter] REIN: So the example he uses is, what's the likelihood that you turn on the TV and you get the show you want? He said he calculated it and it's a tenth of a percent. So you switch to another channel — what's the likelihood that you get the show you want? It's not better now. JESSICA: So if you just eliminate all the channels. Well, then at least you'll know. REIN: You actually have a 50% chance of getting a show that's worse than the one you're currently on. ABEBA: Yeah. Also, Mando, don't forget the underlying capitalist drive behind the creation and deployment of these machine learning models. If you look at the objectives, they are not really in the business of protecting vulnerable people's welfare or needs, or listening to marginalized people's interests. Rather, most of the time, the objective is to maximize profit, to create efficient systems—efficient in the sense that they reduce time and effort for people who are already in positions of power.
So they are rarely developed with the interests and the needs of marginalized communities in mind; capitalist motives drive the development and deployment of these systems. AVDI: Yeah, and it seems to me, anytime you're trying to score someone, it becomes — even if you claim it's not — it's going to become a game of worth, or it's going to become an indicator of worth. Ultimately, you are rating people's worth as a number. JESSICA: And then the system treats them like that, and then people treat each other like that, and it becomes accurate as a self-fulfilling prophecy. ABEBA: Accurate in the sense that it portrays socially held stereotypes, because – AVDI: The more you're marginalized, the less you're able to do in the world. ABEBA: Yeah, and I have various ongoing projects, and a couple of them involve reading huge amounts of work in HCI, machine learning, and AI, and in almost every paper, you will have the word accuracy mentioned multiple times. After reading so many papers, I have come to realize that when people say their models predict certain phenomena or certain behaviors or certain actions with accuracy, the accuracy usually refers to socially held stereotypes. So if their models can match that perception of an individual, then that is often seen as accuracy. But that's my understanding based on my reading. I'm sure a lot of – AVDI: So accuracy means it confirms my bias? ABEBA: That's my claim, and I'm sure a lot of people will not be happy with my interpretation, but that's, unfortunately, what I have come to realize. MANDO: That's what the listeners are here for: cognitive bias hot takes. REIN: There's a problem here, which is: if you had a better way to get the answer, you would just do that instead. So if you had some way to determine the accuracy of your answer that was more reliable than the thing you're doing, you would just go do that thing.
JESSICA: But the thing we're comparing it against is human judgment, because that's all we've had in the past to decide someone's worth, for instance. But now we attempt to reproduce the human judgment with math and we get a more precise number with more decimal points! Progress! ABEBA: [laughs] Yeah. I mean, accuracy, just like the notion of unbiased data or the notion of ground truth, again, [inaudible] towards this observer-free, really Cartesian metaphysical assumption that you can remove all these contaminating social, historical, contextual factors and get at that freestanding reality. So even the very question of finding the true accuracy or the true ground truth or the unbiased dataset is just a futile endeavor, because there is no such thing, I think. MANDO: Abeba, have you focused on any other areas of technology that are causing this same reinforcement of dynamics outside of machine learning? Like other things in the technological sphere, or has your work been focused primarily on machine learning and AI and the frameworks around those and how they're impacting people? ABEBA: Can you give me an example, maybe? MANDO: I was kind of hoping that you would have some. I was wondering what kinds of things in technology reinforce these dynamics outside of AI and machine learning models and stuff like that. I know we talked a little bit about it earlier, about the interview process and how our own personal mental models play into that. I was wondering if anyone had any other ideas or examples, or if you hadn't worked through any. JESSICA: There's always the thing about names and the Western assumption that people have Western-style names, and the way forms enforce the gender binary. MANDO: Yeah, that's a good point. Those are both good points, good examples. Thank you. ABEBA: Yeah. At the moment, I'm in the final year of my PhD. [laughter] Saying it out loud is really scary — that means you really have to write the thesis, and I'm nowhere near that.
The good and the bad thing is I have been working on various interrelated projects. So I have been producing academic outputs, published work, which is the currency of academia. On that front, it's positive, but the negative thing is that a PhD thesis is supposed to have a single story, a single direction. You're supposed to narrate something. So having myself distributed between all these various projects is making it really difficult to narrow down which direction I'm going to take, which story I'm going to narrate, even though I have this idea of connecting embodied cog sci thinking, what machine learning systems do, and the amazing work Black female scholars have been producing in exposing how marginalized folks are always the most disadvantaged, how historical discriminations work, and how Black women's experiences have been sidelined. But when it comes to having knowledge of, say, for example, oppression, the experience is really crucial. Patricia Hill Collins, one of my favorite Black female scholars, outlines this beautifully. She even thinks of knowledge in two terms. She has book learning on the one hand, learning that comes from academic engagement, and you have wisdom, which you acquire through your lived experience. So for her, lived experience is really crucial. When it comes to judging oppression or discrimination or bias or anything like that, people who experience those things have the epistemic privilege to really know and understand those things. So I've also been trying to approach what ethics is from that perspective. So at the moment, I have three strands: the embodied cog sci, the machine learning element, and the kind of practical, concrete ethics that comes from Black female scholars. So coming back to your question, instead of studying other things or looking at other things, I'm really trying to narrow down at this stage, the final year of the PhD. Yeah.
MANDO: Stop jinxing it — I've got to find some wood to knock on for you. JESSICA: Wow, it's almost like your varied interrelations with your existing complex environments are in conflict with the academic desire to narrow things into a single component part. MANDO: Almost. [laughter] JESSICA: Coming back around to that. Abeba, I didn't ask you the question that we ask all of our guests. Now, usually that question is: what is your superpower and how did you acquire it? But in light of this discussion, I want to ask you: what is one of your important idiosyncrasies and what contaminating factors from the rest of the world helped you develop it? ABEBA: [laughs] I'm glad you asked me the second question instead of the first, because I'm Ethiopian — I'm from Ethiopia and I live in Ireland. In Ethiopian culture, people don't really talk about their superpowers or things that they are good at; bragging is really looked down on. Other people talk about your superpowers or what you're good at. Ireland and Ethiopia are really far apart in terms of cultural similarity, but one thing both cultures — my home country and the country I live in — have in common is this thinking of talking about yourself as something undesirable. So I would struggle to find or express my superpower. As for your second question, which is what is my idiosyncrasy and what are my contaminating factors — again, like the superpower, maybe my idiosyncrasies are something that are recognizable to other people, to those around me, and not to me. So yeah, I don't know. MANDO: In light of Ethiopian culture, maybe we should say what we think your superpowers are. JESSICA: Yeah, yeah. Or things that are different about you that are important to the world. MANDO: Yeah. JESSICA: One of them is clearly seeing and appreciating the exceptions, the potential to be different, in every person in every situation. REIN: Yeah. MANDO: Yeah.
Along those same lines, the ability to tease apart these existing cultural ideas around identity and humanity, and to be willing to say of hundreds of years of thinking, nah, that's just all wrong. Y'all are all wrong. JESSICA: Yeah. REIN: Before I answer, I just want to point out that, Mando, you were talking about teasing apart, and I think it's fascinating how embedded Cartesian analytical thinking is in everyone's worldview. I don't think you're wrong. I just thought that was interesting. I would say that in addition to that sort of analytical thing that you're doing, I also think you're doing a really important synthesis, which is, you're taking concepts from different but related fields and seeing their connectedness and interrelations to then bring them together into a whole that I think is more than the sum of its parts. JESSICA: Yeah, seeing the consequences that do not belong to any one cause — oppression, marginalization, machine learning, Western individualism — but seeing the things that are consequences of all these things wrapped in with each other. AVDI: Yeah. Abeba, I feel like you just showed that a person has idiosyncrasies through other persons. [laughter] ABEBA: I like that. REIN: Well, if it were just one person, then nothing about that person would be unusual. [laughter] ABEBA: Exactly. MANDO: It'd be synchronic then. ABEBA: See, everything is relational. JESSICA: Yeah. MANDO: Everything is relational. JESSICA: Yeah, it is. REIN: I feel like this is a good moment to move into reflections. Although, we kind of did that. JESSICA: Yeah, we kind of just did. Does anyone have any additional reflections? MANDO: I'm going back to what Rein just said. The thing that I'm carrying away from this the most is how that Cartesian thinking and worldview is so embedded in my own, almost lizard brain that it frames how I look at so much stuff without me even thinking about it. Also, that I have a lot of reading to do.
AVDI: I think the thing that really stuck with me is the phrase "contaminating factors" — interpreting context as contaminants and interpreting relations as contaminants. I think that is a thing that we do, and that's a useful way of really bringing into focus what we do. As a reflection, I also want to talk just a tiny bit about something that isn't a reflection from this particular episode, but it's just getting back to why I was so excited to talk to you in the first place, and say that the perspective of relationality, and of our identity emerging through relationality, not from a single source — for me, personally, I feel like it really helped me get over the Western notion of dive into yourself to find yourself, go inside yourself to find yourself, and really finally gave me peace to discover myself in relation to other people and not feel like I was shortchanging my own identity by doing that. That has been just transformative, so. REIN: Okay, that actually changes mine slightly. It is true that the sort of dominant paradigm in Western thought has been this Cartesian mode, but actually, Eastern thought has gotten this right for centuries. So for example, in Jainism there are three central principles. The Jains are the people who don't kill bugs because they're non-violent. The first principle is the theory of – I'm going to reduce these by summarizing them, but I'll try to get them across. The first is the elephant and the blindfolded men parable: we each see a particular perspective on the thing, on the object, and no particular standpoint is privileged. The second is the theory of conditioned predication, or contingency — everything is maybe, or in some ways, or from a perspective. JESSICA: It depends. REIN: Right, there's no objective perception of the universe.
And then the third is the theory of partial standpoints, and so that is that any particular object has infinite facets, dimensions that we could perceive, that we could talk about, but at any time we're only perceiving some of them. So Jainism has gotten this right. Well, I think it's right — that's a value judgment. But Jainism has had this perspective for centuries. JESSICA: I want to give the Cartesian program credit for what it's good for: using science as a way to break things down into parts and studying those very deeply and individually, in isolation, with the scientific method. We have learned a lot! It's been incredibly useful to the progression of human knowledge, and that's one reason it's so embedded and stuck. It's not that breaking things apart for science is wrong. It's not that controlled experiments are bad or useless or wrong. They're just inappropriate in most real situations now! I think of them as corner cases; sometimes x equals zero and the problem is a lot easier. If you can have a linear effect, great! Do that easy math, but don't think it applies to anything human! ABEBA: Yes. I don't hold back when it comes to criticizing the Cartesian worldview, so I also want to join you, Jessica, and say it's not all bad. I mean, Cartesian coordinates, for example — brilliant! We have also learned so much about the world by breaking things down into fine-grained elements and studying them. It's all great, but again, I go back to that book, Order out of Chaos. They brilliantly point out how it's great to break things down into their very tiny elements and study them. The problem is we forget to put the pieces back together; we forget to zoom out and try to see the bigger picture. I mean, breaking things down and all this Cartesian thinking gives us a kind of certainty, which we all love, and it gives us a sense of control, and it's great in a sense.
But also, it prevents us from acknowledging this continually moving reality where we cannot be certain about everything, and we also get into the habit of giving answers and seeing uncertainty as a weakness, as a problem, rather than as a part of reality that we should embrace and that we should live with. REIN: I mean, even reductionism isn't bad per se. It is good that I can sit in a chair without having to perceive it as a collection of quantum fields or whatever the fundamental stuff of the universe is. [laughter] Reductionism is really good and lets us understand the universe at all. JESSICA: But it's insufficient. REIN: Yeah. JESSICA: Yeah, yeah. Embrace uncertainty as part of reality and work with it. We can work with that! ABEBA: Yes. JESSICA: Thank you. Abeba, I think this is my new favorite episode. ABEBA: Yay! I have enjoyed this so much. I could talk to you all day. JESSICA: Okay. [laughter] REIN: I'd like to do another one, please. JESSICA: Okay. ABEBA: Ah. Yeah, this has been enjoyable. MANDO: Thank you very much, it's been fantastic. JESSICA: If you like Greater Than Code, you should join us on Slack, for instance. Actually, just support us on Patreon and then you'll get an invite to Slack, and then if you want to [inaudible], you can also do that. But first, support us on Patreon at patreon.com/greaterthancode.