Augusta Dell'Omo: Welcome to Right Rising, a podcast from the Center for the Analysis of the Radical Right. I'm your host Augusta Dell'Omo. Today I'm joined by Ashton Kingdon, a doctoral candidate in the Department of Economic, Social and Political Sciences at the University of Southampton. She's here with us today to talk about something I'm very excited to hear more about, the relationship between right-wing radicalization and artificial intelligence. Ashton, thanks for being here. Ashton Kingdon: Thanks so much for having me. So exciting. AD: So I wanted to start off first with what your PhD is actually, technically, in, which is Web Science. So can you tell our listeners a little bit more about what that actually is? And what that means for your approach as a researcher? AK: Yeah, so I get asked this question a lot, because a lot of people haven't actually heard of what Web Science is. So basically, our department is based out of the University of Southampton, and Southampton University is known for being big on tech, big on AI, Tim Berners-Lee has an office there. So they're really big on the web and technology. So essentially, when Tim Berners-Lee created the Web, they had like a whole host of people, thousands and thousands of people, who were experts in the technology. So they understood how the Web worked, how the technology behind it worked, and all the research was kind of into that. So eventually, as the Web progressed as a technology, they became more concerned about the fact that fewer people understood the people and the social aspect. So essentially, Web Science was set up to train researchers so that they're hybrid, so that they understand the Web as a socio-technical system. So essentially, what that means is, as a Web Science researcher, you have to be interdisciplinary. So my research in particular is criminology, history, and computer science. So I have a supervisor from each, and you have to dedicate a certain amount of your research to these specific disciplines. So my research looks at right-wing radicalization online. So I adopt three identities. One is a right-wing populist, one is a Neo-Nazi, and one is a member of the Klan, and I look at different online platforms. And more specifically, I only look at imagery, so like memes, videos, images that are disseminated by different right-wing actors. And it's all about the sub-cultural and historical elements within the imagery, how that progresses with different narratives of extremism, and then the way that technology is used as an accelerator within these platforms. So it's really about merging the tech with the social aspects. And this is like key to the Web Science approach. So you know, you view the Web as a socio-technical system and you examine, you know, as a society, we are the people that build the Web. The technology on its own is redundant, you know. We are the users; we put the content on. And it's really about exploring how technology and society together can impact and influence certain things, and in my case, it's radicalization and extremism. AD: Ashton, I think that's absolutely fantastic. And this approach, pulling at both history and technology and the social context of what you're working on, I think is so important. One of the problems is that, as someone who's more historically focused, I don't run into the same issues as other researchers.
But sometimes there's a pretty big knowledge gap between people who work on the far right online in their actual understanding of how the online world works, you know, what are the mechanisms for how this technology is made? So I think that that's an absolutely critical approach, and I'm really excited to have this expertise and for you to talk to us more about it. So diving a little bit into some of the things that you're interested in. You talk a lot about AI. And AI is definitely a sort of hot topic right now, especially in our field. So what do you see as the connection between AI and the far right, and why is that important? AK: Okay, so my research in particular - although it doesn't focus on it for my PhD, it's things I've looked at and worked on in the past. So I looked at the issues surrounding like echo chambers, algorithms in elections, things like that a few years ago, just after the Cambridge Analytica scandal, I was really interested in that. So basically, what my research did was look at this argument of echo chambers and the way that algorithms can create them, specifically on the like tech giants' Silicon Valley platforms. And so essentially, I don't subscribe to the technologically deterministic view that technology radicalizes people, I don't think that algorithms can radicalize people. I think it can act as an accelerator. But my research was all about proving that actually, it's the social aspects as well, people are looking for something, they're seeking something. So what I did was I created like echo chambers from scratch on certain platforms, the main one was like YouTube. And I was looking at the way that different content was coming up, and the sorts of rabbit holes you could be led down. But then the counter-argument - that was sort of where a lot of the specific extremism I was looking at at that point, coinciding with Cambridge Analytica, was more like the right-wing populist rhetoric, the alt-right. And a lot of this was happening on platforms like 4chan. So the echo chambers are created by the social aspect of the users rather than AI or a technology. So it's all about explaining how underpinning it is this concept of homophily, right, so birds of a feather flock together. So on your mainstream platforms, like you might have a Facebook or a Twitter, and the majority of people on your Facebook will be like people you work with, your family, your friends, who are all likely to have the same views. There'll always be the anomalies, you know, the person you met while you were traveling, the person you met at a conference, but the majority of the people will have similar views to you. And in a similar way, Twitter, whilst it might not be personal, it's still likely to be professional, career-orientated people that share your similar interests. So it's all about breaking down these barriers into the social side that was really feeding this idea of the algorithm. And the other thing that I was looking at was this idea of adversarial AI and then deep fakes. So I don't know if you've seen a lot of them, AD: Unfortunately, I have seen quite a few deep fakes. AK: So it was like the Barack Obama, Angela Merkel ones, and, you know, investigating software like Lyrebird, where you can put your voice over different people, whether that be a dictator, whether it be a president, you know, a candidate.
And it was all about talking about the dangers of this, and how it's more about explaining to people the impact of AI and how the technologies work, rather than saying that these contribute to radicalization or extremism. It was more about explaining the technology, rather than saying that it contributed, if you know what I mean. AD: I know completely. And I really liked the phrase that you used where you said the algorithms act as an accelerant. And I think that that is something that, even when you're looking at these groups historically, before they really just abounded on the internet, the idea of accelerants is really critical when you think about how do people become more radicalized in the far right. They are searching for this kind of content, and the algorithms and these echo chambers that you talked about are just one mechanism to push them quicker into that space, but likely they would have traveled down that pathway, you know, of their own volition. So, transitioning a little bit, you've also written about this idea of what you call ethical AI. So what is that in comparison with what we just discussed? And why is it important? AK: Okay, so ethical AI is, in my opinion, one of the most important things that we're facing as a society. So you probably know AI is taking over the world, people are obsessed with it. Like you said, it's a hot topic right now. So as we build this technology, it's important to understand the ethical implications on society that will come from it. So you can't sort of slot it into the ethical standards that are already in existence. So in our sort of case, one of the most concerning aspects is things like autonomous weapon systems, right? So AI is like the third wave of revolutionary warfare: you had gunpowder, you had the nuclear bomb, now you've got AI. Anyone that's interested in this, I recommend you watch the eight-minute video, Slaughterbots, it's really good for just explaining how it can go. AD: So you know, you would recommend that and not the Will Smith classic I, Robot? AK: Westworld, you know, if you wanted to see how things can go wrong. So yeah, it's just like, exactly with like autonomous vehicles. It's about understanding that if you're having technology take sole responsibility for decisions that affect life or death, who is responsible for that decision? So if you're in a driverless car - and you might know this, you know, this sort of Harvard-MIT world - if you're in a driverless car and you know you're about to hit a child, does the car swerve into a wall and kill the driver? Does it kill the child? If it was an adult, who would the decision be made to kill? And this is called like trolleyology - MIT did like an ethical experiment of, like, whose, you know, life is at stake, more or less. So there are all these really complex decisions when you're having technology in charge of these big, you know, life or death events. It's not like that necessarily with what we're talking about with extremism and radicalization. It's not to that extent, but it's still a big, big issue. So you have things like, I mean, Cambridge Analytica really transformed the way that ordinary people see machine learning, deep learning and how it can impact, you know, ordinary people's lives. And I've sort of been warning people for years about the content that they put up, and you know, how things might not necessarily be what they seem.
But I think this, AD: Ashton, can you give a quick overview for maybe some of our listeners who are familiar with Cambridge Analytica, but it was such a complex and really confusing issue for many people. Could you give a brief summary of it? Just the key issues? AK: Yeah, of course. Yes, so Cambridge Analytica really came to light with the 2016 election of Donald Trump. So what they - and then it was also linked to the Brexit Leave.EU campaign; although they claim that they never did fully paid work for them, there was still a link with them. So essentially, like, the point of the matter, for what we're talking about, like extremism and radicalization, is that companies use data mining to get your data from big Silicon Valley social media platforms to be able to target you with content based on your information as provided by them. So what Cambridge Analytica did was they had like general personality quizzes that they like pumped out on Facebook, and you know, you would answer them. So it would say, like, do you think that you're empathetic? Do you think that you're an agreeable person, all of this. And then, like I was saying to you about this idea of homophily, so birds of a feather flock together. One person might have answered that quiz, right, just one person. But then Cambridge Analytica were able to harvest the data of every single friend that they had. So like, I have over 1,000 friends on Facebook. So if I'd answered that quiz, it would have harvested all of those people's data, and that's how you can get so much data from like a small quiz. So what the Trump campaign did with that, when they teamed up with Cambridge Analytica, was they used this information to target what we call the "Persuadables," so they would already have people that they knew were going to vote blue, they already had the people they knew were going to vote red, and they used the information to target the "Persuadables" in the swing states specifically. So this was how it was all being talked about as manipulation. And then they were flooding through like thousands and thousands of memes, videos, to try and persuade people to vote for Trump. And when it all came out, it was just how devious and deceptive the artificial intelligence had been at convincing people to behave and act a certain way. So it's information warfare. And then on the back of that you had the investigation of Russia's interference with the election through Facebook, and like the Mueller report and things. So it's like a big, big issue. And actually, on the back of this, with what happened with Leave.EU, actually, today - I'm sorry, it was yesterday - the Information Commissioner's Office said that they're not publishing the report into the actions of Cambridge Analytica. So it's all about power and, you know, who gets access to the data, what they're doing with the data and for what purposes, and this is how we can link that specific type of AI to what we're talking about today, like the radicalization and extremism. AD: So going into that a little bit further - and thank you, Ashton, for walking us through Cambridge Analytica. AK: I hope it was helpful. AD: Yeah, no, it definitely was. It's extremely complicated.
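As a rough illustration of the friend-network multiplier Ashton describes, here is a minimal back-of-the-envelope sketch. The respondent count and overlap fraction are hypothetical placeholders rather than figures from the actual case; only the 1,000-friend figure comes from the example above.

```python
# Hypothetical back-of-the-envelope sketch of friend-graph harvesting reach.
# The numbers below are illustrative placeholders, not real Cambridge Analytica figures.

quiz_respondents = 100_000        # people who actually answered a personality quiz (assumed)
avg_friends_per_person = 1_000    # average friend count, as in the example above
unique_fraction = 0.5             # assumed share of friend profiles not already counted

opted_in = quiz_respondents
swept_up = int(quiz_respondents * avg_friends_per_person * unique_fraction)

print(f"Profiles that answered the quiz: {opted_in:,}")
print(f"Friend profiles harvested too:   {swept_up:,}")
```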
And like you said, the devious, insidious quality is sometimes hard to conceptualize. Especially, I think about this personally, you have a sort of relationship, you know, with the internet where you realize that some of your data is being harvested pretty much at all times. But then when something like Cambridge Analytica happens, as you discussed, it's not just your data, but how they use your data to get other people's, and, you know, the kind of content they're creating. I think all of that is really hard to wrap your brain around, because in many ways the internet feels like an organic place. But like you said, it's all about power and who's pushing the content, who's driving the content. So when we're thinking about AI and thinking about radicalization, we've talked through a couple of the key problems, you know, echo chambers, accelerating how people become integrated into these spaces. Are there more problems that arise when thinking about AI and the far right's radicalization? AK: Yeah, I think, um, one of the key issues is - this has become like particularly prominent because of COVID, right? Because people are suddenly aware of the amount of disinformation that they're being exposed to online. And this is becoming like more of a public need, right? The public want to know what they're seeing, what they're being exposed to. So I think one of the problems when we're talking about ethical AI - so one of the things that came out of Cambridge Analytica was really this need for social media companies to be transparent about the technologies that they're using, right, the algorithms, and then being accountable for them. So some of the buzzwords people might hear at the moment are about algorithmic transparency, algorithmic accountability, ethical and explainable AI. So we're talking about far-right radicalization and the way that it could potentially happen through social media platforms; one of the best examples to use is like YouTube, right? You watch a video about Jordan Peterson, you slowly start to be exposed to more, you start to be exposed to more people, you start to be exposed to the people that they're communicating with. And it becomes this like chain of events, right? So there's this need now for social media companies - explain your technology to the masses. How are you using this technology? How does this influence what we see? Well, one of the big problems with this is they don't actually know. Right? So particularly on YouTube, they use deep learning. So machine learning is a subset of AI where your dataset is fed into an algorithm, right? And then the algorithm will make a decision based off of only the data it's been fed. And this is how computers make decisions, right? So you might see it in sentencing, you might see it, you know, in the criminal justice sector. And the problem is with deep learning, which is what YouTube's recommender algorithm uses. So deep learning is a subset of machine learning that's like trained to model itself like the human brain. It's called a neural network. And the problem with this is it makes decisions in what's called the black box of AI, which means it makes decisions and we do not know how it's made those decisions, right. So that's one of the consequences of that. We don't actually know how it's recommending the content, it's learning on its own. Right. So this means that a social media company can't actually explain to you how that particular technology is working.
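As a minimal sketch of the kind of objective being described here - not YouTube's actual recommender, whose internals are not public - the toy ranker below orders videos purely by an engagement score derived from past watch time, so nothing in the model knows or cares what the recommended content actually depicts.

```python
# Toy engagement-only recommender (illustrative sketch, not any platform's real system).
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    past_watch_minutes: float  # the only signal the "model" is trained on
    topic: str                 # never used anywhere in the ranking

def predicted_engagement(video, user_affinity):
    # Stand-in for a learned model: the score is driven entirely by historical engagement.
    return user_affinity * video.past_watch_minutes

def recommend(videos, user_affinity, k=3):
    # Rank purely by the engagement score; the topic field plays no role at all.
    return sorted(videos, key=lambda v: predicted_engagement(v, user_affinity), reverse=True)[:k]

catalog = [
    Video("a1", past_watch_minutes=12.0, topic="kittens"),
    Video("b2", past_watch_minutes=47.0, topic="conspiracy"),
    Video("c3", past_watch_minutes=30.0, topic="cooking"),
]

for v in recommend(catalog, user_affinity=0.9):
    print(v.video_id, v.topic)  # the ordering reflects engagement only
```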
And the other side of this is that there's so much secrecy shrouding social media companies' use of AI, that even the people that work there themselves don't know how the technology works. And this is like the big problem. You don't need to focus on the algorithm, the algorithm isn't alive, it's not evil. It's the data that goes into the algorithm that might be biased. Who's made that data? Who's created it? Who's making the algorithm? What's it spitting out at the end of it? And the thing is that the social media companies will not want you to know that at all. No way would they give that up. Like there was the famous case of, Google created their ethical AI board. And it got disbanded within a week because they wouldn't show them anything. Like if you have someone external coming in to see if you're being ethical and you're not showing them any of your data or algorithms, then you're like, okay, I cannot help you. And the reason that they don't want to do this is because they obviously make so much money off of these algorithms, right? It's the stickability of it, the business models that are fueling the platforms, you know, how much time our eyes are on screens, you know, and the algorithm doesn't know if it's spitting out more pictures of kittens or Kalashnikovs, right? It doesn't know, it's just doing what it's trained to do. And if they're not going to tell people honestly how their algorithms work, the data that they're being fed on, they can't be transparent. They're not being accountable. Like in my field, we talk about looking at artificial intelligence as a socio-technical system rather than an algorithm as a piece of technology; you need to expose the socio-technical system. Saying or giving up one element of it - this is our algorithm, this is one piece of data - is not being transparent. And, you know, you can't expect people to make sensible decisions about the content that they might be being exposed to if they have no idea about any of the backstory of who's making it. AD: Yeah, completely. And I think that last point really hit home to me as someone who spends probably too much time as an American watching C-SPAN and watching Congressional hearings, watching the hearings with Facebook, where you have Mark Zuckerberg trying to explain what Facebook is doing in a very limited - he's obviously not being super forthcoming - way, but you have not technologically literate senators and Congresspeople who are asking questions that aren't even getting at the core issues that you're talking about. And there's been so much focus recently on the social media platforms. So could you walk us through maybe some of the changes that have been proposed, right, especially because of COVID? There's a lot of concern about deep fakes, misinformation. What are some of the potential solutions that people have put forward, and maybe walk us through the costs and benefits of these different approaches? AK: So one of the core things that the big platforms will do - so I'll just say, as like a side note, actually, my research doesn't look at any big tech platforms, it only looks at Web 1.0, like forums, right? Because my argument is that actually, the more sinister people and the more dangerous people are operating on platforms that aren't using this technology. So the technology is redundant and there needs to be like a more overarching, universal method of combating extremism, right. But we had like the attack in Christchurch - massive terrorist attack.
Now, we all work on the far right through CARR, right. So a lot of people will know that, you know, people have been warning the government, the police, people, about the threat and rise of right-wing extremism for years before Christchurch, and suddenly a big attack happens, it's discovered that they're communicating on all different platforms, live streaming, and they do a cull, like they did with ISIS, when, you know, they did that big Twitter cull. So you can use AI to do that. Excellent. But actually, another side of this is the fact that you still need a human to decide what is extremism, right? Because AI is actually not very good at this. I don't know if you've seen the stuff in the news, I think it was last year, about the like farms of people that they had in the Philippines to check content. And it's like, it brings in all these issues about modern slavery and things. But actually, what you found with COVID was when everyone wasn't allowed to go into work anymore, like at Facebook, there was like a massive issue about people getting suspended and, you know, accounts getting suspended for no reason and content being removed that didn't need to be, and things staying up that shouldn't have. And this is what it shows, is that the AI is actually not very good at this, you still need the human beings there, right, to understand, to decide whether or not something's extreme in nature. So I think to a certain extent, AI is useful in removing content. And actually, Megan Squire has a really good thread on her Twitter about the like de-platforming from the main platforms, because you remove a huge chunk of that initial propaganda, which is obviously super beneficial. And so that's a pro of like using AI to be able to try and automatically detect content. And then the cons of that are, where do these people go? They might go onto platforms that, like my research shows, aren't being monitored as much, things like this. So I think you need to take into account that AI is good for some things, but also we need to think about who is making the instructions and the algorithm to tell this platform to remove certain content? And are they looking for specific people and things that they consider extreme? So you might have like a whole host of jihadist content coming off, because the algorithm's trained more effectively to target that. Whereas if you're looking at like white supremacist images, it might be more difficult to train the data and the algorithm to look for those, because people are still really focused on, like, jihadist extremism, if that makes sense. AD: Yeah, no, completely. I mean, that's one of the big arguments about the kinds of white nationalist terror attacks that have happened in the United States, that police and law enforcement are still disproportionately trained and focused to not look at these groups as threats. And I think the same principle applies to who we are - like, what are we training these algorithms to look for as extremism, as you said. AK: Yeah. And I think that's one of the big issues with the social media companies. And also the fact that to change an algorithm, like to the extent that we would need it to be changed to try and really combat these problems, you would have to have a complete overhaul. And that would screw up their lovely business model that they have to make money, right.
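To make the point about training focus concrete, here is a deliberately over-simplified, hypothetical sketch: a keyword filter standing in for a trained classifier. The terms are abstract placeholders, and the point is only that an automated detector can only catch what its builders told it to look for - anything outside that scope passes through unflagged.

```python
# Hypothetical sketch: an automated filter only flags what it was configured to find.
# The term lists are abstract placeholders, not a real moderation ruleset.

FLAGGED_TERMS = {"slogan-from-movement-a"}  # what the builders focused their effort on

def flag_post(text):
    # Flag a post if it contains any term the filter was built to look for.
    tokens = set(text.lower().split())
    return bool(tokens & FLAGGED_TERMS)

posts = [
    "example post containing slogan-from-movement-a",
    "example post containing slogan-from-movement-b",  # never added, so it passes through
]

for post in posts:
    print(flag_post(post), "->", post)
```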
And what you would need to do is have external people coming in to have a look at their data, have a look at how they're making their algorithms, what's coming out the other end, to be able to see, ethically, if they're doing something that is right, for the greater good. And they won't do that. Like they will not do that, they will never let anyone in from outside. So that, I think, is one of the key issues - you can try and tell people till you're blue in the face, like I do. Like, you know, it comes back to where these companies are placed. A lot of them are in Silicon Valley. What sort of interests do these people have? Who's behind them? What sort of power implications are there? And I think that that is a big issue that's, like, under-explored in research. AD: But I love that you keep coming back to this idea of power, you know, because yeah, like you said, people think of these technological systems as, you know, Facebook is evil. I hear that all the time. It's like, well, Facebook itself is a nonliving being; it's not good or evil. You know, I think it's really important that we not only think about how these platforms operate, but who benefits from them operating in this way, and what that means for us as consumers and producers. And, you know, we have to be critical about who we're blaming for these kinds of problems. I wanted to ask you one final question. One of the things that I think is pretty amusing about the concept of artificial intelligence is I've been seeing more and more of, like, these different labels that people put on, like what you talked about with ethical AI, or, you know, the ways that they're kind of trying to tweak this idea, because I think a lot of people are terrified when they hear the phrase AI. You've talked about, and I've heard a little bit about this, something called augmented intelligence. What is that? And can that be useful to us when we're thinking about combating far-right extremism online? AK: Yes, so just to go off of what you just said, I'll do a tiny little recap. So obviously AI is just machines making decisions that are like human, right. So it's a machine displaying some sort of human intelligence, right. And then you have ethical AI, so the need for us to take into account things like algorithmic bias, who makes the algorithms - like, you know, if it's ingrained from the beginning, the algorithm's always going to be problematic. Explainable AI, explaining to people how the technology works. And obviously realizing that things like the black box - I've seen so many things like "Oh, I'm decoding the black box of AI" - that's not necessary, just say, I don't know, I don't know how it works, I'm really sorry. And then you have algorithmic accountability, so social media companies being accountable for the decisions, for the bad things that might happen, right - I'm sorry, we effed up here. And then algorithmic transparency, so being transparent about the technologies that you use as a socio-technical system. So this is my key thing. Like, it's not just about being transparent that you're using an algorithm, or unveiling in a magical way one part of the system; you need to tell people the entire story if they're going to be able to make sensible decisions about, you know, fake news, propaganda, things like that. So augmented intelligence is really like the core focus of my thesis. So it's about using technologies to enhance human research. So it's the combination of man and machine together on combating radicalization.
So the best example I have of something I'm doing at the moment is I'm using open source intelligence to research, like, extremist networks. So I use this software called Sherlock, which you run through Python, and essentially you'll put someone's username in and it will go across the entire Internet of things and the web and spit back out to you every single social media platform, or platform, that that account is linked to, right? So it'll take about a minute. So it will go through everything, rather than me just sitting there looking at, oh, does this person, "John Adams Number One," have an account on Facebook, I'll have a look - it does it within seconds, right? So then I can use that information to then go in myself as a human being, and make a decision as to whether or not these profiles actually match up. So it's about using technology to be able to aid your ability as a researcher to find things, but you're still there as the human making decisions. Because it's infeasible to assume that a machine can arbitrarily decide whether or not someone's become radicalized. But things like OSINT, AI, machine learning, they can help you massively in terms of being able to get you data. But I don't think we can trust technology to make these sorts of decisions on its own. So that's really what my thesis is about, the combination of both together, and like how we argue at Web Science, you know, we need to cross disciplines more, especially with something like extremism. Like with me, it's like, if you don't understand the history of these groups, particularly if you're looking at something like a Neo-Nazi group, you don't understand the history from the beginning of, like, the rise of fascism, Nazism, you don't understand the sub-cultural elements, you don't understand the technology. I just think it fits better to have a better understanding of all of it combined. AD: Well, as a historian, I'm very biased to agree with you. Well, Ashton, I wanted to thank you for being here today and just ask, where can people find more from you online, on the internet, out there on the web? Do you have anything coming out for people to read? Where can we learn more from you? AK: So I have a Twitter, so you can probably just find me through Ashton Kingdon. The latest thing I have coming out, for anyone that is interested in imagery or memes, I have a book chapter coming out about using memes in research as the sole object. And it's really about a conceptual framework that other researchers can build upon if they want to use memes in the future. Because I've had a lot of people contact me over the years saying, you know, I see you do all this stuff with memes - anyone that's seen my stuff through the CARR center will know I usually use images. So that's the latest thing I have coming out. But if you have any questions, please feel free to reach out. AD: I definitely also think we'll have to have you back on Right Rising to talk about memes. Because it's such an up-and-coming area in our research, and also it's something that all of us consume every day. AK: Yeah, you'll get this because, like, the main thing that I try and do in life, you will understand as a historian - so we'll just take the George Floyd protests as an example, because I wrote an article on that.
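For readers curious what the Sherlock-style OSINT step Ashton described earlier looks like in practice, here is a minimal sketch of the underlying idea: checking whether a username resolves to a profile page on a handful of sites. The username and URL patterns are illustrative placeholders, and the real Sherlock tool covers hundreds of sites with far more careful per-site detection rules.

```python
# Minimal sketch of a Sherlock-style username lookup (illustration only; the real
# Sherlock tool checks hundreds of sites with much more robust per-site logic).
import requests

USERNAME = "john_adams_number_one"  # hypothetical handle, as in the example above
SITE_PATTERNS = {
    "Twitter": "https://twitter.com/{}",
    "GitHub": "https://github.com/{}",
    "Reddit": "https://www.reddit.com/user/{}",
}

def check_sites(username):
    results = {}
    for site, pattern in SITE_PATTERNS.items():
        url = pattern.format(username)
        try:
            # Naive heuristic: treat an HTTP 200 as "a profile with this name exists".
            # Real tools need per-site rules because many sites don't behave this simply.
            resp = requests.get(url, timeout=10)
            results[site] = resp.status_code == 200
        except requests.RequestException:
            results[site] = False
    return results

if __name__ == "__main__":
    for site, found in check_sites(USERNAME).items():
        print(f"{site}: {'possible account' if found else 'not found'}")
    # A human analyst still has to verify whether any hits are actually the same person.
```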
It's about educating people about the historical significance, like signification, of the memes, you know, I mean, like, people share memes as humor, satire, tropes, without understanding the historical implications of the history of, like, the Confederacy, Jim Crow, slavery, things like that. And then I also look at, like, the criminological elements, so what's going on now that they can draw from them. And it's this whole, like, a picture can paint a thousand words. And it's about capturing, like, the universality with the memes, and the inherent crystallization of meaning that different users will form on different platforms. So I think, yeah, memes are a really important way to try and understand radicalization, because I think a lot of people will pick up on memes before they necessarily pick up on a thread on 4chan or something. AD: Yeah. And I mean, I think that's such a great point. And memes, especially when you're thinking about these historical memes, they really get at this idea. You know, I think people have a false conception of history as just one set of facts. And that's true, I mean, there are dates and times and when things happened, but a lot of what history is, is interpretation of why things happened, what caused certain things, what factors were most important. And when you have these alt-right histories, a lot of them are really warping history and putting it in memes and creating this false reality. And, you know, it's very toxic and so quickly shareable. So, well, Ashton, thank you so much for being here today. AK: Oh, thank you so much for having me. AD: This has been another episode of Right Rising. We'll see you all next time.