Episode 122 === [00:00:00] Introduction and Welcoming the Guest --- [00:00:00] Sean: Hello and welcome to Teaching Python. This is episode 122, and today we're talking about the role of AI in society. My name is Sean Tibor. I'm a coder who teaches. [00:00:27] Kelly Schuster-Paredes: And my name's Kelly Schuster-Paredes, and I'm a teacher who codes. [00:00:31] Sean: And today we are joined by a very special guest. We're joined by Cecilia Danesi, all the way from Spain today, and we're super excited to have her here to talk about the role of AI in society and how it affects different people in different ways. So welcome, Cecilia. It's wonderful to have you today. [00:00:48] Cecilia Danesi: Well, thank you so much. I'm so happy to be here. [00:00:51] Getting to Know the Guest --- [00:00:51] Cecilia Danesi: Uh, I have to correct something, because I'm living in Spain, yes, but I'm from Argentina. [00:00:58] Sean: Ah. [00:00:59] Cecilia Danesi: It's a little confusing, because now I live in Spain, I'm working in Spain, in Europe, but I was born in Argentina. [00:01:07] Kelly Schuster-Paredes: I was going to correct you, since we have a very important Argentine in South Florida with us. [00:01:12] Cecilia Danesi: Great. [00:01:13] Kelly Schuster-Paredes: Yes, right. So, you know, she's probably very proud of his accomplishments. [00:01:18] Cecilia Danesi: Yes. Okay. [00:01:22] Sean: Before we begin, I wanted to pass along a message. I was telling my daughter this morning, she's 10 years old, that I was going to be recording this podcast with you, and she asked me about who you were and what you worked on. She's doing an afterschool robotics program right now, and she is really loving the engineering and the coding and the problem solving. So she wanted to say hello, from one smart woman to another, to keep going, and she's very happy that we get to chat. [00:01:47] Cecilia Danesi: Thank you, thank you. I'm so happy, because this is the idea: to inspire young women. [00:01:53] Sean: This is good, because now I can talk her into helping me edit the podcast afterwards so she can be a part of it. [00:01:59] Cecilia Danesi: Yeah, she's going to edit it later. [00:02:01] Sean: Yep. [00:02:02] Kelly Schuster-Paredes: Yes. [00:02:03] Cecilia Danesi: Yes. [00:02:03] Kelly Schuster-Paredes: Absolutely. [00:02:04] Wins of the Week: Sharing Personal Experiences --- [00:02:04] Sean: Why don't we get started with the wins of the week, and then we'll do a proper introduction in our main topic. Cecilia, why don't you go first? The win can be anything. It can be inside work, outside of work, whatever is best to share. [00:02:16] Cecilia Danesi: So maybe about this week we can talk about the new, I don't know if new or not, because every day there is something new. ChatGPT, I think, is going to be an interesting topic. Copilot, another interesting topic. And then, well, something a little more sad, in the sense of the bad or negative consequences of AI. For example, this is not about this week, but there is news about using AI to create fake porn, for example, images, pictures, which is something that is very, very dangerous. But maybe we can talk about this later. [00:03:05] Sean: That would be great. We'd love to get into that. [00:03:07] Cecilia Danesi: Yes. [00:03:08] Sean: All right. Kelly, any wins from you this week? [00:03:11] Kelly Schuster-Paredes: Oh yeah. Well, a quick, like, win for me. Something that happened in the classroom.
We are looking at version control with Python and building a game, Oregon Trail, which we've done in three versions. And in the third version I took our typical spaghetti code. Cecilia, that's code that goes if, elif, elif, elif, where the kids keep writing all these conditions for their algorithms, and the students will write 700 lines of code where people like Sean or myself can make it shorter and more logical, but the kids aren't really there yet. So anyways, in the spirit of this show, I took the spaghetti code of 700 lines and put it into ChatGPT, and I was telling it, coercing it, to write a program that an eighth grader can still understand, in a dictionary form, but keeping to the true nature of this code. And after a lot of prompting it produced something that was nice, and I got to show it to the kids. But the win of the week was not the fact that I used ChatGPT; it was the fact that it took a long time in the prompting, and I wanted to show them that this answer didn't just come from me dumping it in and saying, fix it. It took a little bit of coercing, and I wanted them to understand that it wasn't instantaneous, that you needed to have the knowledge in order to guide the AI to do its job. That was a little bit inspiring for them, because they got to see that I scrolled and scrolled and scrolled and scrolled, and they're like, oh wow, that takes a long time. I'm like, yes, you have to understand things in order to use AI. So it was good. It was a nice one. [00:04:47] Cecilia Danesi: Okay, great. Nice. [00:04:49] Sean: Yeah. For me this week, I'm back to doing some teaching. I'm preparing to do a bootcamp for a team of engineers next week, and so a lot of the work has been structuring: what are we going to learn, what are our learning outcomes, how are we going to get there? And I had this moment of realization as I was doing this. Early in my career, I didn't know what I wanted to be or what I liked to do, and I was always trying to be something that I thought would get me advancement or promotion or success. Where I am now in my career, I realize there are two things I like to do really well. I like to build systems with code and technology, and build things that actually work. And I also like to build engineers. I like teaching people, I like helping them become better. Those are the two things that I really love doing. And what I'm finding at this point in my career is that doing those things really well is leading to the success I always tried to get when I was younger and didn't know how to get there. It's about finding those things that you really love and like to do, and it's something you can't just tell someone when they're young; you can't teach them. They have to discover it for themselves. I had this moment where it just fell into place this week, that I'm really loving what I'm doing because it's the two things that I do really well and love to do. [00:06:03] Kelly Schuster-Paredes: It's awesome. It's awesome. We do a lot of reflection, uh, Cecilia, like that. [00:06:08] Cecilia Danesi: Yes, I see. [00:06:09] Introducing the Main Topic: AI and Its Role in Society --- [00:06:09] Sean: So Kelly, why don't you introduce Cecilia this week? You found Cecilia, you brought her to the show. This is such a great moment, and I'm so excited that we're here. [00:06:18] Cecilia Danesi: Thank you. [00:06:19] Kelly Schuster-Paredes: I am. I'm excited as well.
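A minimal sketch of the kind of refactor Kelly describes in her win of the week: collapsing a long if/elif chain into a dictionary that maps a player's choice to a function. The scene names and handler functions here are hypothetical stand-ins, not code from the actual classroom Oregon Trail project.

```python
# Spaghetti version (the shape of the original 700 lines):
#     if choice == "hunt":
#         ...hunting code...
#     elif choice == "rest":
#         ...camping code...
#     elif choice == "trade":
#         ...trading code...
#     else:
#         ...error handling...

# Dictionary-dispatch version: the same branching, expressed as data.
def go_hunting():
    return "You set out to hunt for food."

def make_camp():
    return "You rest for the night and recover strength."

def visit_trader():
    return "You trade supplies at the general store."

# Each key is a player choice; each value is the function that handles it.
ACTIONS = {
    "hunt": go_hunting,
    "rest": make_camp,
    "trade": visit_trader,
}

def next_scene(choice):
    # dict.get() replaces the final else branch: unknown choices
    # return None instead of raising a KeyError.
    action = ACTIONS.get(choice)
    if action is None:
        return "You can't do that here. Try again."
    return action()

print(next_scene("hunt"))   # You set out to hunt for food.
print(next_scene("fish"))   # You can't do that here. Try again.
```

Adding a new scene becomes one new function and one new dictionary entry, rather than another elif, which is why this form stays readable even for an eighth grader.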
So I came across Cecilia through Gabriela Ramos. [00:06:26] Cecilia Danesi: Such an honor for me. [00:06:28] Kelly Schuster-Paredes: It was, it is an honor even to communicate with both of you, and we were really excited. I was following the work of UNESCO. We often use the Sustainable Development Goals in class to give kids some data to work with in Python, and so I follow both the UN and UNESCO. And I came across her because of a friend that works in AI. She worked with Gabriela, and she's on my LinkedIn, and anyway, she put me in touch with you. And I'm so excited, because you're one of the 17 leading female experts on the UNESCO Women for Ethical AI platform. Is it a committee or a... [00:07:12] Cecilia Danesi: A platform. [00:07:13] Kelly Schuster-Paredes: A platform. [00:07:14] Cecilia Danesi: Platform, yeah. [00:07:15] Kelly Schuster-Paredes: And you are not only working in this sector, where you're trying to increase progress toward non-discriminatory algorithms and data sources, and you're trying to get girls into code, and women and underrepresented groups to participate in AI development, but you're also a lawyer. You're also looking at artificial intelligence and gender rights and social impact. And as I was researching about you, I was like, wow, this is amazing. It's such an honor to have you here. I think as educators we are probably in the what-the-heck mode. As educators, we really don't know what we're doing. We're trying to pull a lot of pieces together, and so our hope is that you can shed some light for education and educators around the world and open our eyes to things that we need to be more aware of. So, was that good, Sean? Did I miss anything? [00:08:17] Sean: I mean, I'm sure we will discover more as we go. So, Cecilia, welcome. We're so excited to have you. Where should we begin? [00:08:23] Discussing the Challenges and Implications of AI --- [00:08:23] Sean: Let's talk about this in the context of our times. There's been a tremendous amount of new research and new platforms available in AI over the last 12 months, especially with generative AI. But where does this really start? Because it's not like you woke up a year ago and said, you know, this AI thing is becoming a problem, we should talk about it. You've been working on this for a while. Where did you begin looking at this, and how did you get started? [00:08:49] Cecilia Danesi: First of all, we have to recognize that ChatGPT, or OpenAI, in November, boosted the popularity of AI, because now everyone can touch AI, use it, and create things with AI. But if we think just for a minute, we can realize that we are interacting with algorithms every day: in digital platforms, when we are talking with a chatbot, on social media, everywhere. The idea of AI is very, very old. The problem is that in the past we didn't have the huge amount of data that we need to train an algorithm, or the capacity to process this huge amount of data. But the idea of a machine that tries to imitate our mind, our brain, is very old. The fact is, nowadays we have the technology to do that. And of course this is an interdisciplinary tool that can help us in every aspect of our life. This is very good, but at the same time, it could be very bad or dangerous, because we don't have the limits. And when we talk about limits, we talk about the law, the only thing that can put a control or a limit on an AI.
I can say that, since we have ChatGPT in our hands, this is the huge beginning of AI, in the sense that it became popular, it brought AI to everyone. So nowadays, when I am having a mate, a typical beverage in Argentina, or a coffee with my friends, they start to talk about ChatGPT. For me, it's a miracle. It's incredible, because before that, everyone asked me: what are you doing? [00:10:54] Kelly Schuster-Paredes: The link between... [00:10:54] Cecilia Danesi: Law, human rights, social science with AI? There is no big data, there is no link, there is nothing related to this. And nowadays it's easier to talk about it. So the ChatGPT issue was very important for this debate. [00:11:12] Kelly Schuster-Paredes: I couldn't agree with you more. I was reflecting on this prior to the episode. Prior to a year ago, Sean and I would talk about algorithms. We teach coding, so we have those conditions in place. We make our simple algorithms with our kids, and that was kept only in the computer science program, right? Only in that classroom, within these little walls. And as educators of computer science, we wanted more teachers to understand that it's everyone's job to look at algorithms. ChatGPT kind of did that for everyone, and now all of a sudden it's everybody's job to be aware of this, because now we're trying to bring in ChatGPT. We have opportunities for kids to use it, even though they probably shouldn't. And I'm going to get to a question for you. [00:12:00] Cecilia Danesi: Okay. [00:12:01] Kelly Schuster-Paredes: I saw this quote from a recent article from Augustine Rubal. [00:12:06] Cecilia Danesi: Yes. [00:12:07] Kelly Schuster-Paredes: And I love this question that he posed, and there wasn't an answer, so I'm going to give it to you. He said: the goal of society is to adapt and convert to a 4.0 world. We're really in a 4.0 world, we need to get in motion, but how do we educate that future generation? And what are the steps that you and UNESCO are taking in order to help us help them? [00:12:33] Cecilia Danesi: Well, this is, ah, a small question, you know. Very easy. [00:12:40] Kelly Schuster-Paredes: Yeah, we can take all day. [00:12:44] Cecilia Danesi: A lot of things. First, the main challenge of the fourth industrial revolution is how to implement, use, and develop AI in an ethical way. Because first of all, we have to realize that there is a huge and important link between social science and technology, because technology is changing the way that we live, the way that we interact, even the way we fall in love. So everything. This is very important. We need to focus on education, especially with kids, with teenagers, because they are our future and they are the ones who especially interact with technology. So it's very important that they are aware of the risks and the benefits of AI. This is the first point of the analysis. I want to share with you a case in Spain, because at the beginning I mentioned digital violence. This case shows very practically why education, educators, teachers are very, very important in this topic. This is a case from a very, very small town in the south of Spain. Here, a group of students at a school used generative AI to create fake porn pictures of their classmates. So there were students under 18 years old who created fake porn pictures of their classmates. There are several cases like this elsewhere, also with school students.
And apart from that, there are other cases, for example, related to adults. For example, in Costa Rica the same happened with a journalist. As I mentioned, this is known as fake porn. And the problem with this, for example, is that these cases normally affect women, and it has a huge impact on their lives. For example, in the case of the students, the student didn't go to school for a long period of time, and in the case of adult women, they lost several job opportunities, because they were linked to the porn industry, because their images were on these kinds of websites. In all these cases, the victims wanted to leave their houses because of the shame. [00:15:28] Kelly Schuster-Paredes: Of course. [00:15:28] Cecilia Danesi: There are consequences, damages in their lives. So why is education important? Because we also have cases where children are the ones who create this kind of damage, this kind of image. So we have to start from education, because of course it's important to have a law, yes, but we are talking about children, we are talking about kids. So we need, from the school, from the family, from the very beginning, to create awareness of how we can use this technology, and of course of the risks or the damages that this technology can cause in case we use it in a bad way. So I think that educators and teachers have a great challenge with this kind of case. [00:16:23] Sean: It's a really good point. Kelly and I have talked a lot about these ages. When you're talking about teenagers and preteens, adolescents are at an age where they are not great at evaluating the risks and consequences of their actions. That's a learned behavior, and they tend to underestimate the effect of the actions that they take. Having these examples, as horrible as they are, to be able to show the consequences of something that they think is funny or mean, something that they think has a limited scope but has wide-reaching consequences, is a way that we can show them that although these tools are very powerful and you can use them for good things, in bad cases, or horrible cases, they can have consequences that really affect people's lives for years to come, not just in the moment. [00:17:14] Cecilia Danesi: Yes, absolutely. For that reason I always try to talk with examples, with real cases, because sometimes when you talk about just theory, it's a picture in our mind, but it's not real. But all these cases are real, so it's easier to understand and to work with real cases. For this, it's very important to study and spread them. [00:17:46] Sean: I also like that we can use global examples, that it's not just something that happened in my country. [00:17:53] Cecilia Danesi: In one country, yes. [00:17:54] Sean: It's happening in many places. And that is part of the phenomenon of AI: it's available pretty much everywhere. [00:18:02] Cecilia Danesi: Yes. And this is not about the big countries or the big cities, no. It can also happen in a small town in the south of Spain, in Brazil, everywhere, because normally these kinds of tools are close to everyone. So it's very important that we talk about this in every place in the world, not just the big cities. [00:18:27] Kelly Schuster-Paredes: Yeah, I always think about downstream versus upstream. The law kind of affects the downstream effect. It's after the fact; something bad has already happened, and we're chasing the tail downstream. And it's the educator's job upstream to educate the students.
And right now, most educators, all they can think about is: how do I keep kids from cheating? How do I keep kids from writing their essays in ChatGPT? Where the bigger picture is to understand the risks, to understand what might happen when we use it poorly. But I think there's also the bias, of not really understanding how that output comes out. Do you work with the bias of data as well, in some of the cases? [00:19:20] Cecilia Danesi: Yes. I was taking notes, because I have a lot of points to talk about here. First, you mentioned ChatGPT. [00:19:26] Kelly Schuster-Paredes: Mm-hmm. [00:19:27] Cecilia Danesi: One of the main problems for the teachers, for the professors, is: how can I prohibit the use of ChatGPT? I want a guarantee that none of my students are going to use it. And this is not the point. We're going to use technology in the present and of course in the future. When we are at the university, training for our future jobs, of course we will use technology to work. So why not learn with the help of technology? And this is a huge challenge, because we have to change the way that we learn and we teach, with technology, with the help of technology. And this is a challenge especially for educators, and also for students. So the first point is that we... [00:20:24] Kelly Schuster-Paredes: We as educators, as professors, have to think about how... [00:20:28] Cecilia Danesi: ...to teach with technology. So the technology is going to be like another teacher with me, one that is going to help me teach, to train our students. And apart from that, how can I teach the students the best way to use technology, in an ethical way? So we have two main points to analyze: incorporate technology into the way I teach, and create awareness in my students of the best way to use technology, and especially of the consequences and the damages that technology can do. This is one point. The second is biases. Well, algorithmic bias is one of the main, main points of analysis. When we're talking about algorithmic bias, technically it could be like a mistake or a bad prediction of the system. But the problem comes when this prediction means the unfair treatment of a group of people, for example, discrimination or the violation of a human right. Okay, we have different examples. For example, the classical one that you're going to read everywhere is the Amazon case. This is the typical case where Amazon created an AI system to put a score on CVs, on candidates' resumes, for a job. And the system put a low score on women's CVs in comparison with men's, because the system was trained with the last ten years of the company's data, where the only ones who reached high positions were men instead of women. So the system learned that men work better than women. This is an algorithmic bias, and there are a lot of causes for why this happens. I work a lot on this in my last book, The Empire of Algorithms. One of the main points is the dataset that we use to train an algorithm, to train a system. And this is the example of the Amazon case. The same happens with generative AI. In this case, the problem is bigger, because we have no control over the dataset, over the data that we use to train the algorithm, because it uses a huge amount of data and it's very difficult to control it. So of course these systems are going to be biased, and these systems are going to be biased because they are learning from our society, from our perspectives, our values, our biases. So the system is going to reproduce them.
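A toy illustration of the mechanism Cecilia describes in the Amazon case: a scorer trained only on historical outcomes reproduces whatever skew those outcomes contain. The records and numbers below are invented for the sketch; this is not real hiring data or Amazon's actual model, just a minimal demonstration of how a biased dataset yields a biased prediction.

```python
# Hypothetical historical records from a company where almost everyone
# who reached a high position was a man. Labels: 1 = promoted.
history = [
    {"gender": "m", "promoted": 1},
    {"gender": "m", "promoted": 1},
    {"gender": "m", "promoted": 1},
    {"gender": "m", "promoted": 0},
    {"gender": "f", "promoted": 0},
    {"gender": "f", "promoted": 0},
]

def promotion_rate(gender):
    """A naive 'model': score a candidate by the historical promotion
    rate of people who share their attribute. Pattern-matching on
    biased outcomes amounts to exactly this."""
    group = [r["promoted"] for r in history if r["gender"] == gender]
    return sum(group) / len(group)

for g in ("m", "f"):
    print(g, promotion_rate(g))
# m 0.75
# f 0.0   <- an equally qualified woman scores zero, purely from history
```

Nothing in the scorer is malicious; the unfairness lives entirely in the training data, which is why Cecilia's point about controlling the dataset, and Sean's point below about how the problem is framed, both matter before any line of model code is written.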
[00:23:26] Sean: The thing that came to mind while you were talking about the Amazon example, and I've heard a similar case around mortgage approvals for buying homes and the biases that are in there, is that a lot of this problem is also in the way that the problem is being framed, the problem that we're setting up the algorithm to solve. If Amazon had framed the problem differently, it would have been immediately apparent that there was a problem with bias. They didn't frame the problem as: how do we ensure equality or equity in our hiring process, and ensure that people are getting an equal opportunity to be hired? They just said: show me the best candidates for the job. If they had framed the problem differently, maybe that bias would have become more apparent. So sometimes even the problem comes before the algorithm; the way that humans set up the problem to be solved has implicit bias in it, and that can then be amplified by the algorithm and the data that we select. [00:24:18] Cecilia Danesi: Yes, absolutely. I completely agree with you that the problem is before the AI, so it's not just about technology. The key point about technology, and especially AI, is that it has the power to amplify and intensify our biases. One system with one bias, two biases, or whatever, can make a prediction in a minute. A human does not have this capacity to produce, for example, a lot of sentences, a lot of results, a lot of predictions. So the main point with AI is, first, it's invisible: we don't know that we are interacting with an AI system. And second, it has the power to reproduce and amplify all these problems. So that's why we have... [00:25:11] Kelly Schuster-Paredes: That's why there are so many questions. I want to go into it. If you can, in a nutshell for our listeners, give us an understanding of some of the laws of AI and how they're evolving. Because I know, in my role, with all the apps and things that we want to bring into education, I'm always looking for COPPA and FERPA. I want to know about GDPR, I want to know where the data's going for kids. And that's a lot, because even if we have all those protections, we still don't really know, and I know the laws are constantly changing. Can you kind of give us a... [00:25:48] Cecilia Danesi: Yes. [00:25:49] Kelly Schuster-Paredes: ...summary? It's huge. [00:25:50] Cecilia Danesi: Yes. Well, the first point is that, of course, we don't have, and there doesn't exist, a general law, a law which is mandatory for all countries. It doesn't exist. And of course that makes sense, but it creates a problem, because technology doesn't recognize frontiers, so it's very difficult to create a law for each country when we are talking about a global phenomenon. Second, we can say that there is no general law of... [00:26:28] Kelly Schuster-Paredes: Of AI. [00:26:32] Cecilia Danesi: There are AI laws for specific areas, for example health, autonomous vehicles, or something like this, but there is no general law. The main example of this, and I think it is going to be the main rule in this area, is the AI Act from the European Union. It is not in force yet, but they are working on it. This act is going to be very important because it's general, in the sense that it's going to apply not just to one area; it's going to apply to all the systems that are included in the act. And then, I think the most important part is the prevention, so we are not talking just about consequences,
because of course when we arrive at repairing the damage, it's late; the damage has already happened. So the idea is prevention. The act has different requirements that each kind of system has to comply with, in order to... [00:27:39] Kelly Schuster-Paredes: In order to prevent, for example. [00:27:41] Cecilia Danesi: Exactly. So this is the key for the future: to work on prevention, to work on awareness. And for that reason, of course, education is the first step to start with. A clear example of this is the case that I mentioned before, in Spain. The thing is that in most of these cases it's impossible to find the people responsible, the people who made the prediction or who created the fake porn. And sometimes they are under 18, underage children. It's very difficult. Of course, we are talking about a phenomenon which cannot be forbidden. I always use the case of Uber: in many countries Uber was forbidden, but people continued using it, so prohibiting the technology is impossible. We have a lot of challenges, especially when we talk about AI. Here at the university we have a master's degree on the ethical governance of AI, and we are working with different governments to create this legislation on AI. Also with UNESCO; we were at the AI ethics forum in Chile last week. There are different initiatives to work on this, because we need to create a harmonized regulation on all these topics. [00:29:04] Sean: There are so many directions we could take this in, and I know we're out of time here. We're going to pause here, and hopefully we can have you come back again on the show. [00:29:11] Cecilia Danesi: Yes, I will try to. [00:29:12] Sean: Bring friends. We'll make it a bigger conversation. I love this, and I love where it's going. You have your book about the empire of algorithms, you have the work that you're doing with UNESCO, you have the work that you're doing with governments. If people want to learn more, if they want to follow the work that your platform is doing, that you're doing with UNESCO, your own work, where's the best place for people to learn more and to follow what you're doing? [00:29:34] Cecilia Danesi: I have my website, which is my name, Cecilia Danesi. There I put a lot of information about this. And then, of course, social media. I especially use Instagram, under my name, Ceci Danesi. I always give, for example, courses, or a lot of information about this, in order to create awareness of all these topics. So apart from coming back again to the podcast, you can find me in the virtual world. [00:30:05] Sean: I love that. And we'll definitely send our audience your way. [00:30:07] Cecilia Danesi: Thank you. [00:30:08] Closing Remarks and Future Directions --- [00:30:08] Sean: Here's our charge to our audience. This episode in particular is not just for computer science teachers. It's really for social sciences teachers, it's for humanities teachers, it's for physics teachers. Send it to other teachers, share it with them, so everyone can raise our standard of how we think about AI and how we educate the next generation on how to use it ethically. [00:30:31] Cecilia Danesi: Yes. I think that this is the main point: to work with an interdisciplinary perspective on this topic. Because as we can see, we have a lot of examples of AI with algorithmic bias, with digital violence, and it's about a lot of disciplines, not just one. Not just about computer science, not just about lawyers.
We need to talk from the perspective of different sciences. This is also a challenge for us, because when I studied, it was all just about lawyers, not other disciplines. And nowadays, in a global society, in a global world, we need to talk from different perspectives. So perspective and diversity, this is very important now. [00:31:18] Kelly Schuster-Paredes: You hit it right on the mark. As educators, this is the interdisciplinary project of the century. This is where we should be really focusing. And Sean and I just want to really thank you. I would love to try to get you into our innovation institute that's happening in April. I'm going to do my little magic; maybe you can be a virtual speaker, 'cause we are going to have an AI panel. More and more people need to hear about and learn about AI ethics, bias, and the laws that hopefully will be created globally. We just really want to thank you for being on our show. [00:31:57] Cecilia Danesi: Thank you so much for the invitation. It's been a pleasure. [00:32:02] Kelly Schuster-Paredes: Thank you. Sean, anything to add? [00:32:04] Sean: Nope, that'll do it for this week. So for Teaching Python, this is Sean. [00:32:08] Kelly Schuster-Paredes: And this is Kelly, signing off.