Sean Tibor: Hello, and welcome to Teaching Python. This is Episode 111, and we're here talking about generative AI and the complete destruction of learning as we know it. Or maybe not. My name is Sean Tibor. I'm a coder who teaches, and my. Kelly Schuster-Paredes: Name is Kelly Schuster-Paredes, and I'm a teacher who codes. Sean Tibor: And we are joined today by a great friend of the show, Eric Matthes, author of Python Crash Course and author of a Substack where you can get Python musings all the time. Eric, welcome to the show. Eric Matthes: Good morning. Thank you for having me again. Sean Tibor: I think this officially makes you our most frequent returning guest, and we are super excited to have you with us today to talk through this, especially after our conversations at the Education Summit at PyCon. Eric Matthes: Thank you. I love talking to you both. I'm very happy to be back. Kelly Schuster-Paredes: I think Eric was, like, one of the first people we met. Yes. Before we knew what we were doing. Yeah. That's awesome. Sean Tibor: And I'm really excited about this conversation to talk about AI, because it felt like when we were at the Education Summit talking about this, that we were missing something very important, and that something was Kelly. So now we've got you here. We've got you in the conversation. I can't wait to dive right in and learn about this area and talk about some of the things that we're trying to figure out together. So before we do that, why don't we start with the wins of the week? And I think we're going to try something a little bit different here. One of us didn't prepare a win for this week, so we've asked ChatGPT to generate a win for us. So your task as the audience or the listener is to figure out which one it is. So, Eric, would you like to go first? Eric Matthes: Sure. My win of the week. This is going to throw your contest a little bit.
I've been writing a weekly newsletter about Python, and it's a lot of work, but it's really fun because it puts me into something different most weeks. And so I did a dive into what the most popular recent Python projects on GitHub are. It got me looking at not just what everybody already knows about, but what people are putting out there. And so one of them was a project called git-sim, which generates images and animations of your own project as managed by Git. I was fascinated by the project, and I ended up contributing the initial test suite for it. And I don't do a whole lot of contributions to other people's projects because I have so many of my own going on, so it's very satisfying to get a PR merged. Sean Tibor: That's pretty great. Like an actual photograph or generated art or something like that from your Git history. Eric Matthes: Yeah. You run a command like git-sim merge new-branch, and it will generate an image of what that merge operation looks like, which commits are being merged into which other parts of your project. Sean Tibor: Oh, very cool. Eric Matthes: Teaching and learning. Sean Tibor: I have some very active projects that it would be interesting to run that on and see what happens. Eric Matthes: Yes. Kelly Schuster-Paredes: So you'll see what's going in and how you're merging it back. Is that right? Sorry, I'm processing slowly. Eric Matthes: Sure. Yeah. One of the things I really like about this is it acts on your own project. So rather than, here's a tutorial about how Git works, we're all going to use a sample project, it acts on your own project. So you come into it with an understanding of what you think your Git history has been doing. And most Git history is basically a series of commits, and so you see those commits as circles. But if you have branches, how do those fit in with your other commits? How do commits on a new branch fit in with a development branch?
What does a merge do? And so if you have an accurate understanding of Git and how it works internally, it shows you that. And if you don't have a solid understanding of how Git is actually working, it can give you a much better idea of what those operations do. That's awesome. Kelly Schuster-Paredes: I really need to get into that, because I failed at the whole branching thing. Eric Matthes: Branches are so useful. Sean Tibor: I have a lot of junior engineers who I think would find this very helpful. One of the things they definitely struggle with is building that mental model in their mind of what actually happens with the different Git commands that they're using. So I'm excited to use this. Yeah, very cool. All right, Kelly, how about you? Kelly Schuster-Paredes: Me? Okay. Oh, wow. So it's summertime, and I've been taking the initiative to start a new hobby, and I've decided to start playing an instrument. So that's what I'm doing, and I'm dedicating time during the week to actively pursue and engage in that hobby. Sean Tibor: So what instrument are you learning? Kelly Schuster-Paredes: I am learning the recorder. Sean Tibor: All right. I have a feeling that this is not going to be as hard of a contest as we originally set out. Kelly Schuster-Paredes: I just thought it was funny. Apparently, starting something new and investing in my own personal growth is a valuable accomplishment. Sean Tibor: Okay. All right. We'll take its word for it. So, for me, I guess, kind of two things have been a win this week. I can't choose between them. The first one was that one of my new engineers, who's been working on my team for about six or seven months, solved a problem that I had been working on and hadn't quite gotten to yet. And it was one of those great moments where she figured out a way to do it that I guess I had considered but didn't really give much credence to, or didn't fully pursue.
And she ran with it and made it work, and it totally solved the problem that we were trying to solve. And I was super excited because she just really dug into it and took the initiative to solve it. And I have the feeling it's a major milestone on her journey, because she knew that she did something that I hadn't been able to do yet, and that was, I think, really satisfying for her. So I saw that as a big win. It's great to see people growing and developing right in front of your eyes, and that was a big moment. So very satisfying for the inner teacher in me, because I've been investing a lot of time into helping her grow and develop, and to see her start to, or continue to, grow and develop as I knew she could has been really satisfying. Kelly Schuster-Paredes: That's very cool. I do actually have another one. It's not really my win. Sean Tibor: Okay. Kelly Schuster-Paredes: But two of our former students, Ausar and Amen Thompson, just got picked in the NBA draft, the fourth and fifth picks. Yeah. So very cool to see those boys. That was awesome. One's going to, I think, the Rockets and one's going to the Pistons. Sean Tibor: That's pretty amazing. Kelly Schuster-Paredes: Were you there with them? Sean Tibor: No, they were, I think, a year or two ahead of when I taught, but I know you've introduced me to them a few times. Kelly Schuster-Paredes: You can't miss them. They're, like, seven-foot-tall twins. Well, now they're probably taller, but very cool. Sean Tibor: Definitely helps with the basketball. And on the sports front, the other one that I had was that I was in San Diego this week with my family on vacation. We went to the San Diego Zoo with my kids, and I hadn't been there since I was ten years old, and it was a lot of fun. And as we were leaving the park, my son looked up and said, hey, look, lacrosse. And he's been learning how to play lacrosse for the last three or four years.
And I've already spent a fair number of hours watching him play from the sidelines. And it turns out that the World Lacrosse Men's Championship is happening in San Diego right now, and the opening game was on Wednesday night, so we were able to go get tickets and watch the US play Canada and see the opening ceremonies with all the teams coming onto the field. There are 30 teams from all over the world playing together, and they haven't played since 2018. They normally play every four years, but they pushed back a year because of COVID. And so we got to see some really top-class men's lacrosse being played in San Diego, and we just happened to stumble upon it. And what's great about it is that, for the most part, it's not this huge venue. It's very up close and personal. We went to one of the pool games yesterday morning before we flew home, and we were standing right on the sidelines at midfield, watching some of the best players in the world play lacrosse. And it was a lot of fun to watch, and the players came out and talked with all the kids and everything after the game. So it was just one of those great moments where sometimes you look up and you see something that's a great opportunity, and it all plays out to your benefit. Kelly Schuster-Paredes: That's awesome. That's what happens when you don't have a phone in your hand. Sean Tibor: I mean, I think I might have been looking at my phone when he saw the billboard. Kelly Schuster-Paredes: Good on him. All right, let's get into this topic, because every time we have a guest, I always want to get into the topic, and we always kind of skirt the topic. And now you guys have promised me to reenact the topic of PyCon that I missed. Sean Tibor: Yeah. Well, let's get started.
I think, Eric, you had posited the question at the Education Summit and ran, I think, a group of twelve to 15 people through a workshop discussion around the impact of AI on education, particularly ChatGPT, but also the many other models that are coming out and emerging. Can you talk us through a little bit of how you came up with that workshop idea and what happened at the Education Summit during the discussion? Eric Matthes: Sure. I'd like to take a step back for a moment and, just for people who were not at the Education Summit, share that I've been going to the Education Summit since it first started. I went to my first PyCon in 2012, and I forget if it was that year or the next year, but I saw an Education Summit proposed, and so I signed up for it. I think it might have been 2013. Do you guys know off the top of your heads? Sean Tibor: No. I might be able to look it up real quick. Eric Matthes: Yeah. So I've been going to every one since then, every one that I've attended, which has been most of the PyCons since then, and this was the best Education Summit that I can remember. And I think it's worth saying that, and it's worth articulating some of the reasons for that. Sound good? Sean Tibor: Yeah. Absolutely. Eric Matthes: All right. So it feels like we've resolved the attendance questions. And I think it is worth hearing, for people who are considering going to PyCon and in particular going to the Education Summit, that just about everybody in this audience would enjoy the Education Summit. For people who are unaware of the background of setting up the summit, it's typically on the Thursday when PyCon starts, and it's the same day that tutorials are happening, and tutorials are paid presentations about particular topics. And when people sign up for PyCon, they're presented with all these options for what they can do throughout the conference.
And so people come to the Thursday options and they see a bunch of paid options, and then they see the free Education Summit. And so what ends up happening is a lot of people just check the box, and then we as organizers hear, oh, 150 or 200 people are going to show up for the Education Summit, and then 50 people show up, or 60, or 40. And for a few years, it felt like we were doing something wrong, like these people had signed up for something and then lost interest, until somebody finally named what was happening: these people are just checking off the free option and then realizing that it's not a fit for them. I don't remember who named that, but I appreciate it, because it dropped the question for a lot of us of what are we doing wrong, and helped me realize that the people who need to go to the Education Summit end up going to the Education Summit. And it's such a good group of people, and letting go of that concern that we were missing something was really nice. Sean Tibor: Yeah. And I think one of the side benefits of this that has worked out to our advantage at the Education Summit has been that we always get a room that's quite a bit larger than the number of seats we actually need. But we turn that into something very useful in the afternoons, which is this idea of breakout sessions, where we can split out and talk about various subjects that are of interest to different people depending on where they are and what they're doing. Which kind of leads into the opportunity here for the workshop, which was one of these breakout sessions that we arranged to take advantage of the space that we have. Eric Matthes: Yeah, I'm going to name a couple of other quick things. All the content was relevant. For a while, it felt like we should be splitting into tracks of, like, K-12, undergrad, graduate. But for whatever reason, the talks have met the needs of most people who show up.
And I think your podcast might have something to do with that, because you two are grounded in middle school, but you've brought together a variety of all kinds of people who deal with teaching Python, and I think I'm seeing that in the Education Summit. I'm not sure what the connection is, but it's a good thing. So, yeah, what was interesting about organizing this year's summit, as far as what talks to include, is that the call for proposals ended before ChatGPT really came out, and so there wasn't a chance for people to propose talks about how ChatGPT and AI tools might be impacting teaching and learning. And so when I saw that, and saw that there were two workshops in the afternoon, and we typically do three to four just to give people choice and get smaller groups, I proposed just facilitating a conversation about exactly this question: anybody who wants to talk about AI and its impact on teaching and learning, come over to this corner and we'll organize that conversation. And I'm glad I did it, because I did count, and it was over 20 people who joined that corner. So these are people looking for a conversation with like-minded people about how this is all impacting education. So I structured it around, and I think Kelly's going to appreciate this, the idea that most people don't actually have answers to any of the questions we have about AI. There are some objective questions, like how does a large language model work, and you can give answers to those questions. But a lot of the questions that people have, for example, how should my teaching change in the era of AI tools? Some people might have claims to clear answers to those questions, but I don't think those are really answers. I think those are people's thoughts. And I think we're all in a space of having to articulate the questions, articulate our thoughts around those questions, and then try different things and report back to each other about what's working and what's not working.
Yes, go ahead. Kelly Schuster-Paredes: I was just going to say, I think that needs to be reiterated a little bit more and highlighted, I guess, just to really put it in there. We have a lot of listeners who don't really have people to talk to in their schools. They're the one computer science teacher, or they're working in isolation, or they're teaching 6th through 10th grade computer science in their school and they're too busy to have conversations. But they get caught up in social media, with someone selling a book at the last minute and saying, this is the way it goes. And I look at these books sometimes and I wonder, how can a person be such an expert as to say, this is how you're going to use ChatGPT in the classroom? So I think as educators, we need to be aware. We should at least be listening to the people who have built models and understand what's underneath the model, or maybe even know a little bit of coding. At least as computer science teachers, we've coded, and we have a slight understanding of what's going on. We've got to be leery of someone saying it has to be this way. And I liked what you said, Eric, about trying things. I tried something. Was it right? I don't know. I mentioned the fact that I opened Pandora's box. I don't know what's coming out from that box, but I tried it, trying to make sure that everything was safe, and where do we go from there? Eric Matthes: Yeah. Pandora's box really is a good analogy, and I would say that you didn't open Pandora's box. It has been opened for us. And it really is a box that can't be closed, because these models can be run on really small devices with small footprints. When the tools first came out, I thought you needed hundred-million-dollar supercomputers or cloud computers in order to run these models. It was very quickly shown that once they're trained and built, they can be run on really small devices, so there really is no way to shut it down.
And I think that's an important thing for people to recognize. Sean Tibor: Yeah. The other thing that I like about this approach is that it's also changing so quickly. This landscape is changing nearly every day. New models are coming out, new applications. And the beautiful thing is that people are doing what people do, which is they see some new tool and they find creative and unique ways to apply it. Right. And that happens at all levels, from the learners to the educators to professionals. Everyone has this new tool that they're all trying to figure out how to use. And so anyone who claims that they know, right, I know what's going to happen, or here's what you will get from these tools, is probably selling something. Right. Eric Matthes: People are either selling something or they're overconfident if they're saying, here's the answer. Sean Tibor: Right. Eric Matthes: I will take a step back again and just keep reiterating that there are some things that have objective answers, and it's important to distinguish between what's an objective question and what's a subjective question. Sean Tibor: Right. And the same goes for anything that's predictive in its answer. Anyone who predicts what's going to happen with a high degree of confidence is probably predicting the same way that people did back in the 1950s or the 1890s about what the world would be like 50 years later. Right. I think the only difference is it's really, what will the world be like five years from now, or even five months from now? The timescale has changed dramatically. Eric Matthes: Right. So, the structure of this conversation: I had this group of 20 people, and they're all looking at me like, yes, we want to talk about AI. So the structure was, what are the questions that you're wrestling with about AI, and then share thoughts in response to those questions? And that's it.
And with that group, it's kind of funny, having taught middle school and high school for a long time, to then have a group of adults to manage. So I passed out index cards and asked everybody to think about and write down one question: if you could ask this group one question about AI and teaching and learning, what would it be? And then I just had everybody go around and read theirs if they wanted to, and I clarified that we weren't going to try to answer all these questions. We're just hearing the questions that people have. Then we split into three groups and had small group conversations about people's responses to those questions. And again, not trying to answer the questions, just, what is your response to the questions that people are bringing up? And it's a bit of a gamble. Do people go away from that satisfied, or do they go away thinking, oh, I didn't get my question answered? People were very satisfied, because I think people are recognizing the reality that there aren't answers to these questions right now. And it was a good foundation for the rest of the conference, as far as continuing conversations with people about AI, and particularly watching out for the people who are claiming to give answers. So I thought it was good. I don't think there's much value in trying to report back to you and your audience about what was said there, because that was already two months ago. A lot of the questions that people are raising are still the same questions, but the answers are changing all the time. So I thought it might be interesting to just go around with the three of us and have a brief discussion of what questions we are each wrestling with or have wrestled with, and what we think the audience is likely to be asking. Particularly, like Kelly said, people who don't have others to talk to about this stuff, or who maybe are being spoken to from an overconfident perspective.
So what are the questions that are still floating around about AI, and then what are our thoughts about those questions? And I can talk about this for hours. So. Kelly Schuster-Paredes: I've been putting together a presentation because I have to give a presentation to our educators, just regular teachers, not computer science teachers. I have a split, right? So I have a section on here are the basics, what it does well, what it doesn't do, here are the risks, et cetera. I'm going to put that to the side. Those people can go research and go to the AI experts and read up on that stuff. I want to talk about the computer science side, because I kind of hit on this with Katney. We mentioned we wanted to talk about it, and we missed it because we ran out of time. It was this idea of the curriculum, and I'm going to kind of throw this back at you, Eric. You have the Crash Course book, right? Your GitHub, your solutions are out there. But a kid has to be pretty savvy to know that I'm picking up a question from Eric Matthes's Crash Course book and using it as a project. I have to literally tell them, this is where you go to get the question. Or you could probably Google the solution, et cetera, et cetera. And it's not that easy. However, if I pose that question now to my students who have already logged in to ChatGPT, these, the word's slipping my mind, these problems that are simple and not necessarily unique are easily solvable. So as an educator in computer science, you start to wonder, what do we do? What do we do? And I think I have an easy job, because I have 6th graders who have no clue what even the word print statement means, or variable. They don't have the vocabulary. So I think in the 6th grade, when you have these really young kids who don't have the vocabulary for computer science, going into ChatGPT and trying to solve a coding question is very difficult for them.
However, as they start to get a little knowledgeable, even after nine weeks or four weeks of learning the vocabulary, it starts to become something solvable. And so there's been a lot on my mind, thinking about how you restructure a curriculum to encourage learning and to kind of help with this collective intelligence. Because I won't lie, I am addicted to ChatGPT. I don't like Bard. It doesn't really solve anything for me. ChatGPT is my imaginary friend whenever I have a problem. And so how do we encourage collective intelligence? How do we make sure that we're focusing on intelligence happening and learning at the same time? So that's kind of my mindset of where I'm going as a computer science teacher. Eric Matthes: I love it. I love the way you're describing it, the questions you're asking, and the grounding for those questions. Sean Tibor: All right, I have two questions. I think the first one is similar to what Kelly is asking. And I'm thinking of it as, like, the Goldilocks zone of learning, right? What's the right amount of use of these tools in the learning process? Right. Too much? What are the implications of having too much AI assistance? Right. What about banning it entirely and pretending like it doesn't exist? Both of those extremes feel like they're wrong somehow, right? Where's the right zone? Where's that Goldilocks zone, where it really accelerates learning for the learner without taking away or removing the feelings of accomplishment that they might have by struggling through and working through a problem that AI could solve for them? Right? So where's that Goldilocks zone is my first question. And my second question, and I'm not even sure how to phrase it all the time, but do we need to start reframing or redefining what the outcomes of learning or teaching should be in a world with AI?
Is the measure of whether someone's a competent programmer their ability to write code perfectly from scratch? I don't think that's ever been the measure, right? Or is it their ability to solve problems and use the appropriate tools as they're solving the problem to come up with a correct, valid approach? Right. Or maybe it keeps the same outcomes we've always wanted and we've always had, but now it just makes them more clear, right? Because now we can see where they're using these things as aids to help them solve the problems. Eric Matthes: This is why I love talking to you both. I really like the Goldilocks analogy. And for people who didn't grow up with that, there are these bowls, how would you say it? One's too hot, one's too cold. Sean Tibor: One's just right. It's porridge. But I honestly think in this case it might be more like the Goldilocks zone in astronomy: that right range from a sun where a planet is just warm enough to sustain life. Right. Too close and it overheats, and too far away, it never gets warm enough. And so to me, I'm thinking about it like, what's that band that we want learners to orbit in, where they're getting just enough warmth to sustain life in their learning, and not so much that it burns them? Kelly Schuster-Paredes: That ties into one of our past guests on the show, Will Richardson. He's a very education-reform kind of guy, and I love a lot of the stuff he does. And he's very adamant that AI is not necessarily where the future is. The future is in actually reforming education. And I'm somewhere in the middle of that dynamic. But I do agree with a lot of his stuff about agency and having students driving their learning. Sometimes the grades and the school and the schooling get in the way. In fact, more than sometimes, schooling gets in the way and the grades get in the way.
And that driving of the agency of their learning doesn't really happen as much. I see ChatGPT, and this was a comment I made to him, I see ChatGPT and these AI-powered tools as a way for the kids to be co-designers of their own curriculum. And it was case in point in the activity I did with the 8th graders. We were studying Matplotlib, yippee skippy: I have them go do push-ups, and they see how many push-ups each person can do, and they plot a bar graph, and yay, they learned how to make a graph and they can apply this in science. But we challenged them with the task of actually finding a social change that they were concerned about, and we wanted them to make a data visualization. And I would throw out a couple of words here and there, because they didn't have the knowledge. I was like, oh, I heard there's Flask out there. Or, there's also a library called Dash, and I wonder if you used VS Code to write a Dash app, whether you would be able to get a visualization. And so I was dropping in these words. We had about four groups of kids who were able to generate a couple of tables. Now, were their tables meaningful? Did they tell the right story? No. I mean, that's going to take the skill of understanding a lot of other things. But they were able to produce a cool website using ChatGPT, and they got to pick the direction. And we did this a lot in the past, Sean and I, where we would do a demonstration of learning, or pick your own library and learn from it, or read from a tutorial. But again, that was sort of a guided curriculum. And I'm wondering if, with this generative AI era, students will start to become more of these co-designers of where they want to go within their curriculum, learning more in a direction that they see fit, hopefully. That's kind of a question in my mind, too. How can you do that? Empower them to actually make change with technology, instead of just giving them a problem and saying go do Tic-Tac-Toe?
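As a side note for readers following along: the push-up bar graph Kelly describes takes only a few lines of Matplotlib. This is a minimal sketch, not the actual classroom code; the student names and counts are invented for illustration, and matplotlib is assumed to be installed.

```python
# Sketch of the 8th-grade activity: count push-ups per student,
# then plot the counts as a bar graph. Names and numbers are made up.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display window needed
import matplotlib.pyplot as plt

pushups = {"Ana": 12, "Ben": 20, "Cruz": 15, "Dee": 18}

fig, ax = plt.subplots()
bars = ax.bar(list(pushups), list(pushups.values()))
ax.set_xlabel("Student")
ax.set_ylabel("Push-ups")
ax.set_title("Push-ups per student")
fig.savefig("pushups.png")  # one bar per student, saved as an image
```

From there, the same data could feed a science-class discussion or, as Kelly's students did, grow into a larger visualization project.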
How can we make something that's really cool? Eric Matthes: I'm going to throw out a couple of the questions that I've been wrestling with, and I have been articulating these questions over a period of months, so I can just throw a bunch out there. But I think it's good to name some of them, because the three of us have been thinking about this, wrestling with it, and working with the tools. And I think there are some people in the audience who are right there with us, as far as having been working with it for a while, and some people who have just not had the chance to focus on this. So some of the earlier questions are, what are these tools? How do they work? What's the range of the tools, and what's going on underneath the hood? You don't need to understand all of it, but an accurate understanding of what it's doing and what it's not doing is really helpful as far as evaluating some of the bigger questions. What are people doing with these tools, both in the education world and in the professional programming world? There are huge equity issues. Who has access to the tools? Who has access to which tools? Because when we talk about GPT, there's the GPT that people have free access to, and there's a GPT that people can pay for access to. And a lot of the misunderstanding that I've seen has been one person talking about the free tools while the other person is talking about the paid tools. Some of that's going away as the free tools get better in quality, but that issue will probably always be there. What does programming look like, what does life look like, for people with access and people without access, especially over the coming years? And then I get into questions of how this impacts the way we teach programming and how it impacts the way people learn programming. And it feels like a while back, the Internet was supposed to fix education. Everybody has access to all the information in the world, and we know how overwhelming that was, and it didn't fix everything.
And AI sure feels pretty similar. And the last category I put in is cautions. What should people be careful about, both with intellectual property issues and trust issues and any number of things? I do feel the need to put out there, when I have these conversations, that I'm not an AI evangelist. I don't love it. I think there are lots of problems with it. I'm super fascinated by it, and I come at it from the perspective that it is out there, it's not going to be tucked back away, and so we all do need to wrestle with what we're going to do with it. So I think a good question to begin addressing is that core question of what are these tools and how do they work. And I think I found a really brief way to clarify it for people. They're large language models, and what that means, instead of talking about how they were developed and trained and all that, is just: what is GPT? When you go to ChatGPT and put in a question, it gives you an answer. How does it generate that answer? I would tell people that GPT is really a bunch of mathematical functions and a bunch of English tokens, about 50,000 of them, just fragments of English words and language. And so when you ask GPT a question, it breaks your question into some tokens, it feeds them into mathematical functions, and then it recombines tokens and gives you an English answer. And I think that's important to clarify, because there are probably a lot of people who think, if you ask a question about, say, my book, Python Crash Course, there are a lot of people who probably think GPT is like an advanced Google that goes out, does some research about Python Crash Course, and comes back with some regurgitation. And that's not at all what's happening. And if you have an accurate understanding that that's not happening, then you're in a much better position to evaluate what comes out of these tools.
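A quick aside for readers: the "fragments of words" idea Eric describes can be made concrete with a toy tokenizer. Real GPT models use byte-pair encoding over roughly 50,000 learned tokens; this tiny hand-picked vocabulary is purely illustrative and nothing like the real one.

```python
# Toy illustration of subword tokenization, the first step Eric describes:
# text is split into fragments from a fixed vocabulary, not whole words.
VOCAB = ["teach", "ing", "python", "pro", "gram", "mer", "s", " "]

def tokenize(text: str) -> list[str]:
    """Greedily match the longest vocabulary fragment at each position."""
    tokens = []
    i = 0
    while i < len(text):
        match = max(
            (v for v in VOCAB if text.startswith(v, i)),
            key=len,
            default=text[i],  # an unknown character becomes its own token
        )
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("teaching python programmers"))
# ['teach', 'ing', ' ', 'python', ' ', 'pro', 'gram', 'mer', 's']
```

The model's mathematical functions then operate on these fragments, which is why it can produce words it never saw whole, and why it is not retrieving documents about any particular book.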
Sean Tibor: Yeah, I was thinking about it a little bit like those poetry refrigerator magnets, right, where you can put the words in different orders and create poetry and things like that. And what ends up happening is that you have really good poetry that comes out of it, you have really bad poetry that comes out of it, you have dirty poetry that comes out of it. All of these things happen, right? And these models are basically like someone rearranging, according to math, the order of all these different refrigerator magnets to make words for you that may or may not make sense. And as we train them, they get better and better at making stuff that fits the prompt that you gave them, right? So for me that's been my mental model for this: one, there's a limited set of words it can choose from. It only has the magnets that are on the fridge to work with. And two, Kelly, it doesn't actually know anything about the world around it. All it can do is rearrange words and put them together based on the prompt you gave it. And it's a bit of a magic trick, right? It's so good. It's this illusion of knowing things, because it's been trained on what people have said. Yeah, that's pretty good. Well done, ChatGPT, you did it. You got the right answer for that prompt I gave you, right? But it's still limited. It just doesn't seem that way, because behind it, it's such a large model that to us, it's the illusion of intelligence, right? And in some cases, it's a good substitute for what we would do as humans. Eric Matthes: Right. I like the poetry magnet analogy, and I really like it because I think those poetry magnets, the larger sets, are not just made up of whole words. I think they have word endings and word prefixes and whatnot. And GPT is not working from a set of words; it's working from a set of fragments of words and endings. So it's really flexible in what it can generate.
I think it's really important to recognize that it's a nondeterministic tool. And I think that's really interesting for programmers to recognize, because we're used to all kinds of advanced programming assistants. We've been using assistants for decades. If you look at VS Code and you hover over a function and it tells you the arguments that function can accept, that's a deterministic tool. If the three of us open the same code base and hover over the same function definition, it's going to show all three of us the exact same arguments that the function accepts. If the three of us give our own GPT windows the same code and ask the same question, the three of us are going to get three different answers. And that's because it is a nondeterministic analysis that's happening. And that's where the power comes from. And I think some people hear these kinds of things and are dismissive: it's nondeterministic, you don't know if it's right or wrong. But it's nondeterministic, and that's why it can give you such interesting and useful responses. And a good comparison is a good colleague. There's no colleague that gives you a perfect answer to the questions you ask in teaching or programming or anything. Good colleagues give us a bunch of ideas, and we hash out with our colleagues, based on the context we're working in, what's a good plan for proceeding. Kelly talks about GPT feeling like a partner for a lot of things, and I've had that same experience: GPT feels like a good colleague. And knowing your colleagues' strengths and weaknesses is critical for evaluating their effectiveness and making the most of that partnership. Kelly Schuster-Paredes: And I think that's where a lot of educators who are pushing AI have been going in their teaching. Here, we're going to use ChatGPT to teach students about prompt engineering and critical thinking.
And using this generative AI with your intelligence to develop something that's pretty unique, and I see that as a possibility in the middle school. It's not very easy. I'm going to say first, it is not easy. Expecting an 8th grader, we're talking about 13-year-olds, some of them 14, to read the responses that come back from ChatGPT and then interpret that response, decide if it's right or wrong, find the part that you want to re-ask or use, and really hone in on a specific quality. And I think that's what makes, for me, my addiction to ChatGPT sort of like coding. It's that problem solving of really digging in and finding where that path is going to lead you as you start to do that prompt analysis. It's really interesting. I want to switch really quickly, though, before I stop talking, and just say: Bard. Bard has been coming into the picture now, and a lot of people haven't really been talking about Bard, Google's model, and personally, I can see why. Personally, it's not one of my favorites, but in theory, Bard is supposed to be able to go out to the internet and scan and collect more information. It's not limited to that 2021 training data of prompts and human responses the way ChatGPT is. And from my understanding, it's still training on the human responses, the human prompts, but it's also able to collect more information. That's going to be an interesting take. Not so much, I think, in computer science, because its responses are not as creative, and creative is the key: not as creative as ChatGPT. In terms of coding, it gives a very dry, single solution without explanation. Eric Matthes: Usually? Bard does, or GPT does? Kelly Schuster-Paredes: Bard. So Bard will give you a chunk of code and it's not really explained well, whereas with ChatGPT, when the code is produced, and this is why I can always tell when the kids use ChatGPT, yeah, every single line is, like, nicely commented.
The syntax is beautiful, it's got the spacing and the right number of blank lines per PEP 8. So it'll be interesting to see when Bard starts taking up the slack, I guess. And if it ever hits that level for coders, it'd be interesting. How are you doing, Sean? Processing? Sean Tibor: Yeah. I'm struggling a little bit, thinking about how to phrase it exactly, but it comes back to that Goldilocks zone again for me, right? What are the best ways to use this? And I like the idea that we don't know, right? That we're learning this, we're figuring it out, we're trying to come up with those approaches that work best. There are no best practices yet, because we're still learning this. And as we learn and the tools get better, we establish these ways of working, or these patterns, and we'll start to develop preferences ourselves for which tools work best for which kinds of use cases. That, to me, is kind of that divergent, branching approach to figuring out where and how to solve these sorts of problems, both in the technology space, the teaching space, and individual learning, each of those areas. Eric Matthes: There's something that Sean brought up that I think is really worth digging into with the three of us and your audience. You brought up the idea of curriculum, and I think this really throws the education world up in the air around programming curriculum. The three of us have been through the era where there was a drive: everybody should learn to code. And I've never really bought into that, because there are people who just don't care about it. I have a conviction that everybody who wants to learn about code should have the opportunity to, and should have access to quality teaching and learning about programming. But what's the core drive for that?
The core drive for the idea that everybody should learn the basics of programming was that if you want to automate anything, you need to understand how to program. And I think that's going away. I'll plug one thing. People know me as the author of Python Crash Course. I'm still doing that, still maintaining the book. A new edition came out this year. It's been very satisfying. But I'm also writing a weekly newsletter. It's called Mostly Python, and it's on Substack. And I'm excited about that because it's pushing me to articulate ideas every week about the current state of programming. So it gets me out of the "I wrote about the basics and some projects and stay in that world" mode. So when figuring out how to talk about AI in an informed way, I made myself a small project and did some work without AI, and then did some work with AI, and used that as a kind of concrete example. And my goal for PyCon this year, beyond the Education Summit, was to have face-to-face conversations with people about what they're really doing with AI. Because so much of what we see from our own houses is blog posts and videos and whatnot, and so many of the ones that reach us fall into the camp of "AI is changing everything, nothing's the same," or "AI is garbage, it's hype and it's not changing anything." And the reality is in the middle, as always. Almost always. So the real-world example I focused on: because I'm writing a newsletter, I was dealing with screenshots a lot, and screenshots look better with borders, because a screenshot tends to have the same background as a web page. So you put a border on it and it stands out from the background. macOS does not have a built-in utility for easily adding borders to images. You can use Preview, but you've got to click and drag a box and make it match on all corners, and it's tedious. And if you're doing that more than a couple of times, it's not fun.
So I took a short afternoon and wrote myself a small utility that lets me just run the command add_border and name the image file, and it adds a border. And then it has a few CLI options: border thickness, color, or whatnot. It started out as python add_border.py, but you can structure your project so that you end up with a command-line tool. So it's in Python. One of the things I love about Python is that the project is maybe 100 lines of code, but it stands on the shoulders of the people who created Pillow and other imaging libraries. What I did for this analysis: I wrote that project on my own, I did a round of refactoring on my own, and then I used an AI tool. I used GPT to guide me through the refactoring work and do the refactoring work for me as much as it could. And it was fascinating. It was so much easier. It was easier for me because I know what I should do for refactoring. And then I packaged it up and it's on PyPI. It was helpful to do that before PyCon, because when people said AI is useless, it can't really do anything, I ended up being able to tell people: okay, here was my real-world problem. I need to add borders to images. For experienced programmers, it's a trivial task. It's the kind of thing that most people with a competent understanding of Python and of using third-party packages can put together. I couldn't do it off the top of my head; I had to do a bunch of research, but it was straightforward. It's the kind of tool that most competent programmers can build for themselves. And then suddenly, if you're writing a newsletter, your screenshots are better than everybody else's. It's the classic example of something that if you don't have programming skills, you cannot build. So if you use macOS and you don't have any programming skills, there's no way for you to easily add borders to images.
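For readers who want a feel for what such a utility looks like, here is a minimal sketch. The flag names, defaults, and structure below are assumptions for illustration, not the actual add_border project. The real border work would lean on Pillow, whose ImageOps.expand call adds a uniform border around an image; that step is stubbed out here so the sketch stays self-contained.

```python
import argparse

def parse_args(argv):
    # Hypothetical CLI mirroring the idea: an image path plus optional border options.
    parser = argparse.ArgumentParser(
        prog="add-border",
        description="Add a border to a screenshot so it stands out on the page.",
    )
    parser.add_argument("image", help="path to the image file")
    parser.add_argument("--thickness", type=int, default=10, help="border width in pixels")
    parser.add_argument("--color", default="lightgray", help="border color")
    return parser.parse_args(argv)

def add_border(path, thickness, color):
    # The actual border step would use Pillow (not executed in this sketch):
    #   from PIL import Image, ImageOps
    #   img = Image.open(path)
    #   ImageOps.expand(img, border=thickness, fill=color).save(path)
    pass

args = parse_args(["screenshot.png", "--thickness", "5"])
print(args.image, args.thickness, args.color)  # screenshot.png 5 lightgray
```

Getting from python add_border.py to a bare add-border command, the "structure your project" step Eric mentions, is typically done by declaring a console-script entry point (for example under [project.scripts] in pyproject.toml) before packaging for PyPI.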
And it's really interesting, too, that you can't really pay somebody to build you that tool, because a reasonable programmer is going to charge you more than you probably want to pay for a little tool that just adds borders to images. And my big takeaway from that, and the reason I take a few minutes to explain that particular context: we go back to the goals. What's your goal in wanting to learn about programming? If your goal is "I want to be a programmer for the rest of my life," AI isn't impacting things a whole lot. You still need to learn the basics of programming. You need to understand what a variable is, what a data structure is, and all that. But if your whole goal is just "I want to build small tools that automate little tasks," I would argue that you don't really need to learn to code anymore. And I think it's important to clarify that. If a non-programmer asks GPT, "I'm on macOS, can you give me a tool that adds borders to images?" it'll spit out the code. You can then say to it, "I have no idea what to do with this code," and it'll walk you through what to install on your computer, where to save it, what to do with it. Like Kelly was saying, a person who can interact with something like GPT and not get stuck at "what you gave me doesn't work," but instead say "it doesn't work, and here's what's happening," can probably go through a process where GPT will build them the tools they need. And so if that workflow does what you need, and your real interest in programming was just "build those small tools and then focus on the things I really care about, like writing newsletters about non-technical subjects," then those people don't really need to learn programming anymore. And I think that's something that needs to be talked about in the education world. Sean Tibor: I'm going to disagree with you. I think it falls a little bit into the Pixar movie Ratatouille, right?
The whole message of that is not that everyone can cook, it's that anyone can cook, right? And the idea of it is that a good programmer could come from anywhere, and anyone who wants to program should have the opportunity to do so. But I do think that there's a certain level of literacy about computational thinking that is extremely helpful to have in the world we live in today. And I'm not saying that everyone has to be a master-chef-level programmer, but they should know how to make spaghetti, right? They should know that there's something they can do when it comes to automating things or making their lives simpler, to be able to even ask the question. Right? So it's kind of like that. I feel that people still need enough knowledge about how computers work and how to think about tasks in a computational way to even know that something like this is possible, to be able to ask for it, right? So they may not need to know how to code in Python, but if you ask someone who hadn't been a programmer before, say they're writing a blog and they want to put borders on their images, they're going to go get Paint or some sort of tool, and they're going to manually do it and sit there and struggle through it and add the border every time, without even knowing to ask for an automated tool to do it. So I think there's a base level of knowledge that would help: knowing that you can solve that with code, whatever that means, right, or with automation. Just enough to be able to ask for it, or to ask if it's possible. And then they can go the rest of the way using the tools that assist in that process. Kelly Schuster-Paredes: It's like the rise of the generalist. We were all specialists. We were all specialists in our fields. And I'm not going to date myself, but before all this, we all had to be specialists, right?
And now we're in an age of information where we can become more generalist. I can know a bit about programming, I can know a little bit about playing a recorder. We can do all kinds of things, but not necessarily specializing. It's an interesting thing. And I want to throw out this question, and I keep kind of twisting the knife with the other-language computer science people. We had a conversation and I'm like, and we're still teaching JavaScript? Let me think. Generative AI is built on what, 99.8% Python? And the rest is, like, terminal stuff or something? Bard, Google's model, I'm assuming, is also all Python. So if this is all running, and the age of AI is pushing more towards Python and less towards everything else it was running on in the past, where do we go from there? And is it critical for the kids to have a certain understanding? All kids, like Sean said, having a certain understanding that this is powered by code, this is how it was collected, this is what it was trained on. I feel strongly about students having some sort of education about programming, about AI, about how it was made, and about the algorithms and the data that's collected on you. And I think we kind of lumped that in as computer scientists: AI literacy is going to be more predominant in the classroom, in the computer science classroom. That needs to be a huge strand. If you're not teaching AI literacy, where are we going? And that kind of goes into the same thing. We don't really know what we're doing 100%, but we can teach the basics to the kids and go from there. Oh, my God, we can go on and on, but I have to cut this short. Eric Matthes: Yeah, I like that phrase, AI literacy. If I can throw out one last perspective on this: I'm not of the mindset that what I just said is the answer to this. Those are just my thoughts around it. And I think the conversation needs to move. That conversation of "should everybody learn to code?" should be re-asked in the AI era.
And I'll just throw out there two intro courses. Imagine an intro course that's structured as we've often structured them, where we do teach people the basics. The goal is they learn about variables and data structures and use those in some way, so they make a tic-tac-toe program, any number of tangible things. They use what they've learned, and they come away from that class with an understanding of how programming fundamentally works. Then imagine an alternative intro class where the goal is to name a small project you want to build at the beginning, and then use AI tools to help you build that as quickly and efficiently as possible, and understand enough of what comes out to be able to work with it. I would argue that those are two very different classes, both of which are really interesting, and that's the kind of conversation I'd like to see happening in the education world. Kelly Schuster-Paredes: And I'll add the cherry on top, smooshed in the middle. And this is like our plan, hopefully. Everyone always asks about the curriculum. This is not our written curriculum at Pinecrest, but this is my mind curriculum. This is where I'm going in the classroom. I'm thinking, you know, basics on this end, ChatGPT on that end, smooshed in the middle, thanks to Katney, a future guest on the podcast with drones, that hands-on approach: those hardware things where even if you get the code, you're getting it on the hardware, tinkering with the hardware, playing, learning, hopefully not breaking things with flying drones in the classroom, but they do have the little mini ones now. Those kinds of things are going to be interesting to have in the classroom going forward, because we get the basics, we play around with our project on the end with ChatGPT or whatever, and we create in the middle, and I think that's going to bring about some really beautiful things. I'm not going to let you say anything, because I'm going to add a little bit of thought before we go.
And this is a great conversation, and for those of you who are listening, it's going to trigger some thoughts, because in a couple of weeks we will be having Philip, whose name I always pronounce wrong. Sean Tibor: I think. Kelly Schuster-Paredes: Guo, and Sam Lau, who did a study, and I love their cognitive science background, where they surveyed computer science instructors from around the world and talked about their thoughts. So they're going to be continuing that conversation. And that's what I love about having this conversation first and then sliding theirs in, so I'm sure it's not going to go away. We'll probably be talking a lot about ChatGPT before school starts. Eric Matthes: Yeah, I think we've all found our own ways of saying there's a middle ground that people need to find. Sean Tibor: And the journey to get there is the fun part, right? We get to do new things every day and try things out. I think my last thought, as we're talking about this, is that the world of AI is rapidly being carved up among the big players. Everyone is trying to put their stake in the ground and say, we're going to gain our market share, right, and we're going to claim it. And it really just makes me appreciate the value of open source and free software and self-hosting, and the people who are out there creating amazing things and sharing them with the world. That's the answer to what happens when three or four big companies control all of the AI tools we're using and can control what answers or responses we get. And the answer to that is: go make it yourself. Right? Go build it, go host it, go try it out. The tools are out there. Take your own control of this space and learn how it works in a really hands-on sort of way. It just makes me feel appreciative of all that people are doing. Eric Matthes: Yeah. And for the audience, please tell your stories.
So most people are trying something, and you are probably trying something that other people have not. Be a guest on a podcast, write a blog post, share your story, please. Share what's working and even what doesn't work. It's really helpful. Kelly Schuster-Paredes: Yeah, and they can share that directly, and also on LinkedIn. We have the live stream going on LinkedIn, and the numbers are growing in our LinkedIn community. I have about 320 members, which I'm really excited about. Some great people. And I've been hounding some AI experts and begging them through messages to see if I can get Andrew Ng, and I don't know how to pronounce his name. I've been emailing all the big names saying, listen, I know you guys are busy, but educators need some answers, and it would be great to get you on the show. So if anybody has any connections, that would be awesome. Sean Tibor: Eric, any final thoughts? Eric Matthes: I have one more suggested follow. I read Charlie Guo's Artificial Ignorance. It's another Substack newsletter. I'm not a super fan of Substack, by the way, but his newsletter is the best weekly roundup of what's happening that goes beyond just a link list. He mentions what's happening, gives a brief summary, and it's what's allowed me to have some sense of what's going on in a world that's changing too fast to keep up with. Sean Tibor: Nice. Very nice. Kelly Schuster-Paredes: Put them in the show notes. Sean Tibor: Nice. So we'll put links in the show notes for that. Eric, we're going to put a link to your newsletter in there, so if you want to follow Mostly Python on Substack, you can do that. Eric is on Mastodon as well; we'll put a link to that profile in the show notes too. I'm on Mastodon also, although I think I need to start getting back on there. I've been drawn back into the world of Twitter, and I want to stop that before I go too much further.
I think I'm leaving Reddit at the end of this month, when my third-party app is no longer supported. So I'm going to be looking for some new homes to post and share content and communicate with people. Eric Matthes: We all are. Sean Tibor: Kelly, any updates from your side? Kelly Schuster-Paredes: No, still two more months. If anybody didn't guess the wins of the week, obviously you guys can put that in the LinkedIn or share it out. That's some fun things: be creative with ChatGPT. Sean Tibor: Sounds great. Well, then, I think that does it for this week. So for Teaching Python, this is Sean. Kelly Schuster-Paredes: And this is Kelly, signing off.