Sean Tibor: This is episode 113 and it's all about adapting your courses in a new world of ChatGPT and GitHub Copilot. My name is Sean Tibor and I'm a coder who teaches. Kelly Schuster-Paredes: And my name is Kelly Schuster-Paredes and I'm a teacher who codes. Sean Tibor: Well, welcome to the show again, Kelly. It's been actually not that long since we recorded, but it feels like so much has happened in between. Kelly Schuster-Paredes: I know. Just cramming it in in the summer. Two weeks. Less than two weeks. We were like, right, less than two weeks. But yeah, it seems ages. Sean Tibor: It's flying by. And as we're thinking about the new school year and all that's coming, we're joined by two experts who are going to help us try to figure out how to create your courses and how to deliver the best learning experience in a world that's inhabited by ChatGPT, GitHub Copilot, and large language models in general. So I'd like to welcome Philip Guo and Sam Lau to the show. Welcome, both of you. We're happy to have you here. Philip Guo: Yeah, it's great to be here. I guess I've been here before, but it's Sam's first time. Sam Lau: Yeah. Hey, it's great to be here as well. Sean Tibor: Yeah, we're really excited to have both of you here. I know that this is a topic that's been getting a lot of discussion and a lot of conversation, and it's been interesting to me to observe the number of people who are professing to be experts in this space without any sort of empirical data whatsoever. Right. So what I'm excited to talk about with each of you is that you have actually been gathering data. You've been working on what's really happening and how things can be different in this space. You know, Philip is an associate professor of cognitive science at UC San Diego. You've been working for quite a long time in this space around human-computer interaction and programming tools online. You know, really, I think you've got a lot of great background here. 
Also, if you haven't used it already, you probably should. He's the creator of Python Tutor, which is a fabulous way to figure out what your code is actually doing. So it's been something that I've shown everyone from an 8th grader to a college intern to adults, and they've all loved it. So we've said it before, but thank you so much for making that great tool. Philip Guo: Yeah, I'm glad you all enjoyed it. Yeah, that's been running for quite a while now, and I think that it's more relevant than ever nowadays with these intelligent tutoring systems that large language models are enabling. Kind of coupling that, thinking about how you couple that with runtime visualization and stuff. And that's something that myself and Sam and others have been thinking about a bunch. Sean Tibor: Yeah. Well, welcome back to the show. I also want to introduce Sam Lau, who is new to the show. Sam is also a teaching professor at UC San Diego in the Data Science Institute. I have no idea how to pronounce the name of the Data Science Institute, so maybe you can help us with that. But you've been teaching a lot of data science courses at UC Berkeley and UC San Diego, and looking at different ways that data science can be taught, because there's so much demand for it right now. And you've created a tool that I am just finding out about this week called Pandas Tutor, which I'm kind of excited to go play with. Kelly Schuster-Paredes: You're so behind the times, Sean. I just did the data science boot camp. Sam Lau: Yeah, the pronunciation of it is Halıcıoğlu, as far as I know. Sean Tibor: All right, I'm going to have to rehearse that a few times, I think, before I can get it right. Sam Lau: Yeah. And then the Pandas Tutor tool, like, Pandas Tutor was meant to help people who are learning how to write Python code for data science for the first time. 
It was also designed for instructors to prepare some lecture slides, because we saw instructors spending a lot of time fiddling around with their arrows and boxes on their lecture slides and thought we could speed up that process for them. Sean Tibor: Very cool. Well, you have also authored a paper about the effects of ChatGPT and GitHub Copilot on the learning process, and we're going to get into that topic and the meat of that conversation in a minute. We're going to start with the wins of the week, which is one of my favorite things to do. So, Sam, since you're brand new to the show and our guest, we're going to make you go first. Sam Lau: Sure, sure. My win of the week is, well, I've been learning woodworking for the first time, so I learned how to use, like, a basic table saw, like a miter saw, learned how to screw some boards together, and it's been a lot of fun. So that's my win. Sean Tibor: That's so great. Kelly Schuster-Paredes: Here's to the hands-on stuff. It's like, awesome, just to step away during the summertime. Someone asked me, when was the last time you coded? And I was like, I think it's been like three or four weeks. Three weeks. I've been doing a lot of hands-on stuff. Gardening, home repair, home organization. My son does blacksmithing, not welding. I always say welding, but it's blacksmithing. And so, like, woodworking is one of our fun things, to build stuff. So that's awesome. Good tool. Good tool to have in your toolbox. Sean Tibor: Absolutely. Philip, how about you? Any big wins the last week or two? Or even little wins, we'll take those. Philip Guo: You know, I wish I had some real-world cool thing, but it's actually a much nerdier thing, which is that I recently published, actually, today it came out, another blog post for O'Reilly about my own adventures of using ChatGPT for doing more of an end-to-end kind of web development task. I can send you all the link to put in the show notes later. 
This is in addition to the blog post that Sam and I wrote about our research paper. So that just came out today, which was a big win, because that was a long-form post that I spent probably a week or two kind of drafting up. That was a win for today. Sean Tibor: Nice. It's always good to see it go live, right? Philip Guo: Yeah. Kelly Schuster-Paredes: Question for you, and see if you do what I do. Do you write the paper, then dump it into ChatGPT and then put it into Grammarly and do all that fun stuff, too? Or is it just me? Philip Guo: No, it's interesting, because I tried. I mean, first of all, I think for longer-form things, it can't handle it as well, right? Because still, you can't just throw in an entire ten-page thing. Also, frankly, and Sam can speak to this, I haven't found it to be super useful in technical writing, because I think it's more tuned for business writing, kind of the more popular, everyday things. So I found that even if you try to use it for technical writing, it just ends up sounding a bit strange, right? It ends up sounding a bit artificial, not to make a pun about it. But yeah, what I do do is I put things in Google Docs. I just use the Google Docs grammar and spell checker, because I found that that's pretty lightweight and catches the most obvious things. But, yeah, I haven't used anything too heavy. Kelly Schuster-Paredes: They have that Help Me Write draft feature popping up in Google Docs now, but it always messes up your writing completely. But it's like a deadly circle for me. I fix it in Grammarly, then somewhere else something fixes it. And then my English teachers all say to me, you can't do that. No Oxford commas and stuff. Sean Tibor: Them's fighting words. All right, Kelly, what's your win of the week? Kelly Schuster-Paredes: I've been hooked on AI. That's been my pastime for the whole summer. Just really just investigating stuff. 
And I found MagicSchool AI by Adeel Khan, the guy that, I'm assuming, did Replit too, because they're both his, but he doesn't claim it to be both at the same time, so he just kind of passes back and forth. It is phenomenal. It is going to be a teacher tool. Talk about a teacher tool. I'm not even talking about ChatGPT, I'm not talking about prompt engineering. Put everything aside. It makes rubrics. I don't know if I said this before, but it makes rubrics. And yes, it doesn't make a rubric that you're just going to put in and spit out and it's going to be great. Some people may do that, but it does type everything out for you so that you can then tweak it. It does questions, it does, like, text. Yeah. And it's free. Nice. You can't beat that. And I found that and I sent it to my supervisor, and yes, it's amazing. They're trying to get districts to get in there, obviously, to do a paid version, which would mean that you would have a team sort of connected. But yeah, I don't know. Philip's looking at it now. I can see he's deep in it. There are about 30 or 40 different things that it does for teachers: writing emails, and I think public school teachers might like it for the accommodation stuff that it has to write. We'll see. I don't use half the stuff in there as a CS teacher, but the rubrics are always nice, especially if you're going to build some projects and stuff. Sean Tibor: Nice. Kelly Schuster-Paredes: That was, like, a huge win for me. Sean Tibor: Very nice. All right, I guess it's my turn. My win is that I'm home. I was traveling for about nine days, mostly for personal vacation. I think it was the first vacation that my wife and I have had without kids, and for an extended period of time, in about ten years. So it was a lot of travel. The vacation parts of it were amazing. The waiting for airlines and being stuck in airports part was the opposite of that. I think out of the five flights I had, four of them were delayed by more than an hour. 
So I'm going to call them out. United, you can do better. I know you can. It was awful, but the rest of it was great. Kelly Schuster-Paredes: Apparently they're having, like, flight issues with everybody, so I'm excited to see if I get out tomorrow morning. I'm traveling tomorrow. It's my turn. Tag. Sean Tibor: Yeah. So, yeah, just being home again after that much travel. No matter how great your travel is, there's no place like home. So I'm glad to be here and glad to be home. Even though the AC is not working and I came back to lightning strike damage and all kinds of stuff, I'm still happy to be home. Kelly Schuster-Paredes: Well, hopefully this podcast will cheer you up. Sean Tibor: I'm already excited. I'm super happy to be talking about this topic, because I think it really gets down to the real question, which is: what is learning? Like, what is that really, and how do we make it happen? Right. Kelly Schuster-Paredes: Yeah. Sam Lau: That is a big question. Philip Guo: That's almost as big as, what is life? Sam Lau: Right? Philip Guo: It's like, what is life? is probably the biggest question. What is learning? That's pretty big. Sean Tibor: Right? And that's, I think, where we can really get into this. And maybe a good starting point is, I think, the rise of Copilot, the rise of ChatGPT, has really disrupted a lot of things that we have been doing for many, many years in the education space, and specifically within the computer science education space. And so maybe let's start there with: what is the nature of that disruption? What are we seeing, and what are we struggling with as we are going through this change? Philip Guo: Sam, do you want to take it from there? I'll put you on the spot, since you're the one growing up in this generation. Right? Because I feel old now. Sam has kind of come of age in this generation, and he's now starting as a professor and teaching students in this generation. Do you want to get started with that? Sam Lau: Yeah. 
I think for instructors, the big wake-up call with AI was realizing, oh crap, if I put my homework or my exams through this tool, it actually gives me the right answer most of the time, like, maybe 90% of the time. And then, of course, you start thinking, well, that means all of my students could use this and kind of fudge their way through the homework or the exams, and what do I do now? So I think when Copilot and ChatGPT came onto the scene, it was definitely a wake-up call for people doing the instructing, because a lot of the problems that we put in our assignments are the sort of questions that ChatGPT and Copilot are exactly tuned for. So they perform generally very well, and it's caused a lot of us to rethink: okay, do I keep doing what I'm doing right now, or do I need to adjust? Kelly Schuster-Paredes: Yeah, and I was thinking, just backing up a second on that: do you do well, and do I adjust that whole cognitive learning point? Because I'm going to bring in your cognitive science. I want to back it up a little bit. You set these problems as a computer science teacher because you know these are the problems that are going to step students through in order to solve larger problems. Right? It's designed that way, and you can't start with a new student and jump all the way to, here's my study, and I have proper data vis, and I understand everything. You have to go through these little stops, these little points, these checkpoints. And as a professor, if you're pushing them through these points and the kids are jumping over those points and using an AI assistant, that's got to be a little bit scary. Because, one, you're really working, and you have this plan of learning for children or students or adults (you guys have older students), but you have this plan for them and they've skipped it. 
And even though, you know, you want to skip it too, because we're all human, it's part of that learning growth. So is that part of the study that you were going into as well? Sam Lau: Yeah, for sure. I think having students skip those kinds of foundational steps was a real concern for everyone that we talked to. So we did an interview study with 20 instructors who teach first-year programming classes all across the world, on every continent except for Antarctica. Not that there are many computer science classrooms in Antarctica, but everywhere else we managed to hit. And across the board, they expressed the same concern, which was: okay, if students use ChatGPT to skip over that process of struggling with those easier questions, then when they get to the medium questions and the hard questions, how will they have the tools, the mental tools, to tackle them? I think what we found was that it was very concerning to instructors, and they had a very wide range of views on how to approach it going forward in the longer term, ranging from, okay, I'm going to say in my syllabus it's banned forever, and if you use it, I'm going to consider it cheating, all the way up to, well, even if I tell students not to use it, they're going to use it anyway, so I might as well embrace it and embed it into my curriculum itself. So pretty much the whole spectrum. Sean Tibor: Yeah. And I think it's interesting the way that you've kind of grouped that, right. The short-term versus longer-term goals and plans. Right. I think we're definitely in that early stage where the shared concern by everyone is, please don't cheat. Right. Don't cheat. It causes so many problems. Philip Guo: Right. Sean Tibor: But I think in the best situation, the problem is you've skipped over the actual learning, right. 
You've shortcut the learning process, and you didn't actually get the benefit of going through the problem-solving process or the learning process or whatever the learning outcome that we wanted would be. And we're not even yet to the point of trying to figure out how we either embrace or resist AI, as you've put it, in the longer-term plans. And I think we're all starting to think about it, but whether we're the instructors or whether we're the learners, it's really hard for us to predict which path is going to be best at this point, I think, in most cases, right? Sam Lau: Yeah, for sure. Philip, do you want to say more to Sean? Philip Guo: I think this is a great conversational thread. I mean, I love this focus, at least right now, on, you know, what is the purpose of giving these small, self-contained programming assignments for intro CS? The purpose is not to make something useful, quote unquote, right? Because if you're in an intro class, you're probably not making a brand-new app or a brand-new thing. You're doing kind of the equivalent of math exercises or drills or writing exercises. And the output, in a sense, is a change in your brain. Right. That's the ideal thing: that by doing these exercises and learning about how lists or dictionaries work in Python, you're trying to reconfigure your brain somehow to know, oh, you can actually put things in order, or you can map one thing to another, or you can look up something by a key and get a value. And you're using the programming to learn these computational concepts. And you're absolutely right that these AI tools can do it all for you. But then again, I guess the human analogy is, if you just hired an expert to sit next to you and do your homework for you, then they could obviously do all the homework for you, but you didn't learn a thing, right? 
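The concepts Philip names here, ordering with lists and key-to-value mapping with dictionaries, are exactly the kind of thing a few lines of Python make concrete. A minimal sketch (the names and data are invented for illustration):

```python
# Lists put things in order: guests stay in the sequence we added them.
guests = ["Ada", "Grace", "Alan"]
guests.append("Katherine")     # add to the end of the list
first_guest = guests[0]        # positions start at 0, so this is "Ada"

# Dictionaries map one thing to another: look up by a key, get a value.
rsvps = {"Ada": "yes", "Grace": "no"}
rsvps["Alan"] = "maybe"        # store a new key-value pair
alan_reply = rsvps["Alan"]     # look up by key, returns "maybe"
```

Drills like these are not meant to be useful programs; each line exists to exercise one mental model (positional access, key lookup) that later assignments build on.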
I think that these AI tools are really good. One mental model is that they are like a very smart person you hired to sit next to you and do your stuff for you. So that may be useful if you're a professional programmer or a scientist. If you're a professional scientist and you're like, I need to get this Matplotlib or Pandas graph working, I don't want to look on Stack Overflow for an hour to see how to tweak the graph. Can I just write a prompt and say, give me a bar chart that has this vertical axis, this horizontal axis, this binning, whatever? And it pops out the code, and then you can tweak the code. That is a great productivity boost for the scientist, because they don't care about learning Matplotlib and all the API functions. But if you're in a first-year computer science class wanting to learn fundamentals, then if the tool does it all for you, you haven't really done much learning. So, I mean, the devil's advocate, the other side, is that some people are advocating (we'll get more into this later) that maybe we should think about transitioning the skills that people have to learn, the learning objectives, right? So in one view of the future, the learning objective could be: how do we learn to talk to these AI tools and do prompt engineering and prompt development in the most intelligent and useful way? And some people made an analogy to machine language and assembly language, right? So nowadays we no longer teach students how to program in ones and zeros or in low-level assembly language, because we have Python and C++ and Java and stuff. So there's one view of the future where these LLM-based languages, where you're writing kind of in English or maybe using some GUIs and stuff, maybe that's the future of programming in a certain domain. So then we should teach people how to do that better, instead of teaching people, say, Python or Java or whatever. And I think that the reality will be a compromise. 
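For a prompt like the one Philip describes ("give me a bar chart with this vertical axis, this horizontal axis"), an AI assistant typically hands back a short Matplotlib snippet along these lines, which the scientist then tweaks rather than writes from scratch. The data, labels, and filename below are made up for the example:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display window needed
import matplotlib.pyplot as plt

# Hypothetical data: a count per category, just to have something to plot.
categories = ["A", "B", "C"]
counts = [12, 7, 19]

fig, ax = plt.subplots()
ax.bar(categories, counts)           # one bar per category
ax.set_xlabel("Category")            # horizontal axis
ax.set_ylabel("Count")               # vertical axis
ax.set_title("Counts per category")
fig.savefig("bar_chart.png")
```

The productivity win Philip points to is that the tweaks (axis labels, binning, colors) are easy to make once the scaffold exists; the learning cost is that the scaffold itself is exactly what an intro student is supposed to practice building.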
Just like nowadays there's still a role for C programming and maybe some kind of low-level assembly programming in very niche domains, like in microcontrollers and operating system kernels, but most people don't need to learn that, and most people are doing Python and Java and stuff. Maybe in the future most people will be doing LLM stuff, but then there'll still be people who need to know Python and Java in order to build the tools. So that was just a large swath of stuff to respond to. Kelly Schuster-Paredes: You just summarized everything that's going on in my head all the time. I'm so wishy-washy about AI. First, I'm like this huge advocate: yeah, let's put it into the curriculum, let's let the kids be these great agents of change, and they're going to make so many things. And then my other side is that cognitive science side: yeah, but what about the myelin? And how are they going to reinforce those neurons that are going back in those learning paths? And I think that on some level, a lot of teachers are in that path of, here comes this AI tsunami that everyone's quoting, and what are the educators doing? Are we putting out fires after it, because not many people are getting the PD for it and they just are reading what they're reading? Or are we kind of surfing this wave and going with the flow? And I want to go back and let you tell more about the study and what you guys found, but I can imagine that's kind of the path that a lot of educators have been flipping back and forth on. Philip Guo: I guess I'll put Sam on the spot. Sam is just starting; he's starting as a professor this fall, and he's obviously taught a lot of classes, even as a student. So, you know, what is Sam planning to do? I'll put Sam on the spot. Sam Lau: Yeah. Kelly, I feel you, really, very much so. Because personally, if I had infinite time, I would take the class I'm teaching this fall and integrate AI tools into it right now, if I could. 
If I could wave a magic wand and give students assignments that were crafted so that when they work with AI tools, they won't get quite the right answer, and they'll have to think more carefully about what ChatGPT is giving them in order to actually get the right answer, then I think I would love to do that. I think it would be a really interesting learning experience for them, and also to even show them that, okay, at the end of the day, programming, from a really high level, is: you type something into your computer to tell it what to do, and it does something that's not quite what you want, and you fine-tune that over some iterations. And I think writing programs with a large language model is, in that sense, pretty similar to how we programmed before. It's just that the type of thing that we type into the computer has changed. So if possible, I would love to do that. But practically speaking, class is starting relatively soon. I don't really have enough time to go through all the assignments and the lectures and change everything all at once. So, yeah, I'm caught in that tension, too. These are the way the assignments are; they've been written for a time before AI, and for better or for worse, I pretty much have to use them, because otherwise I don't have enough time to create better alternatives. And at the same time, I want to tell students about a future that's coming up for them. So I think for me, the challenge I'm thinking about is: okay, well, assignments are the way they are. Is there a way to integrate LLMs a little bit? And if so, how much of that is possible, or how much should I allow? And in the future, what things should I prioritize in teaching in order to make that happen? Sean Tibor: I think, based on what you're both sharing here, and I'm kind of keying off of what Philip said around compromise, right. And then also, Sam, what you're saying around trying to add a little bit to it. Right? 
I think the temptation is to treat these as a discrete binary, like it's one or the other. You can either have all AI or no AI. But a light dusting of AI is pretty cool too, right? Having a blend, a compromise, a way to include it and acknowledge and respect AI in the process, and respect the student enough to know that when they get stuck, or when they get to a place where they don't have enough time and they don't have enough brain power to be able to solve that problem, where do they go? Right. ChatGPT is a pretty easy alternative to probably get a B-plus on the assignment, right? So how do you acknowledge that? How do you incorporate that in there and be transparent with the students as well, to be able to say, look, there are reasons why you would want to solve this yourself, right? Because it'll help you understand the concepts better, because the learning that you get will be better. And then the next time you use ChatGPT, you'll be more effective at using it. Right. And the tricky part is knowing where that balance is, because it's ever-changing. Right. And knowing how to use it and where to use it. Philip Guo: Yeah, that's a great point, Sean. I think that, from the instructors that we interviewed, there's one kind of compromise solution. Also, I like Sam's point about the practicality of it, right? I mean, you all know at the K-12 level, you probably have even more time constraints. It's like you wish you could redo the curriculum, but you've got to be running the class next week. Right? And I think, from what we've heard (this is kind of a composite from several instructors we interviewed), one idea is: they already have a class going, right? They're teaching an intro CS class, an intro programming class. They have a curriculum going. Now the ChatGPT stuff is coming about. One thing some instructors have done is to say, as an optional thing, play around with ChatGPT and report back on what you think. Right? 
So if you actually are using it, you should report on the homework that you did use it, and maybe write a little reflection on it. And one philosophy for teaching at the college level is that the homework assignments are really just formative exercises, right? They're really for you. Maybe they count for 10% of your grade or 20% of your grade, just as a nominal check, just as a kind of a check to get you to do them. But most of your grade comes from the several midterm and final exams, and those are on paper, proctored in person and stuff. With the pandemic, it was a lot harder because people had to do remote exams, but now people are going back more in person. So one kind of standard story those instructors are telling is that we're just kind of going old school, right? The paper exams are the way we assess summatively: did you learn this concept? Right? So the paper exam says, here's a linked list, or here's a dictionary, what does this do? Here's some Python code, here's some diagram, what does this dictionary do? And in order to answer this on paper, presumably, you have to understand how dictionaries work. And for the homework, there's a bunch of dictionary homework assignments; if you do them with ChatGPT, then it's your responsibility to learn how it works. Maybe you could actually use it productively. If you get stuck, you use ChatGPT for a hint. It could give you the answer, and then you can ask it: can you explain to me how you got the answer? And then it says, oh, here's how my code works. On line one, I make the dictionary. On line two, here's what I do with the dictionary. ChatGPT is actually quite good at explaining in plain English how code works. So if students are going to use it constructively, as sort of a personal tutor, if it helps them, then great, because the exam is where they really test their understanding. 
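An exam question in the style Philip describes shows a short snippet and asks what it does. Here's a made-up example of that kind of dictionary question, with comments in the line-by-line register an AI tutor tends to produce when asked to explain its answer:

```python
# Exam-style question: what does this code print?
counts = {}                                   # make an empty dictionary
for word in ["to", "be", "or", "not", "to", "be"]:
    # look up the current count for this word (defaulting to 0 if the
    # key is missing), then store the incremented count under that key
    counts[word] = counts.get(word, 0) + 1

print(counts["to"])                           # prints 2: "to" appeared twice
```

Answering this on paper requires exactly the understanding the exam is probing: that dictionary keys are unique, and that `get` returns a default when the key is absent. A student who outsourced every dictionary homework problem would have to reconstruct that knowledge on the spot.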
And that way you get around kind of the cheating issues, at least on homework, because you can do whatever you want on homework, but if you're not studying properly for the exam, then you're going to get a bad grade on it. That's what we've heard from some instructors as a short-term thing. Sean Tibor: Yeah, I just had a funny comment on that. I think I saw it on Instagram. Someone was requiring their students to turn in their homework assignments handwritten, whether it's essays or code. And so they built a handwriting robot to write all of their answers. So there's always another way around it. But sometimes the cheating method is more intensive than the actual problem itself. But I like where you're going with that, because a lot of it is about the ability to demonstrate knowledge, and less so about where the knowledge came from or how it was acquired. And I think one thing that I've thought about a lot, as I made the transition from teacher to engineer and working with a lot of young engineers and still going through this learning process with them, is asking them the question: is the code that you're producing an artifact of the process, or is it the product of the process? Right? So if learning is the product, and my understanding is the product, that's the actual valuable outcome that I want, then all the code that you create along the way is the artifact. Right? It's what you produce that demonstrates that it happened, or it's just generated along the way, versus the actual code that needs to be written. So it's a pretty clear answer. Like, if you're engineering and you need to solve a problem, and the code that you create is the product, then ChatGPT is just another method for being able to produce that, right? It helps make it happen the same as if you had another engineer working with you to solve the problem. And so it's perfectly valid. 
If learning is the output, then you have to be careful about how you are using that tool, because it may not produce the actual product that you want. It may produce a substandard product, and you're focusing all your attention on these artifacts that don't really matter. Kelly Schuster-Paredes: And I think that's the key for beginners, whether it's a 6th grader or a 10th grader or a college student taking CS. And that's why I get so wishy-washy, because I'm sure people have heard this now on all my LinkedIn. They know I'm addicted to AI. It has been my life. I am learning. I'm following so many people, and it's like my new passion, right? But I worry about the dopamine hits of those aha moments in our classroom. Because if the students aren't doing those little desirable difficulties and struggling with the code and getting those wins, those small wins of, I did it, oh my gosh, and here's my basic little program that I wrote that asked people to come to my party and collected them into a list, versus the kid who just dumped it into ChatGPT the night before and hit every point of the rubric. The kid that did the basic one, who learned it and was so proud of what they accomplished, is going to get more of a dopamine hit and more of a connection to the passion of learning, versus a child that dumped it into ChatGPT. And even though it was beautiful code, like, gorgeous, and the app was great and the creative thinking may have been phenomenal, there might not be (I'll say "might" because I don't know), there might not be that same level of dopamine and that addiction to coding. And that is one of my biggest fears: that the learning and the passion for learning will go flat. Philip Guo: That's a great point, Kelly. Yeah, man. 
I think this emotional aspect has been a recurring theme; as a longtime podcast listener, I've heard it when the two of you have been talking about your classroom experiences and the kind of emotional changes in kids when they struggle with a problem, then figure it out and are really proud of themselves and stuff. And again, like we were talking about earlier, it's not like they made something completely novel that the world hasn't seen before. It's something that's quite simple, but for them, it's novel and it's interesting. And the analogy, I guess, since we were talking about crafting and woodworking, is the difference between actually making your own table or your own piece of furniture versus buying it at a store. And of course, if you bought it at the store, it's like AI plopped it out for you. You have a Star Trek replicator; it made it for you, fully formed, beautiful. Yeah, it's great. You have a device, you have a table or a chair you can use, but you didn't feel that craftsperson connection to it, and you didn't feel that pride and that problem solving that's needed to put all the pieces together. And if you don't have the right parts, what do you do? That sort of engineering mindset. Yeah, I think that's totally a risk. Sean Tibor: That's a fantastic callback to Sam's win of the week, too. I mean, Kelly, that's great. Sam... Philip Guo: ...could have just bought everything instead of making it, right? Sean Tibor: Yeah. Yeah. And I think that it really fits, and it's a great way to think about it: is this something you want to buy or something you want to build? And there are absolutely justifications for both approaches, but the real outcome is having the ability to decide wisely about that, right? And having the wisdom to know when to buy and when to build. I think it's a really great point, and I really love that analogy. It's very elegant. Kelly Schuster-Paredes: We're all processing. That is just, like... 
Sean Tibor: Very well. I think we've solved everything. Kelly Schuster-Paredes: I want to throw in, I want to take a moment, because you talked about GitHub Copilot and ChatGPT. But there are more. It's not just those two. For example, I just started playing with the Anaconda Assistant. I got on the beta release of it, and you've got your little notebook in there, and it pops up and it says, what do you want to do? And you say, well, I want to import my table and make a bar chart. And it's right there. Philip's like, back in there looking, but it's there. And I started playing with it. I lied when I said I hadn't coded. I coded a couple of days ago with this AI assistant when I got the invite. But you're talking, here are your data science kids coming into university. You tell them to install Anaconda Navigator. The Anaconda Assistant is right there coming out. They have this next generation of all kinds of data analysis, and then with hopes, and again, this is a big assumption on Anaconda's part, that this data is going to be safe. Because the whole fear of everybody was, if you put your own data into ChatGPT, that's like leaking it out, and here's all your personal investments. But if you have Anaconda backing this and saying, we're a secure site, we have our downloads that are secure, then people are going to start hoping that our data is going to be safe there too. And you've got Replit; Replit's got a deployment of their AI assistant. And then someone told me about, which I haven't used, the Noteable plugin on the back end of ChatGPT 4. So you have not just GitHub but all these other things, and I'm waiting for VS Code to say, hey, listen, you don't have to do a download anymore, we're going cloud, and here's your AI assistant. Because if they don't, they're going to fall behind in the game, right? They're going to lose some power when you have these other people producing AIs. So what are your thoughts?
I just threw a lot of stuff at you, but I told you, I have an addiction this summer. Sam Lau: My thoughts are, well, first, the Anaconda Assistant looks cool. I actually hadn't seen it before. I'm a little embarrassed to admit it, but I just signed up. I literally signed up as you were talking about it, because I looked at it and I thought, wow, this looks really amazing. Which brings me to, I've just been thinking about it from the student's perspective. I think about students who are seeing the news articles. They see all these cool new tools that can do so much for people. They meet people like you who say, hey, I love AI tools, they help me in my work, they help me get more things done. And then they go into our classrooms, and even for myself, I might say one of my policies might be, okay, no ChatGPT, or no Copilot. And so I think it's tricky as an instructor, because we want to prepare students for their futures, and students right now are seeing AI as part of their future. If they go into software engineering, people there might very well be using Copilot on the job while they write code. They see that as part of the future, and it may actually be their future. It may not be like the past that instructors like us, who have been doing this for a little while, came up in. And so I think a tricky thing for instruction is helping students have a healthy relationship with AI: to say, okay, it's not a bad thing, it's not something that we necessarily need to ban, but here are the limits of it, here's what you need to do. And ultimately, how I feel about it is that in the end, it'll push students to... Philip Guo: It'll... Sam Lau: ...force students to need to be better at self-evaluating what they need to learn, because they have the choice, right?
They have the choice to put their assignments into ChatGPT, and they can do that. But at the same time, it's kind of on them now to recognize, okay, what's helpful or not for my learning, and what do my instructors recommend. But at the end of the day, it's their choice, and it actually has always been their choice. I just think now the choice is more in your face than ever. Sean Tibor: I think that's a really healthy way of looking at it. Right. What's interesting is we've talked a lot about giving students agency over their own learning, right, and being able to choose what they learn and how they learn it, and having that enhance the learning process in a really powerful way. But this is almost like the extreme example of that. We've given them so much agency that they can choose not to learn and still get the outcome of the grade, right, in a way that's really convenient and available 24/7. This is not something we can reverse, right? You can't put AI back in the bottle. It's already out, and it will be here. So now the question is, how do we adjust, adapt, and leverage those tools to get back to that desirable outcome that we all have, which is that students are learning really useful and valuable skills for the future? Right. And it's not an easy problem to solve. I don't think anyone who claims to have solved it is really there yet. I don't think we're at the point where we know how to solve this problem. But I like your point, Sam, about these being skills of the future. Depending on what your role might be, you may use them all the time, or you might use them sometimes, but they're not going away. So how do we deal with them? Kelly Schuster-Paredes: I wanted to point this to Philip, because he actually hinted at talking about skills earlier in the show. What are your takes on transitioning skills?
Because we all have our opinions of what's going to be the jobs of the future, and the future meaning like 2029, before the singularity. Sean Tibor: At the rate things are going, like next month. Philip Guo: What will I be doing next month? Sean Tibor: Philip? Philip Guo: Yeah, I think that, just on a meta point, zooming out more: this space obviously has a lot of hype. There's a lot of press attention, there are a lot of thought pieces. I think everyone has opinions on this topic, and maybe what we see in the popular press and online are the more extreme viewpoints. So, just like what we're talking about, that's why it's great to have a long-form podcast, because the clickbait talking points are like, oh, programming will change forever and we need to upend everything. Those are probably a bit too extreme, right? And the more nuanced, thoughtful conversation that we're having here is more realistic. Again, it's so hard to predict the future, but realistically, it seems like more and more companies are going to be using these AI assistants internally. The exact tools they use will depend on, like Kelly mentioned, the privacy policies of the makers, and all that stuff needs to be hashed out. I know that large companies are developing their own internal tools, right? I think Meta just released a paper about this, so it's public. They have a paper about how they trained their own large language model on their internal code base, and they're having a bunch of software developers within Meta use it to boost their own productivity, which makes a lot of sense. Engineer time is expensive. So if you can make the software developers at your company more productive in your code base, you're going to have a competitive advantage. And I'm sure other companies are doing the same thing at that scale.
So the life of a professional software developer will involve more and more use of these AI tools, whether it's GitHub Copilot or a specific one at your company. And there are other companies, like Sourcegraph. They just released this thing called Cody, which is supposed to train on your own code. Sourcegraph can index millions of lines of code, and they're going to feed that into their LLM. And they probably have privacy terms saying, we respect all your internal enterprise privacy requirements. So I think the life of a professional software engineer will involve more and more of these LLMs, and I think that people are going to start getting good at it, right? As software developers, they're going to start getting good at writing prompts and stuff. But my intuition is that, at the fundamental level of teaching computer science, K-12 and college level, it's not going to change a ton, beyond everyone having to acknowledge the elephant in the room of AI tools. I think with some tweaks to curriculum, it won't change a ton, just because the fundamentals that you want to learn are still fairly stable. This debate has been going on for decades, right? This idea of, why are we still teaching X language, or why are we teaching people to make linked lists or whatever; that's not what you do in industry, you use libraries. This whole debate about teaching computational thinking and computer science versus practical software engineering has been a forever debate. And it seems like one major role of schools, both at the K-12 and university level, is to teach these fundamentals of computational thinking and computer science: algorithmic thinking, designing algorithms, data structures, all that stuff. So the hope is that with those fundamentals in place, you can pick up whatever new libraries and frameworks you have to learn on the job. I can speak a bit about this, right?
We obviously know that if you get out of college with a bachelor's degree, even a master's or PhD, and you jump into a company, it's not like you can be productive right away. You have to be onboarded with the company's code base and how they do things and what their infrastructure is. But the hope is that with good training and fundamentals, folks like Sean can onboard you very quickly. You're going to have to do onboarding at any specific job. So perhaps the role of school, and maybe I'm very old school now, is to teach you those fundamentals that don't change as much. So maybe I don't see things changing too much at the teaching level, but maybe my words will just come back to bite me in a few years. Sean Tibor: The other thing that brings to mind, and I think Kelly and I have talked about this also, particularly with Eric Matthes, is that there's a certain number of students who, no matter how you deliver the information about computer science, are going to think that this is the greatest thing that they've ever learned, and it's amazing, and they can't wait to dive into it. Philip Guo: Right? Sean Tibor: They're a natural fit for this. They just love it. And no matter what tools are at their disposal, they're going to want to try everything and be curious and do all of those parts. At the opposite end of the spectrum, even if you're the most amazing teacher in the world, you have people for whom it just bounces off. They don't like it, or it's not for them, they're not into it, and that's okay. Also, I think the students that are at risk are the students that maybe would discover something interesting or something that they loved or didn't expect to love about coding and computer science in this space. And are we robbing them of that opportunity by taking away the dopamine? Right. I solved it myself. I figured it out. I used these things.
So there's an additional dimension to this: the type of learner and their personality and their relationship to the content that we're teaching them, and where they fall on that spectrum. Sam Lau: Yeah, for sure. I definitely agree with that. And I actually think this is one area where incorporating AI could help, because right now in programming classes, our assignments are, I would say, structured for a very specific type of person: a person who likes solving little puzzles, so to speak, because a lot of programming assignments are almost like little puzzles. I've seen an example of an assignment where the instructor gives students some code but then asks students to do something creative with it. So the code produces a little dot on the screen that moves, and they tell students, okay, now make two dots appear, and when they collide, write a story about it. Tell us a story about how two friends at the supermarket ran into each other and said, hey, let's get some bananas together, and they walk off and go buy some bananas. So I actually think that if AI tools can give people a different entry point into programming, maybe we can actually help those students who were hesitant, who weren't quite sure why they would want to learn programming or what could motivate them to, and kind of give them that motivation up front. It's optimistic, but I think there's an opportunity here to stretch our imaginations a bit about the first thing people see when they write a program, or the first program that people see: what could it do for them? Because right now, the first program that most people see is, like, print Hello World. I think for a lot of people, that doesn't quite resonate. I think for some other people, if they saw, okay, I can use a program to do something useful for myself, maybe it's like how to split the bill properly with my friends.
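As an editorial aside, a first program like the bill-splitter Sam mentions could be as small as the sketch below. The function name and the numbers are made up for illustration; nothing here comes from the episode itself.

```python
import math

def split_bill(total, num_people, tip_percent=20):
    """Each person's share of a bill, tip included, rounded up to the cent."""
    with_tip = total * (1 + tip_percent / 100)
    # Round each share *up* so the collected shares always cover the bill.
    return math.ceil(with_tip / num_people * 100) / 100

# Four friends split an $84.50 dinner with a 20% tip.
print(split_bill(84.50, 4))  # prints 25.35
```

A beginner can run this in any Python environment, and a natural follow-up exercise is handling uneven splits or itemized orders, which is exactly the kind of personally useful starting point Sam is describing.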
Then maybe they could imagine themselves becoming programmers, because they can see themselves using what they've built. Kelly Schuster-Paredes: It's funny you say that. Having taught, I don't know now how many classes I've done, Sean, too many basic classes. The first four or five weeks of a 6th grade class, you're teaching them all these basics, because I literally will go through every basic concept in Python in the first six weeks. Everything except for dictionaries and libraries, right, and functions. Everything except for that. And it's like, wow, this is great, this is awesome. And the 6th graders are really excited, and they're making stuff. It kind of starts petering out a little bit towards the last of those weeks. 7th grade, though, sometimes you have this sort of decline, and you can start seeing the split from, wow, I did this amazing thing, to, oh my gosh, I'm back to doing homework. And that's an interesting point that you make, Sam. What if, after you get those really fine-tuned basics and you have that vocabulary and you start talking about these little things, you could sort of add that hit, that excitement, with an AI tool that can help generate that momentum, or split it up? Here are my four weeks of boring concepts, and then we go, bam, now use those concepts and make something really cool. Or, here are my boring concepts, and bam, make something really cool with this AI tool. It could be a game changer, putting foundations in there with AI, winning over wishy-washy people like myself, who's back on the fence between both extremes. So I don't know. That would be interesting. I did want to switch real quick, and I know I said a lot, but I wanted to ask about professional development. What about your CS teachers or your university teachers?
What kind of professional development and preparedness are they getting in the wake of this supposed tsunami of AI? Right. Philip Guo: Yeah. I mean, I think the short answer is I don't think they're getting any. I think at the university level, there are some workshops. Sam, I've seen some seminars in place that are basically people scrambling and saying, we don't know what's going on, and then, what are you doing? Are you setting a policy, an academic honesty policy? But looking forward, I'm sure that people are going to start coming up with best practices: the Computer Science Teachers Association, the ACM. I'm sure there are probably working groups being put together now, as we speak, for recommendations on best practices and such. But at this point it's so early. It's been less than a year since ChatGPT came out, and in that year, so many new tools have come out, right? And things are changing. So I know it's still kind of a wild world right now. That's the note we're ending on, I guess. Sean Tibor: Yeah. I think we're just about out of time, and I know we could probably talk forever about this, because there is so much to discuss. I want to say thank you for joining us and bringing us some information that you've gathered and collected and compiled for us. It is refreshing to be able to hear the words: we interviewed people, we talked with them, we gathered information, we got a sense of what's happening across so many different areas of education right now. It is way better than the sound bites that you referenced, Philip. So thank you both for the work that you're doing to help bring this together and give us some real information about the current state of things. If people want to learn more, Philip, you mentioned your blog post that came out today. Where can people learn more about the work that you're involved in and how you're continuing to gather this information?
Philip Guo: Yeah, you can just search my name on Google and go from there to the blog posts. I think for just starting out, those are great starting points. Sean Tibor: Great. And, Sam, you are newly minted and ready to teach this fall. I wanted to say good luck to you, and I'm glad to hear that we've got another great teacher out there joining the professor ranks, and I wish you the best of luck as you go into it. If there's anything that we can do to help figure out curriculum and how to incorporate some of these things, you can always reach out to us, and we're happy to help. Sam Lau: Thanks, Sean. Thanks for the encouragement, and I really do appreciate that offer. Sean Tibor: All right. So, Kelly, do we have anything going on for our listeners? Anything that we need to share, other than to say, everybody, good luck going back to school this fall, and we're here for you. Kelly Schuster-Paredes: I have so much to share, but no. We are recording next week, so join us on the live. We are having Josh Lowe, I believe. Yes, from EduBlocks. They combined with Anaconda. We have friends, Sean and I have friends at Anaconda now, two amazing people that we know working at Anaconda, and he's joining us. And they just launched EduBlocks, which is kind of a nice little switch too, because now, instead of having to type out a whole bunch of Python words, you can just drag a block. And that's been a project for about five years, and Anaconda has supported him and has taken on the project with him. So he's going to be on the show. That will be a great listen for everybody. Sean Tibor: Sounds good. And I think we've crossed the 450,000 download mark on the podcast, so we are making good progress. Thank you, everyone, for listening. Please remember to share with your friends, anybody who might get some value out of our show.
If you have the chance to leave a review on your favorite podcast player, whether that's iTunes or Spotify or wherever, I think it goes into some big, massive algorithm that puts our podcast in front of more people. So let's see what happens. Put a review out there, tell us what you think, and we'll go from there. I think that'll do it for this week. Philip, thank you. Sam, thank you. And we hope to have you back on the show soon. Sam Lau: Thanks so much, everyone. It was a real pleasure. Philip Guo: Right, thanks again. It's great to be back. Sean Tibor: Yeah, it's really good to have repeat guests, you know. So, for Teaching Python. Kelly Schuster-Paredes: This is Sean, and this is Kelly, signing off.