ARTY: Hi, everyone. Welcome to Episode 167 of Greater Than Code. I'm here with my fabulous co-host, Jamey Hampton. JAMEY: Thanks, Arty. And I'm here with my fabulous co-host and friend, John Sawers. JOHN: Thank you, Jamey. And I'm here with our guest, Ted M. Young, aka JitterTed. He's been developing software and training developers for several decades. In the '90s, he traveled the world as a Java trainer and consultant. In the 2000s, Ted led extreme programming projects for the government and during his time at eBay. He went on to introduce lean and agile concepts at Google, Guidewire Software and Apple. In 2017, Ted came full circle and is once again focused on human learning through technical training and coaching, both in-person and online. His company, Spiral Learning, uses the science of how we learn to design, create and deliver well-tested Java training and coaching for those who code, and those who want to. Welcome to the show, Ted. TED: Thanks. Great to be here. JOHN: And you know what question's coming, which is: what is your superpower and how did you acquire it? TED: My superpower is translating things for people to understand better, which kind of makes sense as a trainer. That is my job. There are sort of two parts to that job. One is taking knowledge and figuring out how to get it into the heads of people who don't have that knowledge, or maybe only have part of it. And so, how did I acquire it? I have no idea. I wish I knew. I wish there was some event that happened that said, "Oh, you can do this." But I've always been a researcher. I remember, even back when I was a kid, just loving to read, loving to figure things out. And in university, in sort of pre-internet days, being able to roam the stacks, especially the stacks of computer journals, and just read that stuff and be able to translate that into working software. It's like, "Why hasn't anybody told me about this?" Or, "Why aren't people looking at this more?"
So taking that kind of -- especially in research journals, and I still do this, Arty knows this. If you ask me a question, it's very dangerous because I'm going to go down a road of researching, going to Google Scholar, going elsewhere and trawling the bibliography, and then coming up with it and saying, "Here's what I think is going on here," or, "Here's what I think is interesting." And so, sort of translating that to everyday kinds of things is my superpower. JOHN: It sounds like a big component of that is curiosity. TED: Oh, yeah. Probably a little too dangerous, that curiosity. That's a really good point, because one of the things I do when I'm not training is I livestream and I code on the livestream. And I have to fight this tendency of going down a rat hole of trying to say, "Why isn't this working? What's going on underneath?" And digging several layers too deep, where I really don't need that information, but I'm really curious what's going on. So, I think the curiosity is really good, but I think it also, given my perfectionist and ADD tendencies, can be actually really, really bad. So, I wish there was a tab limit on my browsers, because I have 432 tabs open right now. And that's a low number for me. JAMEY: I'm stressed out just thinking about that. JOHN: I have 865. JAMEY: Oh, my goodness. [Laughter] TED: Yeah, it's hard. The curiosity is useful but it's also dangerous, like I said, because there's also the part of, at some point, you've got to pull it back together, and that's really hard. That's like consolidating it. And that's why I love giving conference talks, because it's like, I've got a deadline, I know all this stuff, I've gone three levels deep, and now I've got to condense it into 45 minutes. And so, that's a good thing. It's usually that I'm over-prepared in terms of the research, and getting all that stuff together is just a matter of sitting down and doing it. But that's not as much fun as the research.
JAMEY: What strikes me about this story is that a thing I often say and think about is that programmers love to do puzzles, and I think that writing code is like a big puzzle and that's what is appealing to a lot of programmers. And the way you talk about reading and research and figuring out what stuff means also feels to me like a similar sort of puzzle. Would you agree with that or no? TED: Puzzles are interesting. Still in this day and age, there are interview questions for developers that are basically logic puzzles. I hate them. I hate them so much. I've actually turned down interviews if that was the question. And there's a set of people who like sort of the very math-y computer exercises, the Project Euler kind of thing. For me, I am totally not interested in that, which is really weird because you'd think I would be, but I am not interested at all. I think because I'm much more interested in, how does this relate to anything real? Why are you asking me how to weigh quarters and find out which one is bad? Like, what does this have to do with anything? I'm much more interested in what users are doing, what's going on in the domain, what are the bottlenecks? And so, I'm much more interested in solving problems. We're in this industry because we like to solve problems. If you don't like to solve problems, you're probably in the wrong place. But we definitely are problem solvers. I think for me, the problems have to have sort of depth to them. And with a lot of the logic problems, there's a trick or there's something you have to know, and then you just get it. For me, maybe that's it: there's no one right answer. There are multiple different possible ways, and I have to balance which one is good, which one is not. And to me, that's always been sort of the fundamental of engineering. It's just trade-offs. Like, what am I trading off here versus this?
I was doing a livestream yesterday and I'm working on a game to teach test-driven development. And one of the things I was trying to figure out is how to have multiple players get real-time information from each other. So, I'm using WebSockets and attaching listeners, and I'm like, "Am I attaching too many listeners to this? Is that going to cause a performance problem?" I'm thinking too hard about this, and it's a trade-off: which is easier for me to implement and get done, so I can figure out if this game is worth anything or not? And so, that kind of trade-off -- if it becomes a problem later, then I can worry about it. And I get this question from my students when I teach: "Isn't this way of doing this, using streams in Java, isn't this going to be faster or slower than for-loops?" And I'm like, "If that's a problem, you'll notice it. Most likely, it's going to be overwhelmed by everything else that's going on in the internet." And so, it's really about the trade-offs, and maybe that's why I don't like those kinds of logic problems. There's no trade-off involved. There's no sort of give and take. JAMEY: Do you find that you have like a Eureka moment when researching? I think that's one of the things that I associate with both puzzles and programming. I didn't understand this or it wasn't working, and now suddenly it is. Do you find that you have that when you're reading and researching, like, "I didn't quite understand this, but it just clicked"? TED: Research for me is a drug, and every time I learn something new from it, it's a hit and it's a high. That's why I'm really addicted to it. I was literally, just this morning, looking at a paper that was comparing lecture versus board game and how well that did in health education. And I'm like, "Oh, what did they do? What did they find?" And just finding that paper, even just finding something -- "Oh my God, this is exactly what I was looking for" -- is such a high.
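The streams-versus-for-loops question Ted fields from his students can be made concrete. Here is a hypothetical Java sketch (not code from the episode; class and method names are invented): both forms compute the same total, and for typical collection sizes the difference is, as Ted says, noise next to everything else going on.

```java
import java.util.List;

public class TotalsExample {
    // Classic for-loop: explicit and imperative, nothing hidden.
    static int totalWithLoop(List<Integer> prices) {
        int total = 0;
        for (int price : prices) {
            total += price;
        }
        return total;
    }

    // Stream pipeline: same result, intent expressed declaratively.
    static int totalWithStream(List<Integer> prices) {
        return prices.stream()
                     .mapToInt(Integer::intValue)
                     .sum();
    }

    public static void main(String[] args) {
        List<Integer> prices = List.of(3, 7, 12);
        System.out.println(totalWithLoop(prices));   // 22
        System.out.println(totalWithStream(prices)); // 22
    }
}
```

The trade-off here is readability versus familiarity, not speed; either is fine until a profiler says otherwise.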
And then it's like a treasure trove. I get to now open it up because of its bibliography. I have 3-5 references that I can now easily follow. So, there's definitely a hit. And then tying things together, that's what I live for. The realization -- you never know if you're the first one to connect these things. But when I connected some of that research I'd done on how people learn with test-driven development, I'm like, "Oh my God, these are mirrors of each other. These are so similar." The idea that you're building on prior knowledge, on prior knowledge of the system, and you're verifying that. A multiple choice test is verifying that you have knowledge. That's what a unit test does: it verifies your understanding of the system. I actually have a pending blog post basically running through the comparisons, where these things are just the same. And it fits so well, because we talk a lot about how writing software is learning. And so, of course, they're the same. JOHN: I've always gotten a lot of satisfaction out of finding ways that disparate ideas fit together neatly. And so, I can totally relate to that sort of dopamine hit that you get when you see that these two things are either correlated or fit together, or one could explain the other and vice versa. And those are really gratifying, for sure. TED: Yeah. And it's nice to also connect it with your intuition. It's like, "I know this works and I'm not quite sure why." Maybe I've researched it a bit and I can't find anything. And then I come across something that's like, "Oh, that's why." That's why this makes sense. And here are some places where people have done some studies on it, and it's like, "Oh, okay. That explains it." And then it gives you a little bit more nuance. It's like, "And here's a little aspect that maybe you didn't know." And so all of a sudden, you have these three new ways of thinking about something.
It's always funny, I find that based on the latest thing I'm reading, it sort of infects my daily thought. I was reading a book by Anders Ericsson on expertise. He's one of the experts in the area of expertise. And I found that I had read that just before a consultants' retreat that I went to, and that infected everything, just the way I was thinking. And so, that's why I tend to read a lot of different books, because I want to pull from different places and sort of set that context or [inaudible] sort of environment that I can pick from. ARTY: Yeah, I definitely have the whole addiction-to-research sort of thing too. One of the things you said early on in the beginning was about translating all these academic things, tying them together, translating them to the everyday context of software things. So you're learning about all these intricacies of how we learn, how information is absorbed into our brains, such that we can use it to solve problems. What's an example of a case where you've taken some kind of abstract academic idea around how expertise is formed, how we learn, and then mapped that to an everyday context? TED: I think the biggest impact that my learning research had was around what the research calls worked examples. And it's not some obscure, hard-to-understand, very intricate thing. A worked example is: here's the thing you're trying to learn, except maybe we've left out some steps, or we've shown you step-by-step. So in a math example, it might be, here's long division and here's every single step of that long division. And then what you do is you start taking out some of those steps and you say, "Okay, now, fill in the missing step." But the idea of a worked example is showing your work, showing how you arrived at the answer. And this just comes up everywhere for me. Worked examples come up a lot because we do a lot of learning.
And so, I'm reading a lot of tutorials on stuff, because I'm doing a lot of frontend work that I don't normally do, and I'm trying to go through some of these tutorials. And it's like, "Here's a blob of code. We're not going to tell you why we did this, this, or this." And it's very much copy and paste. And it's like, "Okay, I could copy and paste this and maybe get it to work, but why did you choose this versus this?" And so, I was sort of ranting on my stream about, what are you trying to get across? And so, being able to say, why did you choose this? And I do this when I teach and I do this when I live code. It's like, I am making this decision because I'm balancing how easy it is for me to write this code versus how many listeners are attached to this window. And so, it's all about showing the decisions that are made. And we codify some of those in patterns. That's where the patterns come from. It's not supposed to be, let me just copy and paste this pattern. It's supposed to show you: what are the trade-offs, what are the differences? And so, the worked examples -- I go through a lot of documentation, and I use these in my talks as examples of what a good example is. For example, if you're using foo and bar in your example, that is not good, because that has no meaning and it's just a placeholder. And to be honest, it's a bit of laziness. It's very hard -- and I know this because it is my job as a trainer -- to come up with really good examples. Really good examples are those that -- and this then starts pulling in cognitive load theory -- respect how much a novice who isn't really familiar with this stuff can manage and hold. I use the juggling term: how much can we juggle at the same time? Not a lot, three or four. It's not seven, it's actually three or four. We talk about memory limits; it's actually three or four before we start hitting errors.
And so, three or four is not a lot, especially for a novice, when we don't understand the terms. How do we juggle that? So, a worked example where it's not too much. In this tutorial I was going through, there were multiple lines of code, and half of it wasn't necessary. And so, you have to provide the simplest possible thing, and then you build on it, and then you kind of come around again and build on it again. That's why I named my company Spiral Learning: because you're kind of spiraling up from something simple, building. You make a complete rotation, a complete circle, but you're moving up in complexity. And so, the cognitive load theory, the worked examples, how much novices know and how we build knowledge -- all that now gives me a solid foundation based on research that I can build on. Whereas before, I just had a sort of natural, intuitive way of teaching. I never studied this stuff when I was doing training 20 years ago. I happened to be good at it. Who knew? I was terrified of speaking in front of people before. So who knew I'd be a good trainer? And now that I've built up my expertise, I can read some of this stuff again and say, "Okay, here are some things I can think more about," and start thinking about how I can do some research on my own. How can I do some sort of field research? And that's something I'm thinking about. ARTY: Fascinating. There's so much to unpack there. You started with the steps of long division, showing your work in long division. I think that's a really great reference point to think about. You have this problem defined, and then we can say, "Oh, the answer is 42." But how do we get from the problem to a solution? What were all the paths and decisions that we made along the way? And if we think about another human being able to learn how I make decisions and navigate between this problem and answer, they need a way to see all of these steps in between.
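The long-division worked example Ted and Arty keep returning to can be sketched in code. In this hypothetical Java version (names invented for illustration), every step is shown with its reasoning, the way a fully worked example would be; a "completion problem" version would blank out one of the steps for the learner to fill in.

```java
public class LongDivisionWorkedExample {
    // A fully worked example: each step of integer division-with-remainder
    // is spelled out, mirroring the "show your work" idea from the episode.
    static int[] divide(int dividend, int divisor) {
        // Step 1: how many whole times does the divisor fit into the dividend?
        int quotient = dividend / divisor;
        // Step 2: multiply back to see how much we've accounted for.
        int accounted = quotient * divisor;
        // Step 3: subtract to find what's left over.
        int remainder = dividend - accounted;
        // (In a completion problem, Step 3 would be removed and the
        // learner asked to supply it.)
        return new int[] { quotient, remainder };
    }

    public static void main(String[] args) {
        int[] result = divide(17, 5);
        System.out.println(result[0] + " remainder " + result[1]); // 3 remainder 2
    }
}
```

The fading sequence (all steps shown, then one removed, then two) is what gradually transfers the work from the example to the learner.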
And when we're looking at code reviews -- in that context, after we've written the code, we're evaluating the quality of the code after the fact. It's like, "Okay, there was this problem," this ticket we needed to implement, and we're looking at the solution, the answer. And that work, the steps of the long division, how we got from A to B, is not something we're gradually gaining experience with in the context of code reviews. And there are other contexts where we can learn this path of decision making, learn the showing of work of these examples. But it makes you realize how many of the contexts that we have integrated into our everyday processes don't support the type of learning that you're talking about. TED: I feel like you were bugging my bathroom this morning, because I was thinking out loud about code reviews and how that's exactly the problem with code reviews. I've always hated code reviews, sort of. Well, the way we currently do code reviews as pull requests. So I want to be precise about that. Because traditionally, code reviews back in the day meant everybody sat in a room, read through some code, and talked about it as a group. These days, it's asynchronous, which I find to be problematic for that exact reason. In order to unpack what you've done, we have to basically go through your decision-making process. And to me, that's such a waste of time. It's why I love pair programming and mob programming, because when we're pair programming, we're both building that knowledge and going through those decisions. And so we have, in at least two heads, that information of how we got there. And I think a mob is even better, because that way, everybody has it. And now we don't have to explain it to everyone. Because the code is just these snapshots in time; it is exactly as it says. It's the answer. Here's the answer. Well, how did you get to the answer? Maybe you got to the right answer, but what decisions did you make along the way?
And if we don't know the decisions you made along the way, maybe you actually have an incorrect mental model or representation of something that we didn't pick up, and that may hurt us down the road. And it's funny, because I was also thinking about -- so one of the things I do is a lot of live coding, and then I go and review the recordings, because I edit them down and take notes. And it's fascinating. It is fascinating to be able to see yourself when you didn't know something, and see what you struggled with, and see what you figured out. And I'm yelling at the screen like, "Ted, no, don't do that. You're going to go down a rat hole for half an hour." And I can't stop him, because he's already done it. And there's the extra benefit that, since I'm doing it as a livestream, I have to externalize my thinking. So not only do I get to see the artifacts along the way, but I get to hear my thinking and hear what was wrong or what was incorrect or just what I was confused about. And as a trainer, this is gold for me. I mean, this was not expected when I started doing this. Because the problem as a teacher and trainer is what's called the curse of knowledge. I am now an expert. I've built and constructed this intricate graph of information, and how do I now have somebody else build that? And I've forgotten the simple stuff. I've forgotten the stuff I didn't understand. And being able to go back and see exactly what I didn't understand is just awesome. It's been so surprising how big of a gold mine it is. Actually, a guy I know did this for himself. He wasn't livestreaming, but he was recording himself doing it. I highly recommend taking some time if you're doing some coding and doing that -- but not just half an hour, your entire day. Your entire day is probably four to six hours of coding, if you're lucky. Recording that and then reviewing it. If you're learning something new, that is -- not if you're doing kind of boring tasks.
But when you know you're going to be learning something, I think it would be an awesome exercise to do that. JOHN: So are you saying you're recording it and then you're also narrating it as you're doing it, so you've got that extra information about the process? TED: Exactly. For me, as an introvert, that's exhausting. It's exhausting to externalize your thinking the entire time. And this is why, if I do live coding for three or four hours a day, I am done. I'm like, "I'm happy. I'm done. I got a good day's work." So basically, take a screen recording, take a microphone, and externalize your thinking as much as possible while you're struggling with stuff. I think it would make fascinating -- it's a little tedious to review, but you can play stuff back at one and a half or two times speed and skip over the boring parts. For me, it's just been amazing what things I find. And I've forgotten. I mean, that's the thing. We forget what we didn't know. We also forget some of the painful process. And I think this is where you can actually build a lot of empathy just doing that. Who are you going to be more empathic with? If you see yourself struggle, like, "Oh my God, this is horrible," then go fix it, contribute to something. I am now going to write a tutorial on what I just learned, because I couldn't find a good one and I saw exactly what's wrong. And so I think improving documentation, improving our communication, improving that kind of thing can really help if you can see where you get stuck as a novice. It's kind of funny, because -- I'm also sort of an amateur user interface researcher -- this is user research, except on yourself. You're observing yourself doing something with a task. It's a little bit more unconstrained, but you can see where you get stuck. And I just realized that that's what I'm doing. [Chuckles] JAMEY: You've been talking about livestreaming like it's a tool for yourself.
But I guess what I also wanted to ask about is -- you've talked a lot about being an introvert and being afraid of public speaking when you first started doing it. I know that you do the livestreaming on Twitch, right? TED: Yup. JAMEY: So I assume that there are other people participating in this with you, at least some of the time. And I'm wondering, how is that for you and how did you become comfortable with that? And also, what do you think those people -- I understand what you're getting out of watching yourself, but what do you think those people are getting out of watching you? TED: I'm going to start with that, because sometimes I'm not sure, and I have to remind myself -- because I do so much livestreaming myself and not a lot of watching -- that I forget. It's like, why are people watching me do this? Here I am reading documentation, scratching my head, getting visibly frustrated. Like, why? And then I go watch somebody else. And it's much more interesting than sort of pre-recorded, pre-packaged tutorials, which are useful -- I certainly hope so, because that's my business. But seeing people struggle and being able to help, even if you don't say anything -- and this is what I really hope people get out of it -- is that, look, I've been doing this for decades. I am an "expert," but I make mistakes all the time. I go off the rails all the time. And if people can see that even somebody as senior as me does that, and it gives them a bit more comfort that when they get lost, it's okay, we all get lost -- to me, that's a win in and of itself. So when I'm streaming, sometimes I'll get a lot of viewers and there'll be some chat, and then sometimes it'll be quiet. But I've had more than a dozen occasions where suggestions from people have made their way into the code. And then I basically record their usernames as variables in my tests or comments in my code.
It's a way to immortalize that this person helped me figure out that I needed to go with the decorator pattern. And so it's not quite pairing, not quite mobbing, because it's very asynchronous. I've got the full bandwidth of audio and video, and they've got chat -- which hopefully can change, but for now, that's sort of the way it is. And when it's really going well, there's discussion and there are questions that get asked. And there are interactions between the viewers, and so there's a bit of community. And I'm big on community and building community -- and building community is hard -- but it's about giving people a place to ask questions that they maybe don't have another place to ask. So, almost certainly multiple times a week, I will get people saying, "How do I learn this stuff? Where do I go?" Or, "I'm about to start a junior dev position and they said I need to know testing. I don't know about testing. What do I do?" And I will basically spend half an hour and tell them about testing, because I think it's good that they asked. And I think testing is important and I love talking about it. And so it's interesting -- I mentioned being an introvert and afraid of public speaking. I realized once I'm passionate about something and really care and want to share that information, you can't shut me up. I will talk about it and talk about it until your ear falls off, and keep talking about it. I don't quite know why, but there's sort of the reverse high of being able to share something and see somebody else get that information, which I think is at least a part of it. JAMEY: I think it's really interesting that you talk about the parts where you struggle and are visibly frustrated as the important parts, because I think those are the parts that I would be afraid of showing people on a stream. TED: Yeah. I mean, I have the advantage that I've been a trainer and tech lead, sort of someone who's done that and is very comfortable making mistakes.
But even so -- I'm coming up on my one-year anniversary of doing the live coding, and so I was looking at how many hours I've streamed. I've spent 376 hours streaming live coding. And so, I look back: what have I learned? What have I changed? Sure, there's the tech and graphics stuff, but also being more comfortable with saying, "I am really frustrated. I don't know what's going on. I don't know why this works. Why is this documentation so crappy? Why is this broken? Why can't I figure this out?" And then saying, "You know what, I'm going to take a break. I'm going to take a little walk and clear my head, and I'll come back and see if we can figure it out." And only recently, I think, have I been comfortable with -- there was more than one occasion where it was like, I need to stop streaming. And I basically just stopped. But again, being comfortable doing that -- I mean, it's a little low-risk in a sense, because maybe nobody's watching, and so nobody's hearing me say that and it doesn't really matter. And if people are watching, it's hard to tell if they're not speaking. So there's a bit of an advantage to that, but I think it does build up that resilience to be able to be more open in that way. JOHN: Similar to you, I've gotten that same sense of gratification when you successfully convey something to somebody else. I was a mentor at an online bootcamp for many years. And I also tried to focus on that same process of showing the things that I didn't know and how I would go learn about them, rather than saying, "Oh, let me go research and get back to you."
I would try to demonstrate the process of, I don't know everything, and try to let people see the messy internals of what your day looks like and how much time you spend on Stack Overflow. Especially for people new to the industry -- they think we're all just these sort of geniuses that type out perfect lines of code all the livelong day. So I was trying to make it feel more comfortable to them when things are broken, and it's still broken, and still broken. TED: And also, "This worked. Oh my God, this worked. What? This worked right out of the box? What the heck? That can't be right." That happened to me yesterday. I was just implementing something and it just worked. And the fact that I was surprised that it just worked -- I think that's the flip side of, stuff is broken and I can't figure it out. It's like, it just worked and I'm surprised. I mean, I've been doing this forever and I'm still surprised when stuff works, because I have this expectation that everything's going to be broken, and sometimes it actually works out, which is nice. ARTY: I think this is like another variant of a superpower. I'm thinking about the juxtaposition: these times when we're super confused are times when we naturally feel really uncomfortable, because we're in a state where we don't really know what's going on. But at the same time, being able to be comfortable with your own confusion, to be able to put yourself on stage, making mistakes, being frustrated, not knowing the answer, being in this confused state -- to normalize the okayness of that as a leader. That combination of things creates space for us to look right in the face of our confusion and try to understand what caused it. And so, it's creating opportunities for higher-vulnerability conversations through normalizing things. I think that's worth calling a superpower. TED: Sure.
And it's still hard for me. Doing live coding, while in a number of ways it's very similar to the training I do, is also very different, because when I'm doing training, I know how it's going to go. I've set the curriculum. Yeah, we may go off on tangents, but I kind of know how it's going to go. There may be questions that come up, but I kind of know how it's going to go. Versus, there are times when I'm live coding where I get lost. I have no idea where we're going to go. I have no idea what to do next. And it's hard. It's hard. And one of the things that live coding gives me in that case is the push to get through it, because otherwise, I'd have the tendency to just put it aside and do something else. But I kind of feel like, let me push through this. And it's also an adaptation for my perfectionism. It's like, I can't sit here for 30 minutes on this stream and talk about this one variable name that I'm trying to get perfect. So, let's move it on, because there is a little entertainment aspect to it. I've got to keep this show moving. And so, that's been really helpful. And that's how it started, because I'd watched Suz Hinton, who is noopkat, on her streams well over a year ago. And I was pairing with a friend of mine and I said, "We should do this more often." And I forget exactly how it came up, but I said, "Oh, maybe I'll do livestreaming," because I wanted someone I could pair with more often, and there wasn't anybody available. And so, the live coding provided this sort of virtual pair, even if nobody was watching -- and seeing that there's one viewer and it's basically me is a little depressing sometimes. But keeping in mind that while I'm doing this, hopefully other people will get benefit from it. It is a very selfish thing. I am doing it because it's a way for me to get work done.
It's a way for me to adapt -- the fact that I've spent 376 hours and I've gotten two complete projects done that I don't think I would have finished otherwise. JOHN: Provides a little accountability to keep it going. TED: Yeah. JOHN: And also, I think like you were saying, it helps clarify the priorities. So maybe the rabbit holes don't go as deep, because you're trying to keep the audience engaged with what you're doing. TED: Yeah. And it sort of sits in the back of my head. I see the little green light for the video and it's like, "Okay." And then I'll still do some things like, "All right, I'm going to try this one thing and I promise this is the last thing." And then it's like, "No, no, I'm so close. Let me try one more thing." And then it's like, "All right, no, no, that's it. We're done. Let's move on." JOHN: You're talking about showing confusion and frustration and not knowing things. I think it's particularly important for those of us who have a lot of experience in the industry, and particularly those of us who have a lot of privilege in the industry, to demonstrate that and show people that, as a way of making the entire industry more friendly and more welcoming, so that people can really see what it's like. Because we're the people that can afford to show mistakes without ruining our careers. And so, I like that you're doing that. TED: Look, I'm an old white guy. I am totally privileged. I had an Apple II Plus and a TRS-80 when I was a kid, and that's when I learned how to code. I am completely aware of that. And that definitely feeds into, how can I normalize the fact that I don't know what I'm doing? That meme of a dog sitting at a typewriter just came into my head: I don't know what I'm doing, I'm just clacking away. So, trying to show that this is okay, but also, here are some techniques that I've learned to get myself out of it. And so, I have this little tagline of "tiny validations" that I use a lot.
It's like, okay, this is too big. I don't know what's going on. There are too many moving parts. And again, this brings in cognitive load theory. It's like, "Okay, we can only keep track of three or four things. How can I reduce it to three or four things?" So I show that I get lost, but also that these techniques to refocus are not hard. Because I see, when I teach folks, they get lost and then they don't know where to go. They don't know how to get themselves out of it. And reducing the space, making things smaller, making things tinier, is hard because it's a little bit of extra work. I know yesterday I avoided that. I knew I should have done it. It's like, "Let me make a little project to test this thing out, because I don't want to put it into my bigger project. Because when it doesn't work, it's going to be harder to figure out." But it's like, "No, that's extra work. I don't want to do that." So I didn't do it, and then it didn't work, and now I've got to go unravel a bigger problem. So I also show that I don't follow my own advice. So hopefully people take something from that. [Chuckles] JOHN: I think those techniques are really useful to communicate as well, because then it's about, "Oh, I just apply this technique to get out of that situation," rather than, "I'm in this situation because I'm a bad developer." TED: I offer lots of advice, and sometimes I take it. And I'm showing that even though we may know the correct way to go, we don't always go that way, because we're human. Because that's more work than I feel like doing; maybe I'll be lucky and it'll just work. And I think we developers do that a lot. It's like, I know maybe I should do it this way, in this small piece over here, but that's going to be an extra three hours. So let me take a shortcut, because it'll probably work. And maybe it sort of does, or maybe it doesn't. I know I don't learn very well, but I really should do that. 
So that's sort of the discipline -- showing that even though I have this thing, live coding, that reinforces my discipline, even there I still slip, because I get tired. I was streaming for five hours, which is way long for me. And I was aware of that. It's like, "All right, I'm going to just try this, and I know this is probably not the right way to go." And yup, I was right, it was the wrong way to go. So at least that prediction was good. ARTY: So, we were talking earlier about all this cognitive science related research around learning theories and how you've applied these things in the context of software development training. And in your bio, you mentioned this TDD game that you developed. I'm curious if you could draw some links between some of the research that you've done and how you've applied it in this teaching technique. TED: Yeah, so this came out of when I teach TDD. If you ask most people what TDD is, it's red, green, refactor. That's very simplistic. And when you actually do it -- and doing the live coding has shown me what my actual steps are -- I found that there are actually a dozen different steps, and there are two parts that aren't usually called out, although James Shore talks about them too, so I feel validated. It's like, "Well, if James Shore talked about it, then it must be right," because he's well known in that community. And so, there were two parts that came out. One was the thinking part, which is: what is this thing going to do, and how do I know it does it? And then there's the predictive aspect: if this test is going to fail, is it going to fail in the expected, correct way? So there's a bit of a validation there. Because a failing test is actually good. It matches. And so this brought in: we have mental representations, we have these mental models, and we validate these mental models by running tests. 
So if I predict this is going to fail because this thing's going to be empty and it's going to throw an index out of bounds exception, and it does, then you know that at least for that small piece, your mental representation matches the actual code. If it doesn't, that's a surprise, and now you need to take a step back and say, what happened? So there are all these steps, these 12 steps -- I don't know if it's exactly 12, but something like that. And one of the things I was trying to figure out was: how do I teach this so that people really get the point that it's not about the test failing, it's about the prediction that the test fails or passes? I'm here to tell you, it's not about the test so much as validating your mental model of the code. How do I get that home? How do I drive that point home? And there's a lot of research that talks about retrieval practice, this idea that we remember things by attempting to recall them. One common misconception about studying is that if I read this chapter over and over again, the information will sort of sink in. It doesn't work that way. In fact, as far as social science research goes, it's one of the most established findings that rereading something does not work nearly as well as testing yourself. So there's this whole idea of testing as part of learning, not as assessing learning. Most of the time, we think about tests as: how well do you know this stuff? But actually, the tests can help form that knowledge. And so -- the technical terms -- there are formative and summative assessments. The formative is: I am giving you this test, and I don't care whether you pass or not. Taking this test will help solidify this information in your head. The analogy I use is strengthening a muscle: every time you have to recall it, especially if it's a little hard to recall, it gets stronger. 
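[Editor's note: the prediction step Ted describes can be sketched in Java. This is a minimal, invented example -- the class and method names are hypothetical, not from Ted's game or materials. The point is stating, before the run, exactly how the code should fail on an empty collection.]

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical class under test: no emptiness guard has been written yet.
class ScoreBoard {
    private final List<Integer> scores = new ArrayList<>();

    void add(int score) {
        scores.add(score);
    }

    int latest() {
        // With no scores, this computes scores.get(-1) -- we predict an
        // IndexOutOfBoundsException before ever running it.
        return scores.get(scores.size() - 1);
    }
}

public class PredictionDemo {
    public static void main(String[] args) {
        ScoreBoard board = new ScoreBoard();
        // Prediction: calling latest() on an empty board fails, and fails
        // specifically with IndexOutOfBoundsException, not some other error.
        try {
            board.latest();
            System.out.println("prediction wrong: no exception thrown");
        } catch (IndexOutOfBoundsException e) {
            System.out.println("prediction confirmed: "
                    + e.getClass().getSimpleName());
        }
    }
}
```

If the catch block never runs, or a different exception type appears, the mental model -- not just the code -- needs repair, which is exactly the validation Ted is describing.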
So there's this whole idea of the forgetting curve. Ebbinghaus, back in the late 1800's, talked about how information fades over time. But if you catch it before it's completely forgotten, you've actually strengthened it. And if you continue to do that -- this is what's called spaced repetition, and there's a flashcard program called Anki that does this for you. If you're trying to learn something -- and I was doing this when I was helping my son study music -- I would show him a musical note or something. And if he got it quickly, then I put that in a pile way over there: we won't touch that for a week or two. If he was struggling with it, I'd put it at the bottom, and he's going to see it again in five minutes. And so as things become more fluent, you basically start putting them later. And when you then have to recall it, the strength of that recall is much bigger. So Bjork, who is a researcher at UCLA, writes about desirable difficulties -- this difficulty of retrieving. You're struggling a bit. It's like, "What is this thing? I know it's hidden." And you get the answer. You will now recall that much more easily in the future. So I was trying to say, how can I apply this very well supported research in teaching TDD? I didn't have the initial idea of making a full game. I just said, "Okay, I'll start with the idea of a flashcard." I can have flashcards, and they have to put the steps in the right order, and I have to include all the steps. And I thought, "Well, that's kind of boring. Can I make something out of it?" And it turned out that when I was putting it together -- and I was working with Willem Larsen and Anna Larson at this retreat on coming up with ideas and figuring things out -- there are so many things, it's so easy for this stuff to grow out of control. So I was really focusing on: what do I want them to learn? And so I basically turned it into a game and really focused on what it is I want them to learn. 
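[Editor's note: the pile scheduling Ted describes is essentially a Leitner-style system, similar in spirit to what Anki automates. A minimal Java sketch follows; the box count and intervals are invented for illustration, not taken from Anki or any real tool.]

```java
import java.util.List;

// One flashcard moving between "piles" (boxes). Easy recalls push the card
// to a later box with a longer interval; a struggle sends it back to box 0.
class LeitnerCard {
    final String prompt;
    int box = 0; // box 0 = struggling: see it again within minutes

    LeitnerCard(String prompt) {
        this.prompt = prompt;
    }

    // Called after each review attempt.
    void review(boolean recalledEasily) {
        box = recalledEasily ? Math.min(box + 1, 4) : 0;
    }

    // Rough review interval per box: same session, then 1, 3, 7, 14 days.
    int intervalDays() {
        return List.of(0, 1, 3, 7, 14).get(box);
    }
}

public class SpacedRepetitionDemo {
    public static void main(String[] args) {
        LeitnerCard card = new LeitnerCard("quarter note");
        card.review(true);  // got it quickly: push the next review out
        card.review(true);  // fluent again: push it further out
        card.review(false); // struggled: back to the five-minute pile
        System.out.println(card.prompt + " -> box " + card.box
                + ", next review in " + card.intervalDays() + " days");
    }
}
```

The key property is the one Ted exploits with the note cards: each successful, slightly effortful retrieval earns a longer gap, and a miss resets the card to the short loop.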
I had my first play test yesterday with somebody I hadn't worked on the game with. And the first thing he said was, "Predicting -- I really got that." And I'm like, "Thank God," because that was the main point of the game. The order of things is useful and important, and all the steps are useful and important, but really building that muscle -- and this is something I got from Willem -- you want to build that muscle for the things that may be harder to do in the real situation. So doing a game or something like that can emphasize things, or call things out, or put things in a faster loop than you could in the real world. And so all this stuff about how many things you can keep track of -- that's in my head for how complex I want to make the game. And I had to make simplifications where, "Oh, this doesn't quite match the real world," but it's better for the game. And then there's using the idea of testing. I mean, that's what play testing is. Let me test it: did I get the outcome I expected? So one of my major outcomes -- it really nailed that prediction point. That's great. What about the other stuff? Did it make it clear that writing better code makes it more likely that you'll progress faster? Can I emphasize that or de-emphasize that? I don't think I would have been able to create this game a year or two ago, or it wouldn't have been as good, because now I have a lot more of this knowledge. And I learned this also from Jerry Weinberg, back in the Problem Solving Leadership workshop: you want to make things as simple as possible, because people will make them more complex. You don't need to make it complex; people will make it complex all by themselves. And that's always in the back of my head as well. But now I can have a richness of what that means and why that's true. Now I have a lot of the cognitive and educational research to give me more nuance on that and why it works. 
But it all comes from the retrieval practice and the spaced practice, putting all these things together, and then tying it back to: I didn't discover these steps until I really took a step back and said, "How am I actually doing it?" And this is something that Rebecca Wirfs-Brock talks about with heuristics -- collecting heuristics, which she's been talking about recently -- going back and mining: what are my principles? What are my heuristics? What are the things I'm thinking about? Oftentimes, especially as we build expertise, we build these, but we don't take a step back and ask, "What are those things?" so that we can tell others. And I think that's really, really important, and I would love it if more people did that. Because I think there's so much knowledge locked away in individual people's heads that doesn't get shared, not because they don't want to, but because they don't know what it is. They're not aware enough of it. Because why would you be? You build expertise, and now you do things better. But being able to take a step back and say, "What am I doing?" -- and again, this ties into patterns. For me, it's all connected. That was the purpose of the patterns: to take the stuff that's in our head, where we're not quite sure why we did this one versus that one, and package it in a way that other people can build and construct their own knowledge from it. And I feel like the patterns movement has kind of died, for the most part, because people felt like there was nothing left to discover, maybe, or -- I know Martin Fowler calls it semantic diffusion -- "patterns" now means the 23 original Gang of Four patterns and nothing else. No, there are actually hundreds of other patterns. In fact, there's a whole area of learning patterns that have been identified, and I would love to focus more on the deeper understanding of those patterns. 
I feel like they've been watered down to being not very useful. But that was the whole point of the patterns: to put down those decisions, those trade-offs, that knowledge that people have in their heads, and put it down somewhere we could share it. And I would love to see more people writing patterns. They don't have to be these high-end, sophisticated, fancy language things. They can just be: here are the trade-offs, put them in a framework, and talk about them. And so, I've actually been toying around with: what are the patterns of TDD? And I know Kent Beck has written some stuff about principles of tests, and J.B. Rainsberger and some other folks. And it's like, why isn't there more writing about that? I feel like we're stuck at a very intro stage for a lot of these things. Every time I see somebody give a TDD talk, it's the same thing, the same basic things over and over again. It's like, can we go deeper? We need to go deeper. ARTY: And I think that's the depth that you really bring with the focus on cognitive learning cycle research. So, when I hear you talk about how you designed this TDD training, you have this paradigm shift of thinking about the actions that we're doing -- what are the red, green, refactor motions -- versus: let's look at the cycles of our cognitive process, of our thinking process, and the intention behind all these actions that we're taking. And then by recognizing your own processes, recognizing your own thinking and decision making and the steps of these cognitive loops, you're able to start from a place of, "Okay, what's one small thing that I really want to get across in this training," instead of thinking about this red, green, refactor doing. It's not about whether the test is red or green; it's about our mental model and the predictions we're making, and setting up an experiment to validate our mental model. And what does that thinking cycle look like? 
So, if you can take that one idea and get it across, you're talking about a fundamental anchoring of a paradigm of how we think about the cycle. And that is, as you say, the deeper side of what TDD is -- the space where patterns are difficult to see, because it's all in this sort of meta, fuzzy space. And you talk about coming up with new patterns. I think these two things go together. One is to make a conscious effort to shift to this paradigm of looking at these human cycles, which I call this ideal flow process. I was very much focused on the same sort of thing in my book: these cycles of confusion, where we go from our [inaudible] moment to extracting the learning out of that, focusing on these human cycle processes, these evaluation processes, getting back to OODA loops and some of these core elements of research around feedback. How do we make decisions? I mean, I think there's a rich gold mine of learning there, but it's dependent on this paradigm shift as well. And the challenge, though, is we've got all this research about how we can learn better and how we can do all these things, but how do we bridge that to the real world? How do we bridge that to our everyday stuff? And I think you're on the forefront of finding ways to take all this abstract stuff and bring it into practical applications. So, that's really cool. TED: And that's why I continue to try to give this talk on human learning at technical conferences -- I've given it five or six times, and I'll give it a couple more times this year. Because yes, if you look at the research and you try to read the cognitive load theory [inaudible] -- you have to know how to read research, and not everybody should have to do that. But there are now, especially over the past three or maybe five years, a lot of books that have already distilled it down, and I'm just pointing people at them. 
But I'm also further distilling it down so you can get a taste for it. And I always tell them, "If you learn nothing else: good examples." Because we learn from the concrete. We don't learn from the abstract. We learn from the concrete. Metaphors are relating two concrete things that we already know. I was just re-watching the Star Trek: The Next Generation episode "Darmok," and if you don't know what that metaphor means, it's literally gibberish -- what the heck is a Darmok? You have nothing to relate it to. And this is the fundamental thing about teaching: you have to find out what people already know. And this is what I rant about in my presentation: stop teaching people what they already know, and find out what they don't know. And I feel like, for all the wonderful power of the web and the AI machine learning kind of stuff, we're still stuck at basically textbooks on a screen. Can we please progress a little bit past that? Before you go into an online course, can we ask you what you know, figure that out, and then adjust? To me, this was the whole promise of personalized education, and we've gotten nowhere with it. And it frustrates the heck out of me. I am so angry about that, because we spend so much money and time on machine learning in other places where, really, who cares? How does that change anything? And we're focused on places in education where it's just not really helpful. So, use the information about what we know and what we don't know, and then correct. Again, going back to TDD: it's exactly the scientific method. We have a hypothesis, we run the test, which is the experiment, and we see if the results match our hypothesis. And if not, then we need to change our hypothesis. And we are constantly doing this. But to me, this makes TDD much more about the thinking, because one of the things I find is -- and these are the first two steps in my cycle -- what do you want it to do? 
And are you clear enough about that that you could write a test for it? I don't care if you actually run that test. I don't care if that test ever gets finished. But do you understand it well enough that you can write a test for it? Which will make you think: how would I write a test for this? I'm doing something to the system -- how can I observe that it changed? I can't? That's a bigger problem. Maybe you need to go and do some stuff before you can write this test, or maybe you're not clear about it. Whereas writing code, it's very easy to be unclear and a bit ambiguous about what you're trying to do, and just write code with lots of decision statements and so on, and not be clear about what you're trying to do. And so to me, TDD is much more about the clarity of what it is you're trying to do, and it fits with all the other stuff around it -- example-driven, specification-driven development, all these kinds of things. They're always trying to make this concrete. "Give me an example." That's often my response to a lot of things: "Can you give me an example of that?" Because again, we go back to the concrete. We build up abstractions only after we've seen enough concrete examples. And so, basically, the two ideas that I try to focus on in TDD are the prediction part, but also that clarity of thought. And we do this when we have to write something down. It's got to be clear; otherwise that writing won't make any sense. And so to me, the two things about TDD are that clarity of thinking -- what do I want this thing to do, and how will I know it does it? -- and then basically the scientific method, plus validating that my mental model matches. And so red, green, that stuff is surface level. The depth that you mentioned -- it's the depth that's important. And I feel in general we don't go into enough depth, because there's just so much to talk about. But I think we have to. 
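[Editor's note: the clarity test Ted describes -- "I'm doing something to the system; how can I observe that it changed?" -- can be made concrete with a small invented Java example. The `Cart` class and its methods are hypothetical; the point is that the assertion at the bottom forces the behavior to be stated as something observable before implementation details are settled.]

```java
// Hypothetical system under discussion: a shopping cart where applying a
// discount must be observable through the total.
class Cart {
    private int totalCents;

    void addItem(int priceCents) {
        totalCents += priceCents;
    }

    // The discount's effect is observable via total(), so the behavior
    // below can be pinned down by a test written before the implementation.
    void applyDiscountPercent(int percent) {
        totalCents -= totalCents * percent / 100;
    }

    int total() {
        return totalCents;
    }
}

public class TestFirstDemo {
    public static void main(String[] args) {
        Cart cart = new Cart();
        cart.addItem(1000);
        cart.applyDiscountPercent(10);
        // The "test" written first answers both of Ted's questions:
        // what do I want it to do (10% off 1000 cents leaves 900),
        // and how will I know it does it (total() exposes the change).
        System.out.println(cart.total() == 900 ? "passes" : "fails");
    }
}
```

If there were no `total()` method, the assertion could not be written at all, which is the "I can't observe it -- that's a bigger problem" situation Ted calls out.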
People often ask me like, "I know stuff in Java, how do I get to the next level?" We need to work on your thinking, your thought process. Not that you're a bad thinker, but we need to figure out what do you know, what do you not know and how do you think about it? And how clear is that thinking? ARTY: At this point in the show, we usually wrap up with reflections. Any final takeaway or thought. There's so much good stuff in this episode. So I'll let you go first, Jamey. JAMEY: Sure. I've been thinking about what Ted said about predictions in TDD because that was kind of something, I guess I was -- I think I kind of already understood that. I was already familiar with doing TDD and this idea that my tests have to match what I think they're going to match, but I hadn't really used that language of calling them predictions in that way. And I think that that's really helpful. It's happened to me a few times recently that having more succinct language for things that you already have in your head strengthens understanding a lot. And it gives you that clarity of thought that you are talking about. And also as a side effect, one thing I really like to do is explain to non-programmers in my life the kind of things that I do when I'm programming. And this is a good example of one because my partner doesn't know much about tech and he'll see me working on my tests and he'll be like, "But they're failing. Isn't that bad?" And I'm like, "No, it's fine." And then later my test will be passing when I expected them to fail and I get real mad about that because that's very stressful. And he's like, "But they're passing, how is that worse?" I'm like, "It just is." And so, I think that having that kind of language about predictions makes these kinds of concepts possible to explain to people that don't have that background in a way that I think is really cool and interesting. TED: Yeah, absolutely. And I get really precise about my predictions. 
When I'm teaching, what I'm doing is like, "This is going to fail. Not only is this going to fail, it's going to fail with a NullPointerException. And not only is it going to fail with a NullPointerException, it's going to be on this line, because this happened." And being able to be that precise, I think, is -- you know, you can start out with "it's going to fail" and expect it to fail. But I've had it where it fails, but it failed for the wrong reason. And I'm like, "Huh, what's going on here?" And either it's, "Oh, I forgot about this thing," or, "Oh wow, I don't understand this at all." [Chuckles] JAMEY: That's another thing I like to explain to non-programmers. I'll be working, frustrated in my code, and then it'll fail and I'll be really excited, and my partner will be like, "Why? It's still not working." I'm like, "But it's not working in a different way than it wasn't working earlier." [Laughs] ARTY: I've been thinking a lot about this concept of strengthening a muscle, and how, when we're in the situation, with all the stress of the situation, it's difficult to do the right thing. But if, ahead of time, we're actively practicing in the moments where we've got time to do the right thing, practicing this muscle, practicing our decision making skills, then when we're in a higher stress situation, with all this pressure and urgency, we can kick back and go, "Okay, I've got strong thinking muscles here. How do we take a step back and tackle this problem of confusion with a bit of strategy? How do we break this down and run experiments to validate our hypotheses and work our way toward some sort of answers to our confusion, some sort of repair to our mental model?" And if we can work deliberately on strengthening these muscles, then in the moments of our everyday work, we can improve the quality of our day-to-day decisions. That's really powerful. 
I mean, recognizing that and starting to work on it makes me realize that up-leveling from the kind of TDD focus on red, green, refactor and all the doing, to this paradigm of thinking about scientific experiments -- and this wordless space of coming up with patterns -- is all part of this muscle that we need to learn how to flex. We need stuff set up that helps us learn how to flex these muscles as an industry, so that we can really up our game. I have a whole lot of ideas related to that, but I was very grateful for just so much good stuff in this episode. So, thank you a lot, Ted. TED: For me, the more I talk about this with folks and see reactions like this, the more I interact and the more it gets connected. And I think that's an aspect of expertise: you start connecting more and more things, and the thinking process, and the prediction, being the important part. And I think we tend to sit at the surface because that's the easiest stuff to do: I'll write this test, it passes or fails, and I do the next thing. But the thinking about the why -- we have to find a place for that, and we have to get better at that. And I love how you said, this is practice, this is building a muscle. It really is. And the problem is that the muscle can atrophy pretty quickly. When I took a break from coding to work on some other projects for a few weeks, I came back, was doing some coding, and I'd forgotten fundamental, basic stuff. And I'm like, "Wow, we forget some of this stuff really quickly." It also comes back quickly if we were once expert at it. But doing that practice and having the space for that -- like you were saying, it's doing that and building that muscle. And the language I use is fluency. Are we fluent in it? Can we do it without much effort? I used to struggle to run a mile. Now I can run a mile without thinking about it. Have we gotten to that point, so we can now focus on other things? 
But it takes that deliberate practice -- that's the term of art -- you have to do it, but you have to practice it in the right way. I used to hate metaphors: why do we need this metaphor, it's not very useful. But talking about it today, I see that a metaphor is just a form of a concrete example. And it's really weird, because you don't think of metaphors as concrete, but that's sort of what they are. And so, that's really interesting. ARTY: So, we need something to attach it to. As you said, find out the existing state of where someone is at. And if, for example, someone doesn't know anything about tech, metaphors become a concrete reference that we can relate to: "Oh well, it's kind of like this." JAMEY: I have a metaphor for that metaphor. Are you ready? I do mini painting for D&D and war games, and you have to prime your little models before you can paint, or the paint won't stick to them. And when you said it needs something to stick to: you need to prime your brain so the thoughts will stick to it. TED: That's awesome. I love that metaphor. That's great. Because I use a more boring example, but I'm going to use that. I'm going to steal that. JAMEY: Cool, you can have it. TED: Can I steal that? I will steal it. JAMEY: I can give it to you. TED: All right. I will happily take it. ARTY: All right, so are we ready to wrap up this episode? TED: Yeah. ARTY: Thank you so much, Ted. It was really wonderful talking with you, and that's another great episode of Greater Than Code. TED: Well, thank you so much. As you can tell, I love talking about this, and talking about it with you folks has been great, so thank you so much. JAMEY: Thanks for coming on. It was a really good conversation. And if any of our listeners want to continue conversations like these on our Slack community: if you pledge to us on Patreon at any level, you'll get an invite to our Slack community, and we can all keep talking about this content. TED: Find me there.