NOEL: Hello and welcome to Episode 68 of the Tech Done Right Podcast, Table XI's podcast about building better software, careers, companies, and communities. I'm Noel Rappin. We're running a brief listener engagement survey. Unlike most of these surveys, ours has nothing to do with advertising. We're just trying to see what kinds of shows and guests you like so that we can do more of them. Please fill out the survey at http://bit.ly/techdonerightsurvey, that's all one word, and we'll send you some Tech Done Right stickers if you'd like, and you'll have a chance to win a deck of our Meeting Inclusion Cards. Thanks. That's http://bit.ly/techdonerightsurvey, all one word. If you like Tech Done Right, keep an eye out for our new podcast, Meetings Done Right. Meetings Done Right is 12 episodes with communication and culture experts all focused on how to improve your meetings using the new Table XI Meeting Inclusion Deck and other tips and techniques from those experts. For more information about the podcast and to learn how to buy a Meeting Deck of your own, go to http://MeetingsDoneRight.co or search for Meetings Done Right wherever you listen to podcasts, and that podcast will be coming soon. Today on the show, I am very excited to have Dave Thomas and Andy Hunt. Dave and Andy are the authors of the book _The Pragmatic Programmer_, which has a 20th anniversary edition that is out now, and they are the publishers of the Pragmatic Bookshelf, where they have, full disclosure, published my own books a time or two. We talk about what's changed in the new version of the book, what it means to be a pragmatic programmer, whether there's still a role for tech books 20 years after the original, and how to make automated testing pragmatic. It's a wide-ranging conversation that I enjoyed quite a bit and I hope you like it. And now, here is the show with Dave and Andy. Dave and Andy, would you like to introduce yourselves to our audience? DAVE: Hi, my name is Dave Thomas. 
NOEL: You know, that is the part I did. [Laughter] Usually, I let the guests do the part where they say like, "This is what you should know." I mean, we can assume that a lot of the people are going to know the names, but usually we let... DAVE: OK. Hi, my name's Dave Thomas. What's your name? And that's how you're supposed to introduce, that's how my mother used to say I should introduce myself. NOEL: Cool. We have already derailed and we are 36 seconds in. I'm very excited. [Laughter] ANDY: Is that a record? This voice belongs to Andy. NOEL: I don't think it's a record, actually. ANDY: Oh, lordy. We have to get better then. We need to improve that Mean Time To Failure. NOEL: So, we have Dave and Andy on because, if you are a developer, they're probably going to be in a lot of your developer podcast feeds over the next couple of weeks or months. Dave and Andy have just released, or are in the process of releasing, the 20th anniversary version of their, I want to say seminal, very important book, _The Pragmatic Programmer_. I have an interesting history with this because _The Pragmatic Programmer_ came out almost exactly a year after I became a professional developer. And so, it has sort of been part of the background of my entire career. First as like, here's this really cool thing that you should look at. And then another wave as being like the received wisdom that everybody just knew about how to do things right. And then it's something where technology changes have made some of the advice less relevant, some of the advice still relevant. What parts specifically made you want to revisit this 20 years on? ANDY: All of it, I think, really. First of all, let me just speak to that first thing you noticed there. This is Andy's voice, by the way. A lot of folks have reported, and I've experienced the same thing. 
When you go back and read this book, it's sort of different every time you read it, because you're in a different place every couple of years when you read it, or however often you read it. So on the one hand, it's really great that you can pick it up as a beginner, as someone new to the field, and you can learn some things, get some insights on some things. But then you can go back and read it after some experience and new parts of it will speak to you. And you can go back to it 10 years later and realize things that you had never picked up on in the first few reads. So on the one hand, it's really great, and I'd love to pat ourselves on the back for that, but I think that was kind of accidental, that it just worked out that way. But yeah, there's different stuff that'll speak to you as you're in a different place on your path. As to what we had to change, in a way we kind of had to change everything, because the world of 20 years ago was very different from today. I literally was just cleaning out a drawer and I found a CD for Lycos, the search engine. They had some kind of promotional CD and their motto was "Get Lycos or Get Lost". [Laughter] ANDY: I thought that was actually kind of ballsy. NOEL: How come we're not all using Lycos today? ANDY: It's sad. I don't know. DAVE: We are the lost, that's why. NOEL: That's exactly right. We all got lost. Actually, just about an hour ago, I had occasion to dig through some code that I wrote in like 2002, and it was a web program, but it's pre-Rails. It's pre almost all of the tools that we use now. A lot of things have changed. DAVE: But a lot of stuff hasn't changed, and that's, I think, the key. And that's also, I think, part of why, as Andy says, the book keeps being relevant. What hasn't changed is the obvious. And that's us. 
We are still the enthusiastic but really not that competent developers that we were back then, and that's how human beings have always been in whatever they're doing. We always make the same mistakes. We always have the same concerns, the same problems. And the way we address them is always going to be the same. And that's not a tool or technology specific thing at all. It's just human nature. NOEL: Yeah. I was going to say that what it feels like hasn't changed is the need for the outlook and the tone of being pragmatic, of being as simple as possible, but no simpler. That outlook, through the increasing complexity of the tool universe that we live in as developers, still feels really, really important and relevant. DAVE: I think it's even slightly deeper than that, because being pragmatic is doing what works, but nobody knows what works. That's the challenge we face every day: we have no idea what to do in a particular circumstance, because that circumstance is the first time anybody in the universe has seen it. One of the cool things, or one of the scariest things, about software is that we are constantly reinventing the entire field. Every project is new. Every project has new circumstances, new technologies, new requirements, whatever. So, when you don't know what to do, being pragmatic is really a question of making sure that you explore as much as you can in order to find out what works, and making sure that you have in place a system of feedback to tell you whether it's working or not. And then secondarily, making sure that the steps you take are reversible, so that when you do screw up, you can always just go back and fix it. And I think that is the underlying advice of the entire book. And almost everything we say in that book is, in one way or another, supporting that kind of underlying mantra. NOEL: All else is commentary. ANDY: I would absolutely agree with that. 
And I think there's an interesting trap that people fall into: our default settings kind of lead us toward a more dogmatic approach. It's like, "This is the way to do it. This is a best practice. This is the way I was taught in school. This is a corporate policy." Whatever it is, "This is our standard methodology", Lord help us. But that sort of very dogmatic approach, that we're going to do these same steps in this same order every single time, is just inappropriate. That often just doesn't work, because you're not paying attention to the feedback. You're not changing your approach based on changing conditions. You're just sort of blindly charging ahead for whatever dogmatic faith you subscribe to. And it slays me, and Dave too, when you see people talk about, like, agile canon or "this is the only way to do it" or "this is the right way to do it". And that's nonsense, because the whole game is constant adaptation. How do you get away with doing the same thing every time? It doesn't work. NOEL: One of the things that you said that I want to go back to: you talked about every software project being different, the technology is different, the requirements are different. Also, the team is different. And I think that the set of people that you have working on the problem is a profound piece of the technology and the process choices that you look at. And one of the things that I think has definitely changed in the way we talk about software development over that 20 year period is that we are much more likely to talk about it as a team sport now than I think we would have when this book first came out. What do you see as a pragmatic approach to a team, to the way a team works? ANDY: We have a whole chapter on that. DAVE: Yeah. I think one of the things that's really critical is to recognize that a team is really nothing more than a voluntary collection of individuals. 
There's no such thing as "Here's a team, now I'm going to put people into it". It's "Here's a group of people, let's see if I can turn them into a team". And that's a very different way of thinking about it. And a lot of the techniques that you use for doing that are the same techniques that you actually use as an individual, which is why the chapter we have on teams kind of echoes the first, whatever it is, seven chapters of the book that are for people as individuals. Teams have to know all the same things that people know. Teams have to realize that they're going to make mistakes, and they're going to have to find ways of determining when they make mistakes, how to fix those mistakes, and how to try to minimize them in the future. All of those kinds of feedback mechanisms that apply to individuals apply to teams. We've got a bit more psychology in the book this time around, and one of the things we stress really strongly is that as a programmer, you have to accept the fact that your job is not to write perfect code. Your job is to eventually get something that meets the requirement, that does what the customer wants. And if there's bugs along the way, then that's not an exception. That's not a weird thing that happened. That's just the way it is. And you don't freak out when you get a bug. You don't panic. You don't blame yourself. You just say, "OK, there's a bug. I'll fix that." In the same way that if you're driving from A to B and there's an accident, you reroute and you go a different path. The same thing applies to teams. And when teams adopt that kind of approach, what it means is the teams stop blaming people for things, stop individualizing problems and mistakes, and instead it becomes a collective ownership thing. So, I think all of those things are really important for teams. 
And I think, like I say, they are all really just extensions of good practices for individuals. ANDY: Except the one thing that's different once you put a bunch of people together in a team: in addition to all of that, you have to have that kind of underlying trust of each other. That's what actually makes a team gel and work, knowing each other's strengths and weaknesses, knowing what people are like, but trusting them to do the right thing, knowing they have your back, knowing you have their back. So, we're very keen on the idea, and the evidence supports this, that you're much better off with relatively small, stable team formations. A team of 50 people isn't a team. That's a horde. [Chuckles] That's not it. Or even a team of six people, if you're pulling these two people out to go put this fire out and pulling these other people out for this fire over there and bringing these other new people in. Every time you change even one member of a team, it's a new team, because you've changed the dynamic, you've changed the balance, you've changed the relationships between the people. So, anytime you make a change like that, it's basically a whole new team and you kind of have to start over at building relationships, building trust, knowing how to work with each other. So, we very strongly advocate: don't do that. Keep it relatively small, under eight or 10 people or so. And let them get to know each other, let it be a stable formation. NOEL: The original Extreme Programming book came out almost exactly simultaneous with the first version of the Pragmatic Programmer. And then you all went on to help create the Agile Manifesto. How do you see the overlap, the compare and contrast, between Pragmatic Programmer as an outlook and Agile as an outlook? Are they 85% the same, or... ANDY: They're different. 
It's like the old thing about trying to describe the elephant, where this person says it looks like a tree trunk because all they see is the leg, and this person says it looks like a twig because all they see is the tail. They're all different levels of take on the same problem. So, XP is kind of a different thing. It is a methodology at a technical practices level. That's a different thing from Scrum. I mean, they're kind of held to be, well, they're both agile practices, but they're really not the same. Scrum is a set of lightweight project management practices. It doesn't include the technical level. So, even those are different things. And even though XP is at a, I guess, lower and more technical level, it assumes you're running on a base of other things that people do. You use version control. You do this, you do that, you do the other thing, that they don't explicitly call out, because of course you do these things, or you have this expertise or this experience, before you start using that methodology. And that's a bit more where Pragmatic Programmer comes in. It's that sort of tacit knowledge or implicit experience, this is what you should be doing or how you should view problems, that's not necessarily called out in any of those full blown or other methodologies. DAVE: Yeah. Maybe you could think of XP as a roadmap, and then Pragmatic Programmer as "this is your car". NOEL: The thing that strikes me as similar, especially as I was going back through _The Pragmatic Programmer_, was the emphasis, compared to XP in particular, on taking small steps. That idea of doing one small thing and building on it seemed to me very much the essence of what I originally got out of the XP work, and also a very strong component, "don't get out in front of your headlights" and other things like that, in the Pragmatic book. 
ANDY: Because that's an absolutely critical, sort of foundational-level idea. Again, it's a trap we fall into all the time, doing something big and grand and heroic that's more than we can take on. And now you've delayed whatever feedback you're going to get, you've increased your risk, all these sorts of terrible things start happening to you. Whereas if you'd just taken smaller steps, it would have been fine. DAVE: And you lose the ability to undo, because at some point the sunk cost becomes so high that you're going to follow this road no matter what. Whereas if you're taking little small steps, then all that's required is just to shift your weight and you go back to where you were. For me at least, that's one of the major reasons for doing that: the sheer undoability of it is so much higher. ANDY: That's really the foundation of the "agile" approach. That's actual agility, when you can just change your direction at whim and it's not a big deal. You can undo, you can experiment, you can get feedback. That's what actual lower-case-a agility is all about. It has nothing to do with pair programming or stand up meetings or Kanban boards or whatever. It's the ability to adapt quickly. NOEL: I think that there's sort of a difference between the XP set of practices solving one problem, the Agile Manifesto and Scrum kind of thing solving a different kind of problem, and Pragmatic Programmer solving yet another kind of problem. Pragmatic Programmer obviously is the only one that says you should be using version control. DAVE: Yeah. And careful, we're not solving a problem except at a very abstract level. This is not a recipe book. NOEL: I would dispute that, but I would say you're not solving a development problem. DAVE: Okay. I'll be interested. What problem are we solving? NOEL: I would say, at least in the way that I received this book 20 years ago when I first got it, you were solving for me a career problem. DAVE: Huh! 
NOEL: My career start was a little unusual because I did a full set of graduate work and came into professional development having been a Computer Science graduate student, which was pretty bad preparation for being a developer. It was lovely in many ways. I learned a lot of interesting things, some of which I occasionally still use. But when I got thrown in front of like, here's your Fortune 500 healthcare company and they want to build a website that combines six teams that have never worked together before, "Go", that didn't help me very much. It was in the wake of that project that I received both XP and Pragmatic Programmer. And to me, they solved a problem not in how do I make this loop work or how do I make this webpage render. They solved the problem of, it gave me a vocabulary for talking about the things that I was already thinking about, for communicating the things that I saw in this project that were problems. There were things that we were doing wrong, and it gave me a framework for saying, "This is why what we were doing didn't work," or why it felt so bad. We should really be doing more things like this. We should be working in smaller steps. We should be testing more. We should be using version control. We actually didn't in the first couple of projects I was on in 1999. DAVE: That's a frank admission. [Laughs] NOEL: Well, the statute of limitations has run out. [Laughter] ANDY: There you go. There you go. Perfect. They'll never find me now. DAVE: Yeah, we'll never find the source code either. NOEL: The source code is long gone. The Fortune 500 company, which used to scare all the vendors by talking about how much they could sue them for, I think we're probably okay at this point. Not so okay that I'm mentioning the company, though. [Laughter] But yeah, I do think that the outlook identified in Pragmatic Programmer helped me think about "this is how a professional programmer approaches a problem". 
ANDY: I think that's fair, because when this bit of the conversation started, we were talking about solving a particular problem. And instantly I get this mental image of that's kind of the Stack Overflow approach. Here's this very specific, what flag do I pass to this thing to get this to happen? Or what's the setup sequence of this API call? Or something very narrow and very tech specific: solve this problem. And in the sense that Pragmatic Programmer solves problems, it's a very different notion of what a problem is. Or it's a much higher level concept. So, it's not specific like that. It's, as you say, what is a better philosophy to take? What's a better way to think about how to approach this issue or this set of other problems? And you have to remember, this grew out of Dave and me being out there in the trenches, to use the cliche, out in the world, seeing clients who were all doing the same things poorly in the same fashion, or making the same mistakes, if you will, pretty consistently. This company was doing this in a bad way. This other company was doing this same thing in the same, less optimal way. So, our first inclination was we were just going to write a little white paper to help get folks on the same page as us, because we would go into a new client and have to explain all this. I'd have to give them the same little stories, the same little "have you tried holding it this way, it works better" sort of level of advice. DAVE: Have you tried testing it before you ship it? ANDY: Little stuff like that, yeah. Have you tried version control? Have you tried committing once in a while? Users, has anyone ever talked to a user to find out? No? Okay, maybe let's start with that. And to us, it wasn't rocket science. This wasn't anything revolutionary that had never been done before. This was stuff everyone knew about but didn't really do, didn't actually do. 
And yeah, I think a lot of the value that Pragmatic Programmer brought was giving it a vocabulary. It was putting into words some of these phenomena or some of these ideas that were, again, that kind of tacit knowledge floating around the edges, that people knew about but didn't necessarily know how to talk about or what to call. Or, was anyone else experiencing this? Maybe it's just me. So, we really gave voice to that. Remarkably, that did help the discussion. And we find, 20 years later, people use the words we invented, the principles. The stuff that we made up to label these things is now part of the conversation. And that's pretty cool. NOEL: In 1999, I was working for a boss who didn't want us to use Python because there was nobody to sue if it didn't work. [Laughter] So, we had a long way... ANDY: We've come a long way. NOEL: We had a long way to go. One of the things that's interesting about revisiting it, and one of the things that I noticed as I was reading it again, is not just the things that I didn't appreciate when I first read it, but the stuff that I dismissed when I first read it and then came back to 10 or 15 or 20 years later and thought, "Oh, that was a lot smarter than I gave it credit for when I was a youth." So, things like: I was very IDE-dependent as a young developer, and probably still am more so than a lot of other expert developers, but learning to use command line tools, text manipulation, those kinds of things that I kind of shook off the value of early on, and then was able to come back to after having a little more experience and think, "Oh yeah, actually I have started to do these things and they're really helpful." DAVE: I'm really glad you said that, because when we were originally writing the book, we took basically two years off billing and spent full time writing a book. 
And one of the reasons it took so long is that it took us a long time to find a voice, because we knew we didn't want to be prescriptive. We didn't want to sit there and say, "Do this, do that." But we also didn't want to be the kind of wishy-washy, everything is good, do it your own way, whatever. So we had to find a kind of slightly, I don't know what the word is. NOEL: Pragmatic. DAVE: Well, no, we tried to find a tone where we weren't being judgmental, but at the same time saying, "OK, this is really a bad idea," or, "This is a good idea." And one of the nice side effects of doing that is that when you're reading it, the stuff that keys with your experience resonates with you. But the rest of it is not mumbo jumbo, rocket-sciency kind of stuff. It's just like, "Yeah, okay. I can see what they're saying, but it doesn't apply to me," kind of thing. And I think one of the reasons for the success of the book is that it does have those multiple levels in it, but at the same time they're not like, "Stop reading now unless you've got seven years' experience." You just pick it up. And that ties in with this thing that we've been using now since probably before the Pragmatic Programmer, which is this model of knowledge acquisition called the Dreyfus model, where you talk about how people acquire tacit knowledge. And one of the things the book does, I think, is play to that model. So, if you're very much a beginner, it gives you more directive stuff. Or if you're more advanced, then really it's collegial. We'll have little in-jokes that only work if you've been doing it for a while. I don't know, it's nice to hear you saying that you pick up extra stuff as you read it through, because that really is a very nice side effect of what we did. NOEL: It's a little bit like the way Sesame Street would always put jokes in for the parents, right? DAVE: Yeah. 
NOEL: One thing I wanted to talk to you about specifically, because of my own experience with you both as authors and as publishers, is: what do you see as the role of technical knowledge bound in a bunch of dead trees, in a book? Do you feel like there's still a role for technical books the way that there was 20 years ago? How do you think the way in which developers learn has changed over that time? ANDY: We'll both probably have different answers to this, but I personally think there's absolutely still a role for long form things that you're studying that are actually in paper format. I'm not really good at marking up on my Kindle or my eReader. That's really not for me. That's not a good format to study something from. If I'm reading a linear piece of fiction or an article or something, that's perfectly fine and I enjoy that. But if it's something that I'm really poring over, something I'm studying, something that I'm working on, I want to be able to highlight and dog-ear pages. I want that kind of spatial memory of, "I remember seeing that thing on the lower left corner of a page toward the front of the book." These sorts of road marks, I think, are really important for something that you're working on and studying and trying to learn. Now, not everything's like that. There's a great deal of knowledge that we didn't have access to in the old days that you can just search on the net for. You can find it on Stack Overflow or the vendor support bulletin board or the Google Group, whatever it might be. So, there are different levels for different things. Some stuff, it's better to watch someone do it in action, and you want to see a video clip of how do you click and drag and do this. Some kind of very visual IDE is really hard to describe in flat text. You really need to see the animated version of that. So yeah, there's different modalities for different things, but I very strongly think that the paper book that you can work with is still a big part of that. 
DAVE: I don't know if the delivery mechanism is the distinguishing factor, paper versus photons. But for me, I think the distinguishing importance of something like a book is not that it's on paper, but that it's curated, that the process of putting it together was difficult. It took the author many months of really quite hard work to organize their thinking. They worked with a team of people that helped them express their thinking, to try to make it as clear as possible, the voice as approachable as possible. And all of those things contribute to something which has a measure of authority, and also something that's actually easier to read as a whole than 1,001 Stack Overflow posts. And quite often with the technologies we're using, looking at a consistent opinion of the overall thing is really important. I mean, if you're sitting there and you're trying to use some latest JavaScript framework and you're just trying to pick up how to work with it from pieces of information all over the web, then you're going to find so many different opinions that seem to contradict each other. It's going to be very hard to make progress. And as a beginner, you really don't care about people's opinions. You just want to know what to do. And that's where a long form, curated piece of information is really important, because someone will say, "OK, here's my approach to building these apps. And then once you get good at this, then feel free to disagree. But for the meantime, this is how you become productive." NOEL: Since I'm currently writing a long form book for the Pragmatic Bookshelf on the latest JavaScript framework, I certainly hope that's all true. One of the things that they say sometimes, or used to say, in the news business was that one of the effects of the Internet was that the newspapers had become like the older weekly magazines. 
The weekly magazines had to become like nonfiction books. Since you could get the information elsewhere, there was more of an emphasis on analysis in the longer form things than there might have been otherwise. And I actually think that that applies here, too. The thing that you can get from a longer form book, whether it's on paper or whether it's delivered some other way, is that level of not just curation but also placing it in context, putting it in a larger perspective. At least, I hope that that's true. ANDY: I would agree with that, because, as a rule, we generally don't publish reference books, for example. What would be the point, when that's better served by just looking it up live online? So, there really is not much value to add there. You want that expert opinion, you want that consistent view, as Dave mentioned, you want that consistent opinion of, "I've done this. I've been in the trenches. I've worked with this technology. Here's my insights. Here's what I've discovered. Here's how you can do this too." We tell our authors we like that tone of someone sitting across the lunch table from you saying, "Hey, I found this really great new thing and I'm passionate about it and let me explain it to you. It works like this, and try it this way." That's the kind of environment and kind of attitude that we try to promote, because that's where this is useful. A reference book is outdated as soon as it hits the paper, so what's the point? But the expertise that someone has built up over a year, or several years, depending on the technology, that's really the value. NOEL: Dave, you talked about something that you need to do as a beginner, but then you no longer need to do as an expert. And you said something along that line a couple minutes ago and I'm blanking on exactly what. 
But it did remind me that I wanted to ask you about the section in the new version of the Pragmatic Programmer specifically about testing, where you talk a little bit about how you went through a phase of not writing tests and how you came to that. It's related to some discussions I've been having online recently, again, about what you see as the pragmatic value of doing automated testing. DAVE: This is where you get me in trouble with Andy, because he was of the opinion that that should not be in the book. He didn't want to lead people astray. So, let me give you the background. Part of the way I work is that once I get comfortable with something, then I try to stop using it, not religiously. But what I don't want to do is get stuck in a rut. And I don't want to get into this position where I believe something almost religiously simply because it's what I've been doing for so long. And testing is one of those things that people get seriously religious about, disgustingly so. There are people out there who are like, "Show me the tests. If you don't show me the tests, I'm not going to look at your software." I went to a conference once and someone was talking about Cucumber, and they were writing a code generator that would generate the... I can't recall what it's called now. The recognizers. Yeah, but it was worse than that. They were actually using some kind of parser generator. And what they were doing is they were writing code that was writing code that was three levels removed from the actual code they were shipping. And they were proud of it. So anyway, testing is one of those things where I thought, I keep telling people it's good to test, but really I've never actually done the experiment of not testing. So, I decided not to test for a period and to see what happened. And the results were actually really surprising. 
First of all, you've got to understand why I typically test, or where I find value in tests, and that is that testing for me is not for finding bugs. It's two things. It's helping me design in the first place, because that test becomes the first user of my code. And therefore, it tends to reveal stupidities in the interfaces, and missing data that I can't provide, all these kinds of things. So, testing is really good for design, and then testing acts as a regression safety net for when you're making changes in the future. And honestly, I never used testing for the short-term, unit-level "does this work?" checking. So, I stopped testing and I kept an eye on bug rates in my code, and it was just as buggy as it always was. It didn't get worse. My productivity actually went up a little bit. And I was looking at my designs, and this is totally subjective, but I would look back at code three, four months on and I couldn't see any difference at all. I was still splitting it the way I would have split it if I had tests. So I was like, "Hey, this is pretty cool. I don't actually need testing, because my brain has gotten so good at thinking about testing that that's all I have to do." Having done the requisite 10,000 hours during my programming career, it's now so ingrained that I don't actually have to write the tests to get the benefit of testing. I have modified that position slightly, because I have since done two things that revealed the problems. One is working with a couple of other people on a project, where testing takes on a slightly different role: it acts almost like an interface check between the different people who are working on the code. And secondly, it definitely has a lot of strength as a regression barrier. You put tests in place to make sure that changes made by people who may not be so familiar with the code don't break it.
And so, I'm relaxing that stance a little bit, but I really document it more as an example of nothing being sacred. If you're truly being pragmatic, if you're truly being agile, then there is no best practice. There is nothing that you have to do. What you should do instead is always experiment to see if what you're doing is a good way of doing it. And one way of experimenting is to stop doing it and see if things get worse. ANDY: Let me just speak for a few minutes on my thoughts on this. First of all, I agree with everything Dave just said. Absolutely, you should experiment. You shouldn't really take anything as a "best practice". We kind of hate that term because it's devoid of context. Best practice for whom? Under what circumstances? Very few things are universal like that. But the issue I have is that if you go on record saying, "Well, I don't test, for this reason or that," then, looking back to the Dreyfus model, beginners will have a tendency to take that advice badly. There's a classic story we used to tell about something that actually happened to us early on, on one project. A programmer who was somewhat new to the field had just read the original Gang of Four patterns book and was all fired up about it and thought it was just the most wonderful thing in the world. And so, he was writing this little insignificant piece of reporting code, and he jammed something like 19 of the 23 design patterns into this poor hapless piece of code, because it was shiny and cool, and that's what you do. NOEL: We all kind of did that, yeah. ANDY: Yeah. To me, that was always a really telling example: here is a level of advice, design patterns, that was aimed at a more experienced practitioner who would understand the context of when this is a good thing to use and when it is not, and who would have that appreciation and the experience of when to apply it.
You get a beginner who doesn't have that experience, and they will just take and misapply that advice. It's kind of like when you yell at the dog. You yell at the dog and say, "Don't eat the furniture." And all the dog hears is "eat the furniture". DAVE: Oh, no. I think the other thing there is that you're right. A beginner will look at design patterns and see a cookbook, whereas a more experienced person will look at design patterns and see a glossary. ANDY: And then the expert looks at design patterns and says, "Well, this was really just to make up for failings in C++ at the time." DAVE: Absolutely. ANDY: And if you use a real language, you wouldn't need any of that. NOEL: I think you could almost make the case that testing is there to address deficiencies in non-type-safe languages. DAVE: Oh, no. It's not. Nothing to do with that. ANDY: [Laughs] DAVE: Nothing to do with that. Type-safe languages are there to make up for the deficiencies in human brains. NOEL: I think I am not totally wrong here. At least some of what Ruby developers do with tests would be covered by a type system in Ruby that made certain things impossible. To clarify slightly: it's not necessarily all of the things that Rubyists do with tests, but some of the things that I do with tests as a Ruby developer are there because Ruby will not stop me from doing various stupid things if I want to. DAVE: But that's not the reason for testing. NOEL: No, no, no. It's not the reason why you do testing, but it is a significant amount of the Ruby tests that I see in the wild, which is perhaps a larger issue about the Ruby testing that I see in the wild. ANDY: So, you're saying more that this is what people are incorrectly using testing for. NOEL: I gave a talk at RubyConf a year and a half ago about low-cost tests versus high-cost tests.
And as part of it, I sort of just did a show-of-hands thing in the audience about how long everybody's test suite ran on the project they were on. And I don't remember the exact number, but a staggeringly high number of the people in the audience, probably at least a third, admitted to working on projects that had test suites that ran over, say, 30 minutes. I don't remember the exact detail, but effectively infinite. And I was kind of shocked by that, as, at least in theory, a good Smalltalk-trained, TDD, tests-should-run-as-quickly-as-possible kind of person. And when I went out to talk to people, what they said was, "Well, we just throw it all to our CI server and we don't really care about test speed anymore." And I found that somewhat unsatisfactory. One of my takeaways was that they did not have a clear theory of why they were testing. DAVE: I would like to run a regression on those organizations and compare it to the end-of-year review metrics that are applied to individuals. Because I would guess that in those organizations, people are told that they have to write so many lines of test per line of regular code, or something like that. NOEL: I would guess it's not that baldly put, but I would say that in probably most organizations, there is no cost for adding a lot of new slow tests to the build. DAVE: Well, a lot of any tests. Because one of the things that really, really bugs me, one of the reasons I dislike TDD as a religion, is that it leaves a whole bunch of really vacuous tests lying around, and people never go in and delete tests, which they should. And so, if you're doing pure TDD, you end up with a test that you wrote to fail because you hadn't done a class definition yet. And that test is sitting there, and it will sit there for the next 20 years. You say, "OK, maybe it slows the tests down by a millisecond. Who cares?"
What it does slow down is you: if you decide to rename that class, then suddenly that's one more test that fails. That's a really small example of an entire legacy string of tests that gets left behind. Before I did the "don't test at all" little rant, one of the things I used to recommend was: delete all your unit tests at the end. You've got your project, you've shipped, delete your unit tests. Keep your integration tests, delete your unit tests, because they don't really help anymore. ANDY: Well, they could. I mean, the problem is, as you say, you leave that kind of scaffolding test, for lack of a better word, lying around. And not only does it slow things down by whatever imperceptible amount, but worse than that, you're adding noise. So now, if you did go in to read the unit tests to get an understanding of the code, there's this noise that you've added to the signal. And that's going to slow you down reading the code, which to me is the worst sin. So say a newbie goes in and puts in, I don't know, 20 unit tests that are basically just simple, rote, accessor-level tests, because they don't quite understand what they're doing yet. You go back to read that and now you've got this slog of really useless tests that don't reveal any intent. They don't demonstrate anything interesting or exotic about the API. They probably wouldn't show any decent regression failure, just by their nature. To me, that's almost like the thing of wrong comments being worse than no comments. DAVE: Exactly. ANDY: You've got testing in there, but it's not helping. It's not showing intent. It's not communicating. NOEL: Right.
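The kind of leftover "scaffolding" tests Dave and Andy are describing might look something like this hypothetical Minitest sketch; the `Invoice` class and the test names are invented for illustration:

```ruby
require "minitest/autorun"

# A hypothetical class under test; its name and shape are invented.
class Invoice
  attr_accessor :total
end

class InvoiceTest < Minitest::Test
  # Leftover TDD bootstrap test: written to fail before Invoice was
  # defined, it can now never fail -- until someone renames the class,
  # which is exactly when it wastes time instead of saving it.
  def test_invoice_class_exists
    assert Invoice.new
  end

  # Rote accessor test: it exercises attr_accessor, which is to say
  # Ruby itself, and reveals nothing about the Invoice API's intent.
  def test_total_accessor
    invoice = Invoice.new
    invoice.total = 100
    assert_equal 100, invoice.total
  end
end
```

Neither test documents intent or guards against a plausible regression, so deleting them, as Dave suggests, loses nothing but noise.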
So what I was seeing, and what some of these people were reporting to me about their organizations, is like that, only worse: instead of writing a bunch of unit tests that were quick but vacuous, people were essentially copy-pasting a bunch of integration tests and end-to-end Capybara browser tests that effectively overlapped, and leaving them all in, which gives you all of the downsides of leaving bad unit tests in, plus they're two orders of magnitude slower. And that's what I mean when I say they're doing this without a clear theory of what they're doing. They're not really catching bugs. They're not really helping with the design. They're there because, like you said, they think they should be there. And then you get into the uncanny valley where you have all these tests, but you don't have any more confidence in your ability to refactor. DAVE: That's where I think the idea of "Hey, I've gotten really comfortable with this, so stop doing it" comes in. If I were a project manager on that project, I'd turn off the CI server and see what happened. I probably wouldn't even tell people. Or turn off the testing part of it and just let it run and see what happened. Do I get more errors in production? And if so, why? And what's going wrong? What are the problems I'm actually getting? Because if you're sitting there writing exhaustive tests like that, then what you're doing is trying to predict the future as much as anything else. You're trying to say what could go wrong, and you're putting in place all of these guesses as to what could happen. So, why don't you find out what actually goes wrong by letting it break, fixing it, and keeping track of what broke so you can fix it? If you're writing a pacemaker, then obviously that's not going to work. But if you're writing Ruby, then you're not writing a pacemaker. NOEL: Yeah.
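As a rough illustration of the arithmetic behind those 30-minute suites, here is a toy model with invented, round-number costs, assuming browser-driven end-to-end tests run about two orders of magnitude slower than unit tests:

```ruby
# Invented per-test cost estimates; real figures vary widely by project.
UNIT_SECONDS = 0.01   # an in-memory unit test
E2E_SECONDS  = 1.0    # a browser-driven end-to-end test, ~100x slower

# Total suite runtime in minutes for a given mix of tests.
def suite_minutes(unit_count, e2e_count)
  (unit_count * UNIT_SECONDS + e2e_count * E2E_SECONDS) / 60.0
end

fast = suite_minutes(2_000, 0)    # 2,000 unit tests: ~0.33 minutes
slow = suite_minutes(0, 2_000)    # 2,000 end-to-end tests: ~33 minutes
```

With these made-up numbers, 2,000 unit tests finish in well under a minute, while 2,000 copy-pasted end-to-end tests alone push a suite past the 30-minute mark people were reporting.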
I do think that most, at least most Rails projects, are much more nervous about production bugs than they probably should be. A production bug is not necessarily the end of the world. Even as I'm saying this, I can tell people are going to get mad at it. ANDY: The real problem here isn't necessarily the volume of tests or the fear of failing in production. The problem is failing to think. And again, it's relying on that sort of dogmatic "I'm copy-pasting this hunk of code because that's what I always do." It's that autopilot, that relying on autopilot, that gets us into trouble. And somewhere in the book, we even cite the old IBM motto of THINK!, in all capital letters with the exclamation mark. That's what you've got to do. So, it's the unthinking "Let me just copy and paste this wad of code because that's what I do every time. That'll keep me safe." It's almost like a superstition at that point. It's like throwing salt over your shoulder. NOEL: Throw tests over my shoulder. ANDY: Yeah. DAVE: But being afraid of something, that's really interesting. One of the things that developers need to get far better at is listening to that. When they start getting afraid, when they say, "Oh, I really don't want to put bugs into production," and state it as if it were obvious. Every time you find yourself getting upset or worried or afraid, you should ask yourself why. If someone says to you, "I don't write tests," and you feel the hair stand up on the back of your neck and you have to resist punching them in the face, instead ask, "Why am I so upset about that? What is it that's making me upset?" And quite often, the results of that are quite revealing. If you're afraid of bugs in production, then what that really means is you're in an environment where you don't have enough control to be able to roll back production immediately. NOEL: In my case, I'm just afraid of a lost sale. That's what I'm afraid of when somebody says they don't test.
But that's a completely idiosyncratic problem. DAVE: Here's the story. Why are you developing the application? You delivered one version of the application. It's running, it's absolutely fine, it's stable. Why are you making changes? Well, you're making changes because you want to make more sales. And assuming your sales go up by 20% when you roll something out, then losing one sale because you rolled out a bug is just the cost of doing business, if it means you can get things out there faster. And I think that's the important thing to remember. Yeah, it sucks, and you've got to have good customer service and you've got to find ways of compensating people who suffer as a result of your bugs. But at the same time, it's just a bug, and a single sale is not going to make a difference to the overall graph. So, get it out there and try it. ANDY: And interestingly, let me just underscore what Dave is saying there. You noticed he very, very sneakily reworded the problem in terms of actual economic impact, actual business value. And that is really critical, because when people ask, "OK, why are you changing the team? Why are you adopting agile practices? Why are you doing this? Why are you doing that?", the answers are like, "Oh, we want better code quality." Or, "We want faster time to market." No, not really. Those are just means to an end. What are you actually trying to increase? Profit. Market share. There are more direct measures that we often lose sight of. We don't think of the economic impact of rolling out this new set of features, or, if the server's down for an hour, what that actually costs us in real dollars. At the end of the day, I think it's very helpful to cast these decisions and these discussions into real numbers, not something a little more airy-fairy like, "Oh, we want quality." Ah! My grandmother wants quality. [Chuckles] That's really not it. DAVE: There are a whole bunch of quotes about perfection.
And the idea that perfection is the enemy of progress. And if you're delivering the typical web-style Rails application, nobody is ever going to say, "This application is as good as it could ever be. We can seal it off and we're done." So, you're going to have to accept that you're going to make changes to it. And what a lot of companies do is A/B testing: let's see what happens if I do it this way rather than that way. And when you're doing A/B testing, guaranteed, one way is not going to be as effective as the other way, unless it's an incredible coincidence. So, during the A/B test, the variant that's not as effective is losing you sales, except that that's part of the statistics gathering that you're doing. So, it's no different doing A/B testing than it is delivering the occasional bug to production and then fixing it. If delivering the occasional bug means that you can deliver five times as often, then you're getting value out there faster. Now obviously, if every single time you deliver, you deliver bugs, then you've got a problem in your process. But what I'm saying is that simply being afraid of bugs in production, that's a false fear. NOEL: OK. We are sort of coming to time, I think. So, is there anything else you want me to ask, or something else you want to say before we run out? ANDY: Yeah, we have a book out. Thanks for asking. Yeah, we have a book out. The 20th anniversary edition of The Pragmatic Programmer, the new Pragmatic Programmer book, is available in eBook as we speak, from PragProg.com. It comes in PDF, MOBI, and ePub formats, all for one price. And the nice thing is, by special arrangement with Pearson, who is actually publishing this book, if you buy the eBook from us now, you get a coupon for half off the hardcover when it comes out in September. And that's another nice thing we changed. The first edition was a softcover book.
Ironically, a lot of folks sort of complained that it got really beat up because they carried it with them from job to job. They loaned it out to team members and friends and it got dog-eared and sort of ratty from a well-used life. So, we're making this edition a hardback to give it a little bit more heft, a little bit more protection. But that'll be out in mid-September. NOEL: And where can people find you online if they want to talk to you more about the book or Pragmatic Programmer or anything else that they have on their mind? ANDY: My personal website is toolshed.com. I'm on Twitter @PragmaticAndy. And all the Pragmatic Bookshelf books are at PragProg.com. DAVE: And if people Google PragDave, then they'll find me. NOEL: Thank you very much for being on the show. I'm really glad we got a chance to do this. DAVE: Us, too. ANDY: Thanks so much for having us. NOEL: Tech Done Right is available on the web at TechDoneRight.io where you can learn more about our guests and comment on our episodes. Find us wherever you listen to podcasts. And if you like the show, tell a friend or your boss, or your friend's boss, or your boss's friends, or your social media network, or some Instagram followers you know, or tell me or tell my boss. Also, leaving a review on Apple Podcasts helps people find the show. Tech Done Right is hosted by me, Noel Rappin. Our editor is Mandy Moore. You can find us on Twitter @NoelRap and @TheRubyRep. Tech Done Right is produced by Table XI. Table XI is a custom design and software company in Chicago. We've been named one of Inc. Magazine's Best Workplaces and we're a top-rated custom software development company on clutch.co. You can learn more about working with or working for Table XI at TableXI.com or follow us on Twitter @TableXI. We'll be back in a couple of weeks with the next episode of Tech Done Right.