JAMEY: Hi everyone. Welcome to our 60th episode of Greater Than Code. I am Jamey Hampton and I’m here with my great friend and fellow panelist, Sam Livingston-Gray. SAM: Good morning from sunny Oahu. I’m here also to welcome my good friend Rein Henrichs to the show. REIN: Good morning. And I am super excited to be here with my friend, Coraline Ehmke. CORALINE: Hey, everybody. It’s super cold here in Chicago so I’m very jealous of Sam being in Oahu. I’m happy to introduce my dear friend, Jessica Kerr. JESSICA: Good morning! I am thrilled to be on Greater Than Code today. We have a very exciting guest. Please welcome Kent Beck. Kent Beck is the creator of Extreme Programming and one of the original signatories of the Agile Manifesto, the founding document for agile, lowercase A, software development. He’s a strong proponent of TDD. He pioneered software design patterns and he was responsible for the commercial development of Smalltalk. Kent lives in San Francisco and currently works for Facebook. Kent, welcome to the show! What is your superpower and how did you acquire it? KENT: Jessica, thank you very much. My superpower is putting together things that don’t obviously go together. JESSICA: Like metaphors? KENT: Like metaphors, yes. That’s a thing that I seem to be able to do that other people don’t do as regularly. CORALINE: Was that something you were born with or was that something you developed over time? KENT: Some of each. My mom used to play a game with me where we would take two random pencils and then we would pass the two pencils back and forth and make out of them the most bizarre possible thing. So, walrus tusks, and then somebody else would use them as chopsticks, and then back and forth. And we used to play this game for hours. And I think there’s a big part of… I’m just a curious person and I learn lots of stuff.
But there’s also kind of a habit to it where I hear something and I immediately compare it to other things and things that rhyme with it and things that sound like it and stories that I remember and so on. Does that answer the question? CORALINE: Sure. It reminds me. I just recently read Douglas Hofstadter’s new book which is all about metaphors and analogies. And his basic premise is that we think in metaphors, we think in analogies, and that’s the basic mechanism of thought. So, being able to make associations between things is the source of innovation. And I kind of contrasted that with Dijkstra who said that it’s not advisable to approach new things with relating it back to things you already know, because new things need a new language, which I pretty strongly disagree with. But I’m curious if you have an opinion on that. KENT: Yeah. So, I go back to the ‘Embodied Mind’ from, I think that’s [inaudible], where I agree that all abstract thought is metaphorical. And his premise goes further and says that the fundamental metaphors are physical. So, we know about inclusion and exclusion because we have a body. And the way we think about up and down even metaphorically is grounded in our physical experience of standing up and sitting down. Of course, saying that abstract thought is metaphorical is a metaphor, since we’re computer scientists. We can’t avoid that conclusion. CORALINE: Meta metaphor. KENT: Yes, because thought is actually this electrochemical goop in our brains and we use metaphors as a metaphor for thinking about it. CORALINE: Wow. SAM: Okay, I’m going to go. I’ll be back and I’ll be thinking for about five days. [Laughter] KENT: I couldn’t believe that I could shut up the entire panel. [Laughter] CORALINE: [Laughs] In terms of metaphors and software development, I think a lot about metaphors. I’m working on an AI project to write a program that understands and can generate metaphors. 
And I’ve thought a lot about how we borrow metaphors from other disciplines. And for the most part, we borrow metaphors from engineering and construction and science. Do you think that there are metaphors in other fields of study that are applicable to software development? KENT: Sure. I think there are some metaphors that are so built-in that we don’t even recognize them as metaphors anymore, like table or stack or… CORALINE: Data store. KENT: Association. CORALINE: Scaffold. KENT: Yeah, yeah. And so, I think we already have metaphors from lots of different sources. And every metaphor’s a two-edged sword, to use a metaphor. JESSICA: [Laughs] KENT: I’m sorry. I’m a computer scientist. It’s just a reflex. That it both reveals and conceals. And you say, well these two things are alike but they’re different. And if you get trapped into thinking that they’re exactly alike when they’re different, you’re confused. But if you think that two things are completely different when there’s actually similarities that would help you make some predictions, then you’re also losing. CORALINE: I remember when I was doing some research into category theory. I wish I could remember and/or pronounce the name of the person who presented this YouTube series. But one of the things I took from my study of category theory is that traditionally, in science, we try to break things down into atomic parts and solve small problems and compose small problems into a grander solution. And category theory seems to take the opposite approach of taking a very high-level view of things, looking for ways that they are the same, and taking a macro as opposed to a micro approach. KENT: I think there’s an analogous split in programming where you have the ‘for all’ people and you have the ‘there exists’ people. The ‘for all’ people want to make universal statements about programs and derive their understanding of some particular situation from the universals.
And the ‘there exists’ people, which is my camp, work the way TDD works: by induction. You say, “This works. And this works. And this works. And therefore by induction, a whole bunch of other things work also,” so we don’t have to exhaustively test all customers. We know that if they have a name and an age then this computation’s going to work correctly. JESSICA: Whereas that induction is not mathematically valid. It is not literally a proof. But it’s darn well good enough for business software. KENT: Yes. REIN: A thing that I think is really interesting is that even in mathematics, what constitutes a proof is determined by humans being confident… KENT: [Laughs] REIN: That a certain thing is valid. KENT: Sure. Proofs are about convincing other mathematicians. They’re not about universal truth. JESSICA: I read somewhere the other day that reasoning is a social activity. We choose our beliefs based on whatever we do, usually people. And then we invent reasons for them in order to socialize them with other people. KENT: Yup, I would agree. And the same is true in programming. You make a change to a program and it’s the beginning of a social process. It’s not the end. JESSICA: Right. And I make tests and check them into the test suite for social reasons. As in [Laughs] I am going to make you uncomfortable if you change this. KENT: Yeah. JESSICA: Where by change I mean change the particular feature that I’m testing, if you change how that works, because I don’t want you to. SAM: And we write tests because we’re embarrassed not to. JESSICA: Oh yeah. KENT: Oh, that’s not why I write tests. But I can have many reasons for a thing and that is one of them. SAM: [Laughs] Fair enough. JESSICA: That depends completely on who you are, because yes, as Kent Beck, or as we, I don’t have to write tests and I can say, “I didn’t test this one thing because it was too hard.” But I have a friend, Amy, who just started as a developer and she’s got a PhD in History.
She knows what she’s doing in life generally but she’s got to build up this credibility as a developer. And she was telling me about this test that was being difficult and, “How do you mock a void method in Mockito to throw an exception?” And I was like, “I would not test that.” And she’s like, “Yeah, I know. But my job isn’t to write good code. My job is to get it past this particular code reviewer.” KENT: Yeah. And fair enough, if I was the code reviewer, I would want to know that somebody had taken care. JAMEY: I think tests are interesting because I find myself, when I have [failing] tests, staring at them and going, “Okay, the first thing I need to know is, is this test showing me that my code is failing or is there something broken with the test itself?” And then once you determine that, you can proceed. [Chuckles] KENT: Right. JAMEY: I find sometimes I’m like, “I cannot proceed because I have to figure this out.” [Laughs] KENT: Right. Have you ever kept a diary of unexpected test failures? This is a really fun exercise. Just keep an index card and even as simple as, “Was it the test?” Unexpected Failing Test: Was it the test or was it the code? I found that about one-third of the time the test was wrong and two-thirds of the time the code was wrong. JAMEY: That’s interesting. REIN: On that one I want to mention that I’ve been keeping a work journal for the past five years. It started when I had a very picky client who wanted me to account for every minute of my day. But what I found is that going back to it, it was the same thing you get from keeping a diary in life. I found it helped me understand the context of where I was when I made a particular change or how I was feeling or it helped me look back at decisions I’ve made in the past and compare them to what I was doing today. And it gave me a lot of context for my work life. KENT: Mmhmm. JAMEY: Do you do that on paper or in [inaudible]?
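Jessica’s friend’s question above was about Mockito, where the standard answer is the `doThrow(...).when(mock).voidMethod()` stubbing idiom. As a hedged sketch of the same concept, here is the analogous move with Python’s `unittest.mock`; the `notifier` object and its `send` method are invented stand-ins, not anything from the episode.

```python
# Sketch: making a "void" (return-nothing) method on a mock raise an
# exception, using Python's unittest.mock. The notifier/send names are
# hypothetical examples, not from the transcript.
from unittest.mock import MagicMock

notifier = MagicMock()
# side_effect makes the mocked method raise instead of returning None:
notifier.send.side_effect = RuntimeError("connection refused")

try:
    notifier.send("hello")
except RuntimeError as e:
    print(f"caught: {e}")
```

In Mockito the same shape applies: because a void method cannot appear inside `when(...)`, you invert it to `doThrow(new RuntimeException()).when(mock).send("hello")`.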
REIN: I have an… Emacs Org Mode has a journaling subsystem where I just say ‘start a journal’ and it puts me in a file with a heading for today’s date. JAMEY: Cool. I write down everything on paper. And I get made fun of a little bit by other tech people sometimes. Not all the time. Not everyone is a jerk. KENT: [Laughs] JAMEY: But I feel like… [Laughter] KENT: [Inaudible] Thank you. JAMEY: Not everyone. No, lots of people aren’t jerks. And lots of people tease me in a way that’s not mean. But I feel like I’m very often the only person in a room writing on paper while everyone else is typing. I’m doing it right now, too. [Chuckles] SAM: For our listeners, Kent has held up his notepad as well. JAMEY: Well then, I’m in good company. KENT: Notepads. I have a small format Clairefontaine, I’m a stationery nerd, for notes. And then I have a large format Rhodia with the spiral on top, of course, for writing longer form stuff. And everything that I write, the last two books, all my blog posts, everything gets drafted on paper first. JAMEY: Why is that? I think it’s helpful. What do you think is helpful about paper? KENT: I can type faster than I can credibly think, but I can’t write faster than I can credibly think. I can get words out typing that I haven’t really thought about. And there’s a sync between my thinking speed and my writing speed. CORALINE: That’s really interesting. I was doing some [research] recently and learned that in the early days of computers, when the lag between typing a command and the execution of a command was on average between three and five seconds, a lot of people didn’t see that as a problem because they actually saw it as an advantage because it would give the programmer time to think about what she was going to type next. SAM: I hope you’ll use this time to think about what you’ve done.
[Laughter] JESSICA: I find it’s often useful to have to think about, “What do I expect to happen again?” KENT: So, I have a trick when I’m doing TDD that I predict the test run results out loud. SAM: Calling your shots, yeah. KENT: Yeah. It’s such a simple change, it makes you sound a little bit odd if someone’s just listening in. But I’ve just come off a stint where I programmed for 30 or 40 hours in a couple of weeks and man, it just makes a huge difference. JESSICA: Yeah, that makes it an experiment where you’re trying to [falsify] what you think is going on. KENT: Right. JESSICA: Also, there’s a big difference between the test is red and the test failed the way I expected it to fail. KENT: Right. And sometimes I’ll call a sequence of errors. Okay, now I’m going to get a null pointer exception. Okay, now I’m going to get array index out of bounds. Okay now, it’s going to pass. And if I’m making progress on the errors that I’m seeing in a predictable way, it feels almost as good as the test being green. CORALINE: I have a saying that I never trust a passing test if it passes the first time around. KENT: [Chuckles] That is a worthless test. CORALINE: I’m not testing something that I should have been testing. Yeah. JESSICA: [Inaudible] CORALINE: If the code doesn’t surprise me, then I’m not thinking of edge cases. I’m not doing something right. KENT: Mmhmm. REIN: I think it’s also interesting to talk about this in a more general sense. So, there are systems that take in feedback and make adjustments. But then there are steering systems that have a good enough model of the thing under control that they can make predictions and act on those predictions and verify those predictions. KENT: Right. And knowing which of those regimes you’re in is really important. So for example, blog posts, I have no mechanism for predicting the viewership of a blog post. So, I don’t bother trying to predict it. I just spew out whatever and then react to the reactions that I get. 
And sometimes, I posted something last week about complexity partitioning. A set of strategies I use to avoid having to bite off big chunks of complexity all at once. And the viewership was 10 times what my usual viewership is. And I thought it was not really my best work. So, if I’m in that regime where I can’t predict what’s going to happen then I shouldn’t waste any time, mental energy, trying to predict. I should just do something and see what the reaction is. But then the flip side, if you could predict and you choose not to or you just ignore that, then you’re also wasting possibilities. SAM: Well, it’s interesting that you bring that up because that was one of the things that I was hoping that we could cover a little bit. But now I’m feeling all self-conscious that you didn’t think it was your best work. Do you have a hypothesis as to why that took off more than you thought it would? KENT: My opinion of the work is kind of irrelevant. SAM: [Laughs] Fair enough. KENT: It’s not just that I’m inaccurate in predicting how things will be taken. I can’t. And it’s irrelevant. The other previous big readership blog post was one called ‘Mastering Programming’ that had 10 times what the complexity one had. And that to me was just, I was just spewing out some notes. Did you have questions about the content of the complexity partitioning piece? REIN: Could you maybe give us a very brief overview as to what that is? KENT: Sure. So, there are broadly speaking in programming, two camps: the lumpers and the splitters. And the lumpers don’t mind dealing with a bunch of complexity all at once because they can see it all in one place. And splitters, and I’m a splitter, like to take the complexity they have to deal with and break it into parts and deal with the parts. The bet being that if you deal with the complexity in parts, you can do a better job of each thing. 
And you lose the bet when you break something into parts and it turns out the interaction is more complicated instead of less complicated. So for example, TDD is a complexity partitioning strategy that says, “I’m going to make this test work and then I’ll make all the rest of the tests work.” So, I’ve partitioned the complexity into this little piece that I have to do right now and a bunch of other stuff that I’m going to do later. Extracting a helper method is a complexity partitioning strategy where I say, “Okay, I don’t want to change this little bit in the middle of this big method so I’m going to pull out a helper so I can just change the whole helper method.” But I’m changing the whole thing all at once. That makes it easier for me to analyze what the effects are. CORALINE: That kind of requires good integration testing, right? The way I see TDD practiced in the wild, at least for the companies that I’ve worked for, they tend to produce a lot of unit tests and then integration tests seem to be an afterthought. KENT: Yeah. I’ve always had a hard time with, even with that distinction between unit and integration tests, because my tests skip between different scales with wild abandon. And I don’t even care what I call them. Hey, here’s a thought that I need to capture. This is the need to do this. Adding these two numbers together there gives you the sum, or canceling a subscription results in a rebate. To me, those don’t seem like terribly different things so I don’t make a distinction. SAM: It’s interesting. I use a taxonomy of unit and integration and short-span integration just because I find that it helps me think about the properties of a test. Can I run this one a hundred times in a second or am I going to have to throw this one at a server and let it run overnight [Chuckles] to do it? And the other thing that I find it useful for is thinking about how important it is to preserve this particular bit of behavior, right? 
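Kent’s “extract a helper method” example of complexity partitioning can be sketched as follows. The invoice and loyalty-discount code is entirely invented for illustration; the point is only that the changeable bit moves into a helper that can be changed and analyzed as a whole.

```python
# A sketch of "extract a helper method" as a complexity partitioning
# strategy. The invoice/discount example is hypothetical.

# Before: the rule we want to change sits in the middle of a bigger method,
# so any change means re-analyzing the whole method.
def total_before(items, customer):
    subtotal = sum(price * qty for price, qty in items)
    if customer == "loyal":   # the bit we want to change...
        subtotal *= 0.9       # ...tangled into the larger computation
    return round(subtotal, 2)

# After: the changeable bit is partitioned into its own helper. Now we
# change the whole helper at once and can reason about it in isolation.
def discounted(subtotal, customer):
    return subtotal * 0.9 if customer == "loyal" else subtotal

def total_after(items, customer):
    subtotal = sum(price * qty for price, qty in items)
    return round(discounted(subtotal, customer), 2)

# Behavior is unchanged by the refactoring:
assert total_before([(10.0, 2)], "loyal") == total_after([(10.0, 2)], "loyal") == 18.0
```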
If I’m looking at something that most people would call a unit test, then I can look at it and I can say, “Well, okay. So, this test is probably not terribly valuable. I can feel okay about changing it.” But if I’m looking at something that describes a large chunk of the application then I might be a little bit more cautious about it. JESSICA: Yeah, because tests prevent change. And like I was saying earlier, I put in a test for stuff I don’t want people to change, but I try to avoid committing to the automated test suite anything that’s implementation-dependent that it would be legit for people to refactor as long as the API-level outcome is the same. Does that make sense? KENT: Yeah, yeah, a hundred percent. And I would say that even that is a complexity partitioning strategy where say, I’m going to separate the stuff that might change from the stuff that might not change. Now I can deal with them with different levels of rigor or concern. CORALINE: Kind of funny. I write tests for completely different reasons. I see the primary value of tests as a form of documentation. I want to capture what the edge cases are so that the developers who come after me are aware of those edge cases. I want to have tests that thoroughly describe the functionality so that someone who goes back and reads the code can also read the accompanying tests and understand what it was that the code was trying to accomplish. KENT: I think I primarily write tests so I don’t need Zoloft. I have a lot of anxiety while I program. Like, “What could go wrong?” and “Am I really smart enough to do this?” and blah, blah, blah. And that’s the reason I do all of the complexity partitioning stuff is if I have a flow that lets me take one recognizably chewable bite at a time, then I don’t get overwhelmed. If I can stream a whole bunch of those together, then I have a really effective programming session.
JAMEY: You answered a question that I was going to ask, which was I was wondering if this idea of lumping versus splitting was necessarily just about actual complexity and efficiency or if it could also be about the emotional state of the person who’s doing the programming. And I think you answered that because I agree. I find splitting things into chunks, I do that a lot in my code and it’s not necessarily because I’m making a conscious decision like, “I think this will be better in terms of complexity in my application,” but just like, “I don’t think I can do this unless I split it up this way because it’s overwhelming to me.” KENT: Yeah. And at Facebook, I’m surrounded by lumpers. JAMEY: Interesting. JESSICA: [Laughs] KENT: Yeah, yeah. I don’t [inaudible] JAMEY: Lumpers sounds mean. I understand that it’s not mean. [Laughter] JAMEY: But it sounds mean. REIN: So, I know some of the folks at Facebook like Simon Marlow and folks from the Haskell community and I’m fascinated [by code] to hear about the interaction between the Extreme Programming community and the Haskell ‘If it compiles, it works’ community. Because I’ve been a member of both. KENT: Yeah. I took Bryan O’Sullivan’s Haskell course when he gave it at Facebook. I actually took it twice and I still can’t get my head around monads, so there you go. And that was the first place I identified this ‘for all’ versus ‘there exists’ split, because the Haskell community is definitely… my first programming assignment for that course, I did it TDD style and Bryan used my result as an example of how I had a bug and I missed it and a ‘for all’ test caught it in two lines of code. And the way that I do complexity partitioning test first, there has to be a way of doing the equivalent thing in Haskell. You don’t write a thousand lines at once. You still write them one at a time. So, there has to be some decision criteria for which line you write first and then which 999 you write after that.
But I wouldn’t claim to have gotten my head around that. REIN: Kent, I’d like to maybe make something explicit which I think has been an implicit part of this discussion, which is the way that systems change over time as being an important part of how we design software. I think a lot of people think of systems as how they exist now and then how they exist at some point in the future and spend less time thinking about, “What is the best path from here to there through time?” KENT: Right. I call that succession, that conscious decision of how you get from here to there. Sometimes, you can have a system in one state and you can imagine another state but you can’t get there in safe steps. And then either you have to do it in unsafe steps or you just have to heave a heavy sigh and realize you’re never going to get there. We talk about, design patterns are snapshots, but we don’t have a good vocabulary for talking about, “And then we… and then we… and then we… and then we...” So for example, one of those is the, if you’re changing a data store there’s a sequence that works really well which is, you start writing all the new data to the new data store as well. Then once you have that working a hundred percent, you start migrating all the old data in the old data store to the new data store. And once you have all of that migrated, then you start reading from both data stores. And when the answers get close enough and they’ll never be at a hundred percent, but once the answers get close enough, then you can just start reading from the new data store. So, as opposed to the, “We’re going to shut down the system over the weekend, run the migration script, and then magically this time everything is all going to work,” which that trick never works but that doesn’t stop people from doing it. JESSICA: Ah. CORALINE: I see that making a lot of sense if you’re doing a very specific task.
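The data store succession Kent walks through (dual writes, then backfill, then dual reads with comparison, then cutover) can be sketched with toy in-memory stores. Everything here is a hypothetical illustration of the phases, not any real migration tooling.

```python
# A sketch of the migration succession described above, with dict-backed
# "stores" standing in for real data stores. Each phase is a safe step
# you can pause at and verify before moving on.
old_store, new_store = {}, {}
mismatches = 0

def write(key, value):
    old_store[key] = value
    new_store[key] = value        # Phase 1: dual-write all new data

def migrate():
    for key, value in old_store.items():
        new_store.setdefault(key, value)   # Phase 2: backfill old data

def read(key):
    global mismatches
    old, new = old_store.get(key), new_store.get(key)  # Phase 3: read both
    if old != new:
        mismatches += 1           # track disagreement; cut over to the new
    return old                    # store only once this is "close enough"

write("a", 1)
migrate()
assert read("a") == 1 and mismatches == 0
```

The point of the ordering is that every phase is reversible: until the final cutover, the old store is still the source of truth, so any phase can be abandoned without a weekend-migration gamble.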
What I don’t see a lot of is systems thinking in terms of, “How is the system going to evolve over time?” I see a lot of thinking in two-week increments. I see a lot of thinking about “This feature needs to be added.” And I don’t see a lot of thinking about, “Where is the software going to be in a year?” or “What is it going to look like when we’re all finished?” I feel like we take a lot on faith with sprint methodologies that somehow everything is just going to come together and the system will be as good as it needs to be. REIN: And even when you do have the sort of one-year architectural vision, in a lot of my experience, that’s where people stop. They say, “Okay, we know what we want the system to look like in a year and then we’re done.” In my mind, that’s where you get started because now the really hard problem is, “What is the path that’s incremental that we can make changes in a continuous way from here to there?” CORALINE: And of course, we can’t predict what it’s going to be in a year but we can at least think about it. REIN: Yeah. And we also have to keep in mind that where we think we want to be in a year is in fact not where we want to be in a year. KENT: Alright. So, I’ve had seven years’ worth of lessons in this kind of architectural evolution, watching projects and being involved in projects at Facebook. And yeah, there are things you can predict. You can say, “Every user uses this many kilowatts and we have this many megawatts coming into our data centers,” and you can make some predictions based on that. But every once in a while, the users will suddenly get really excited about live video and then all your calculations go haywire. So, you have to have some model for long-term growth, especially for things like building data centers that have a really long lead time.
But you also have to recognize, have the humility to recognize the limitations of those models and recognize when it’s time to abandon that path and move onto something else. JESSICA: It gets back to what you said about the blog posts. You can’t predict what people are really going to resonate with. And so, you have to react. KENT: Or waste your time. You choose. REIN: For me in my experience, the only systems that have ever worked under these circumstances are systems that take in feedback and react to feedback on a time scale where that feedback is still relevant. So, that’s why Waterfall became Agile for instance. That’s why SRE is focused on making changes based on feedback. All of these systems are now becoming cybernetic systems. I think that’s the way forward. SAM: Can I get a definition there? REIN: Yeah. A cybernetic system is a system that takes in feedback and responds to feedback and makes changes to the underlying system in an attempt to keep it in a certain state or change its state in a certain direction over time. So, test-driven development is a cybernetic system. You write a test, you look at what the test says, and then you make another change. CORALINE: But all of that is irrelevant if our teams don’t also react to change in the same way as our systems do. JESSICA: Our teams are part of the system. They’re not separate. CORALINE: But teams can resist change in ways that software… it’s easier to change software than it is to change people. It’s easier to change software than it is to change teams or organizational structure. JESSICA: Yeah. [Inaudible] REIN: Yeah. You’ve been [impunitively] correct. KENT: It’s easier… REIN: Change to software may not be correct in your organization. It may fail for reasons that have nothing to do with the correctness of the software per se. I’ve seen that frequently. SAM: Right. Right. 
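Rein’s definition of a cybernetic system (take in feedback, adjust the underlying system, steer toward a state) is exactly the shape of a control loop. A minimal sketch, with all numbers purely illustrative:

```python
# A minimal sketch of Rein's "cybernetic system": a thermostat-style loop
# that repeatedly observes state, compares it to a target, and nudges the
# system toward the target. The gain and step count are arbitrary.
def control_loop(state, target, gain=0.5, steps=20):
    for _ in range(steps):
        error = target - state    # take in feedback...
        state += gain * error     # ...and change the underlying system
    return state

final = control_loop(state=10.0, target=20.0)
assert abs(final - 20.0) < 0.1    # the loop converges on the target
```

Test-driven development fits this template loosely: the failing test is the measured error, and the next code change is the corrective nudge.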
You can rewrite a system that was messy and ugly in beautiful Haskell and if nobody knows how to write Haskell, it’s the wrong thing to do. JESSICA: Yeah. CORALINE: Kent, you look like you had a thought about what we were saying about changing organizations versus changing code. KENT: Yeah, so cultures change, period. And they do it without a lot of conscious thought. I’ve made at least part of my career out of trying to make those changes more conscious. So, I would say that the skills are different. The way you measure progress is different. But it’s… there are ways in which changing culture is easier than changing code. The outcome of trying to change, say a team’s culture, it’s harder to measure. You don’t get red and green. But you can look back and say, “Wow, six months ago this would have been a big fight and now we just had this conversation and resolved it in 15 minutes. How did that happen?” Well, it probably didn’t happen at random. Somebody actually exerted some effort and now a team has learned conflict resolution skills. JESSICA: So, I agree with you at a team level. The interesting bit about changing software is that culture change at an organization level is much harder and slower than at a team level. But software change at an organizational level is not any slower than at a team level in the strange way software scales. Like, if I push out a new version of something, I could push it out to a lot of places and a lot of customers faster than I can, say, train all those customers in a new way. Or employees. KENT: I think again, try and change an API. Pushing out a new implementation is one thing. You can… push of a button. But if you have an incompatible API change, that’s a human problem. The human part of that problem is far more difficult than the software part of that problem. It requires completely different skills than deploying an implementation change. But… JESSICA: [Laughs] Yeah, so maybe which one is easier depends on what skills you have. KENT: Yes.
Yeah, that’s what I’m saying. So, I think storytelling is the fundamental skill for change. And once you learn how to, I guess it’s again this complexity partitioning thing, once you learn how to say, “Okay, what’s the next story that I need to tell? How am I going to tell that? Is it going to be literal? Is it going to be metaphorical? How much emotional content? What metaphors do I use?” all of this, once you get good at that, the strategy of storytelling, then culture isn’t scary or complicated or hard. It’s still risky but it’s not fundamentally harder than programming for me. CORALINE: I think that really depends on the person. We have classes where you can learn Haskell. We have classes where people can go to a bootcamp and learn how to be programmers. What are we doing to promote those storytelling skills? KENT: I enlisted a storytelling guru. I practice. I read about storytelling. That stuff is out there. It’s just maybe not in the classes that I took in computer science school. But people get degrees in Literature, which is all about storytelling. And if that’s not part of what we’re used to, that doesn’t… I realize that I have to play catch-up on storytelling. REIN: Wait, are you telling me that a Liberal Arts education actually is relevant for a STEM career? Who would have thought? [Chuckles] KENT: I was in Computer Science and Music school every other year and I just ended up on the wrong year, so… [Laughter] JESSICA: The storytelling, I want to relate that back to what you talked about in terms of succession. Making change in software architectures is also a matter of that path from here to there and then you told a story about a database roll-out with migrations and double reads and dark writes. I injected the dark writes. And it’s the same kind of thing, right? The design pattern books contrast with, for instance, refactoring. And the brilliance of Martin’s book there was that it took people step by step and told them the story of how to make the change.
So, maybe storytelling is also how we get our software from one point to another safely. REIN: Can I just add onto that a little bit? I think it’s not just that he told a story but he made us feel safe. He made us feel like… JESSICA: [Laughs] REIN: He makes us change… The way he said we can make this change, that we’ll be okay, that things will work out. KENT: Yup. SAM: Wait, are you saying that feelings are important? KENT: [Laughs] JESSICA: It might [inaudible] CORALINE: We have to change the nature of this podcast entirely. [Laughter] CORALINE: I thought it was all about technology, but I think we’re learning that people matter. KENT: Oh. Yeah, this is… JAMEY: Only Jess’s feelings. [Laughter] JAMEY: That’s what I learned. CORALINE: What version of your API are we using, Jessica? JESSICA: At least 18.6.k. [Chuckles] CORALINE: I want to see personal release notes. That’s what I want to see. JAMEY: Yeah. I want an open source version of Jess’s API so I can feel the same feelings as her. [Laughter] KENT: Do you get to submit a pull request? JESSICA: Dude, if I could [inaudible] what it’s like to be in my head, I would be very rich. [Laughter] SAM: Let’s see. Aren’t the feelings an internal implementation detail that only leaks out through the API? JESSICA: It leaks out pretty loudly in my case. SAM: [Laughs] CORALINE: And with great enthusiasm. JESSICA: [Laughs] REIN: That actually reminded me of a point I wanted to make about whether the people stuff is harder than the software stuff. And I think there is one objective way in which it is harder, which is that software, computers are perfectly rational and humans are extremely not rational. And it’s much harder to predict human behavior on that basis, especially because… CORALINE: Thank the gods. REIN: Groups of people are even more irrational than the sum of their parts. KENT: Well, that good news then is that once you get to sufficient scale, computer systems look like biological systems.
JESSICA: Yeah, because you can’t… REIN: Wait, is that good news? [Laughter] JESSICA: You can’t [inaudible] anymore at scale. [Chuckles] JESSICA: Well, it makes our job more interesting. KENT: Yeah, I remember the first time I pushed code at Facebook and I was watching the IRC channel and it said, “This release has gone out to 6600.” I thought, “Wait a minute.” [Chuckles] KENT: And I panicked. I grabbed the person next to me. I’m like, “What’s going on, on the other hundred machines?” They’re like, “Ah, who knows?” [Laughter] KENT: It’s not causing any problems. We’re just not going to worry about it. CORALINE: Those machines are just having a bad day. They need more coffee. [Chuckles] JAMEY: Coraline, you just got into what I was going to say, which is what… I think it’s really fascinating how people personify their technology. CORALINE: Don’t personify computers. They hate that. JAMEY: [Laughs] JESSICA: All networks are rooted in the physical. JAMEY: Rein was talking about how computers are so logical. Obviously, in my rational mind, in my rational computer scientist mind, I know that that’s true. But when I think about writing code and working with computers, it definitely occurs to me that I often feel like, “No, it’s doing this to spite me. No, it’s fighting with me.” I’m like, “No, it’s doing exactly what I told it. It’s just, I don’t know what I told it because I’m the one that’s stupid.” [Laughs] I’m not stupid. But it’s doing what I told it to do, but it feels like it’s plotting against me. I think about that a lot. Because we all do this. We all give emotions to these things that we work with that don’t have emotions. CORALINE: We’re very happy that this episode, which is a very special episode for us all, is sponsored by Code Here. Code Here helps teams to be more effective by applying deep technical expertise with the human side of software.
Code Here partners with companies to help them achieve their team’s goals through feature delivery, pairing, individual coaching, and group training. They would love to hear about your team and help you get your team to the next level. For information, visit WeCodeHere.com/GreaterThanCode. SAM: I would also like to add a personal testimonial that I met several of the principals of Code Here at RubyConf a couple of weeks ago and they are lovely, lovely people. JESSICA: Also, thank you to all our supporters on Patreon, because if you donate even a dollar, you get to be part of our Greater Than Code Slack team, which I really like because everyone is nice to each other and we talk about interesting things. JAMEY: So unfortunately, we’re coming near the end of our show today. And we like to finish off our shows with everyone giving a reflection about something that stood out to them, that they’ve been thinking about, that we talked about on the show today. So, I guess I’m going to start. And the theme that I really got from today’s episode was thinking about the reasons why we do things. And I started thinking about this right at the beginning of the episode because there was a quote that Kent said about mathematical proofs, that proofs are about convincing other mathematicians, not about universal truth, which I thought was really interesting. But then as we kept talking, I think we talked a lot about why we do things. Like why do we write code the way we do? Why do we write tests the way we do? Why? Is it about our efficiency? Is it about our emotional state? Is it about the future of our codebase? Is it about how other people interact with it? And I think there are a lot of different motivations that we may have behind doing something, all of which may be just as valid as another one. But I think it’s really interesting and maybe necessary to be thoughtful about why we’re doing stuff, because I think that gives you a lot of insight into your routines.
That’s very important. So, that’s on my mind, I guess. REIN: So, my reflection, out of all of the really interesting things that were said today, is about when Kent Beck talked about how computer systems are beginning to take on the complexity of biological systems. This is, in my mind, one of the most important technical stories of software engineering for the next 20 years plus: the system models that we currently use to understand software complexity, which we largely brought from the mainframe era, got us here but aren’t going to get us there. They’re going to fall apart when we have to start dealing with computer systems made up of 2000 machines that talk to each other in ad hoc or emergent ways. These models are going to fall apart and they’re not going to get us where we need to be. And we need to start working on new models that are able to handle this level of complexity. CORALINE: I think one of the things that stood out to me that I want to think more about is this concept of lumpers versus splitters. As soon as he said that, I wanted to figure out, am I a lumper or am I a splitter? And I think the answer is both, and it depends on where in the software development life cycle I am. I think in the beginning of a project, I’m a lumper. I’m thinking of the system as a system, not as component parts. And over time, as I’m building it out, I’m splitting things into smaller and smaller pieces. But at the end I have to come back to thinking of it as a system to determine if the software is successful or not. So, I think that was something that I particularly was interested in hearing about and thinking more about. SAM: Well, now I have a totally new takeaway, because that is fascinating. I am the exact opposite way. [Laughs] I start out thinking about splitting the system into little bits and pieces that I can actually understand.
And it’s only over time that I’m able to put those pieces together and chunk them into something that I can lump together and think about as a whole. But interestingly, that does tie into the thing that I was going to use originally as a takeaway, which, Jamey, is something that you said about how strategies for dealing with complexity don’t just have to be about the problem itself. They can totally be about the emotional response of the programmer who has to get some work done. And yeah, that’s absolutely valid and a perfectly reasonable response to the complexities that we have to deal with. JESSICA: Oh, I have at least three different things I could reflect on, but I’ll go with the one about the for-all-ers versus the for-each-ers, universalists versus existentialists. And the for-all-ers wanted to go so abstract that things become the same. I’ve been thinking lately about the expression ‘It’s turtles all the way down’. We want the same mental model to work at multiple scales. We want to be able to zoom in and out and still think of things the same way. We want Agile to scale. But the thing is, I started out as more of a for-all-er, I think, when I was younger, and now I’m more of a for-each-er, because to actually change anything at every level of scale, you need to get into the details. And most of programming winds up being about the details and the error cases. And I think the for-each-ing becomes important. It’s different to change an organization versus a team. It’s different to change a distributed system versus a single service, a single compiled unit you can say different things about. So, I find that really interesting. They both have value, but it’s important to create those metaphors. And then probably take them too far so you remember they’re just a metaphor. That’s my theory.
[Chuckles] KENT: So, my reflection from the conversation is that I’m comfortable with influencing human systems to change now, and I didn’t realize how comfortable I’d gotten at that. I’m comfortable changing technical systems and have been for a long time. But culture change doesn’t scare me in the way that it seems to scare other people. So, I need to think about, first, whether that’s really true. Am I as good at this as I think I am? And second, if I am, what I can do to help other people learn it. JESSICA: Awesome. CORALINE: Kent, thanks a lot. It’s been really great talking to you. I didn’t cover everything I wanted to cover but we only had an hour. So, thanks again. And I think this is going to be a really great episode that our listeners will really enjoy. KENT: Wow. Thank you very much for inviting me on. JAMEY: Yeah, it was a real pleasure. Thank you. SAM: Thanks. Come back any time. KENT: It’s a deal.