SAM: This episode of Greater Than Code is brought to you by Atlas Authority. Atlas Authority helps organizations manage and scale their Atlassian stack. With expertise in Jira, Confluence, Bitbucket, and other software development tools, Atlas Authority offers consulting, training, licensing, and managed hosting services. Visit AtlasAuthority.com/GTC to find out more and learn why organizations trust Atlas Authority to implement, support, and maintain their critical Atlassian applications. ARTY: Hi, everyone. Welcome to Episode 137 of Greater Than Code. I am Arty Starr and I'm here with my fabulous co-host, John Sawers. JOHN: Thank you, Arty. And I'm here with our two, count them, two fabulous guests. Today, we have Clare Macrae, who's an independent consultant helping developers work more easily with legacy C++ and Qt code. She has worked in software development for over 30 years. Until early June 2019, she was a Principal Scientific Software Engineer at the Cambridge Crystallographic Data Centre. And we also have Llewellyn Falco, who is an agile technical coach who specializes in teaching teams how to slay their legacy code dragons. He's also the creator of the open source testing tool ApprovalTests, co-author of the Mob Programming Guidebook and co-founder of TeachingKidsProgramming.org. Welcome to the show. LLEWELLYN: Great to be here. JOHN: Okay. So, we always like to kick it off in the same way. It's going to be a little bit different since we have two guests to ask our question of. LLEWELLYN: More superpowers. JOHN: Yes. So, who would like to start and tell us about their superpower? CLARE: This isn't a comfortable question for me. I didn't really think that I had a superpower. But thinking about it, I think it is collecting useful information and sharing useful information, as well. I've been lucky enough to be going to programming conferences for quite a few years now and to local meetups, meeting lots of interesting people, hearing lots and lots of useful information. And at some point, I started collecting it all in a big pile of markdown documents and it's so useful to be able to search back through notes of all the talks I've been to. And I find that when I meet new people, sometimes they say things that remind me of other conversations and I start sharing links. I start sharing information and sometimes even introductions to other people, and once in a blue moon hiring consultants or contractors that way. So, it's turned out to be a really, really useful thing to do and I'm so glad I've done it. LLEWELLYN: Yeah, like the archivist. CLARE: Yes. I think where it probably came from is as a teenager, I ended up doing a lot of work on my family tree. And it was natural to take notes; you wanted to check facts and be able to refer back to them. Although that was a long time ago, I guess that's probably where it started. JOHN: Did you find yourself doing specific things to develop the skill as your career developed or did it just happen naturally? CLARE: I've always spent quite a lot of time trying to work out how to store the information and how to make it easy to edit on iPhone and iPad. I went through a few different technologies and ways of structuring it and adding hyperlinks to it and things like that. So, it just evolved. 
The particular app I use now is not of any great interest, but I ended up customizing it quite a lot for my convenience and mimicking that customization on a PC, coming perilously close to setting up a sort of OneNote-like thing, but something where I had much more control over the data and over where it was stored. ARTY: So it sounds like you're not only collecting this information, you're also organizing it in a way that increases its usability by structuring it in such a way that makes it searchable and easily accessible to sort of answer a question you might have with that information. Does that sound right? CLARE: Yes, I think that's right. LLEWELLYN: And also, Clare is really good at sharing that information. So it's not just coming in, but it's also going out. ARTY: So you've got an inflow and an outflow. So, I've got a whole set of questions here. But before we get into all those questions, Llewellyn, what is your superpower and how did you acquire it? LLEWELLYN: I think my best superpower is my ability to collaborate with others. The earliest thing that I think contributed to this was in high school. I used to juggle. And one of the things I really wanted to do was I wanted to juggle torches. And I knew that there's no way my dad was ever going to let me come close to fire. And so, using my creativity, I -- you know, my dad was a university professor. We lived at Michigan State and I found a local user group there, the juggling club. And I brought my dad there and I asked them to convince him that I should be allowed to use torches because I knew that I could not do it myself, but maybe -- you know how sometimes when someone else says something, it makes more sense than when you say it. Having me there was in no way helping this. So, he called over Laurie and said, "Show Llewellyn how to pass." And Laurie started teaching me how to pass clubs, which is an extremely fun activity. It's still the thing I enjoy most about juggling, but it's also something you can't do by yourself. Like you need another person there. And in fact, with the people who are best at juggling this way, it's not that they're so good at throwing tricks, it's actually that they're really good at catching that garbage that you're giving them and smoothing it out and giving it back to you, so that everything -- like the good passers are the ones where, when you pass with them, all your throws are caught and everything comes right back to you. And it was many years later before I had my first pairing session. And I think one of the reasons it felt natural there to work in this sort of very coupled mode was because I had already had so much experience with passing. I had juggled for almost a decade at that point. And then early on, I paired almost religiously. I had my own company. So, it was just the two of us. And it was very easy to get people to pair when you're hiring them. And then when I started going to conferences, we started realizing that the thing I'm doing is very different than other people. And this is where mobbing started. When I brought that style plus Coding Dojo back to [inaudible] who really took it and went with it. I've gotten the chance to work very closely with tons of people in pairing and mobbing. But I think probably it was just my desire to juggle torches that started this whole thing off. JOHN: That philosophy of being a good passer, it reminds me of the Unix philosophy "Be liberal in what you accept, and strict in what you emit". LLEWELLYN: Yes. Take anything, clean it up, give it back nicely. 
ARTY: Isn't that like Postel's law, the robustness principle, right? JOHN: Yeah. ARTY: Applied to juggling, juggling torches. JOHN: And pairing. ARTY: And pairing. Catching the garbage and smoothing it out. I love that. JOHN: Yeah. Actually, that's a really great metaphor because it makes it clearer, especially, I assume, if you're the more experienced person, or at least the more experienced-with-pairing person in a pair, that part of your job is filtering out the lack of skill that the other person has and building a cleaner environment for them to learn and pair in. LLEWELLYN: Yeah. I like the fact that you mentioned that skill and pairing because a lot of times we take skill as sort of an absolute, single-dimensional thing and it is very much not. ARTY: So this is really interesting. You have very kind of compatible superpowers here. We've got Llewellyn who's very observant about the dynamics of pairing and strategies on how to catch the garbage and smooth it out. And then we've got Clare who is collecting and figuring out how to share all this useful information. And she's been sort of this distiller, kind of distilling all the lessons learned and structuring that information, figuring out how to share it and make it useful. So, what type of work are you two doing together now and what's it like pairing together? LLEWELLYN: We have been creating the C++ version of ApprovalTests and it started about two years ago, Clare? CLARE: Yeah, about 18 months. Yes. LLEWELLYN: With a tweet. I had just started with a client that was doing stuff in C++ and my C++ was a little rusty. But also I knew I was going to want ApprovalTests there because it makes testing easier, and C++ stuff is usually hard to test. And so, I put a tweet out that said, "Is anyone interested in working with me?" And Clare responded; I'm still not entirely sure I know why. CLARE: At that point I'd gone to working part time at work. I was down to working just three days a week. And I, for a year or two, hadn't actually programmed any C++. I'd become a product owner, really sort of combined product owner, project manager work. And my C++ was feeling really, really rusty. And I think, in my identity, I'm a programmer at heart. And Llewellyn mentioned some work he'd done with [inaudible] and although neither of them knew it at the time, they'd put together an amazing two hour video with fantastic advice on refactoring, how to work with code. So, Llewellyn was kind of a programming hero of mine and I followed him on Twitter. In my memory, I left it a dignified half a day or something before replying and saying, "I might be able to help," and not admitting how rusty I was. But Twitter suggests that I replied actually a lot quicker than that. So Llewellyn immediately said, "Let's get on a phone call." And we just started screen-sharing and trying things out. I felt I contributed a little bit to it. I learned a lot and we just kept going. Just kept pairing together, remote pairing from a long distance apart. LLEWELLYN: Yeah. One of the things that I do a lot when I'm coaching is retrospectives. And after the very first time we paired, we opened up a mind map and we did like a mini retrospective on how that day had gone. And definitely there's a ton of things that I just couldn't even -- Clare's like, "I'm rusty at C++," but she was not rusty at all. She knew her stuff really solidly. But we had set everything up on my computer. 
And so, I prefer to pair in a way where, for an idea to go from your head to the computer, it goes through the other person's hands. But a lot of this stuff was going from my head to my hands, which makes Clare more like watching over me than participating as much. And that came up in the retrospective. And so, we had a good enough time working together that she wanted to do it again. It's always a plus. And so we set up another time and this time we said we need to focus so that it isn't that way. So we spent time getting her computer set up, getting stuff working so that it had to involve both of us. And we did a retrospective at the end of that one. And I remember, I forgot how you worded it, but it was something like, last time when I didn't know what you were doing, I didn't want to interrupt you because I didn't want to break the flow. That is exactly the wrong thing. Like the whole idea of two people working together is you want both minds, you want both understandings. And so when we switched that, then every time I would say something either stupid or unintelligible, which happens quite a lot, Clare would just sort of look at me like, I have no idea what you're talking about. And it would force that conversation. ARTY: I don't hear a lot of people doing retros at the end of the day around a pairing session. It's not something I normally hear. Could you tell us a little more about what process you go through for your end-of-the-day retro? LLEWELLYN: Yeah. What was it like on your side, Clare? CLARE: You're very proficient with MindMup, which is, to my knowledge anyway, a really lovely free web-based mind-mapping application. And we just fired thoughts in. And as we went, you structured the information and we started seeing patterns in what we were talking about and things like that. So it worked really well. ARTY: What questions do you ask when you lead a pair programming retro at the end of the day? You said you fired thoughts in, but was there any kind of structure at all of what you specifically asked or just 'how did this go today'? LLEWELLYN: I am a believer that you should put all observations that you can think of in there. At the time, I didn't ask anything very specific other than 'what did you observe'. I used to ask 'what did you learn', but it turns out that's a horrible question. It puts a bar that is not useful. Just anything you observed is good. But I very recently have taken to structuring that just a little bit more because sometimes when you look at an abyss, you see nothing. When there's too much, you're just like, "I need little things to focus on." And so, what I very recently started doing is seeding that retro with, like, in the center there's today's activity, but then: what did you observe about the product domain? What did you observe about the language? What did you observe about the patterns we're using, about the tooling, about the environment, about our teamwork? And this one is super useful but a little controversial, which is: what emotions did you feel? Because very often, especially I think in our line of work, we try to push the emotions out. But usually emotions are your subconscious saying, "Something here is important to me, pay attention." And so you put the emotion and then you put what happened right before you felt the emotion. Just "I felt bored" is not enough. You have to say, "When we were trying to get that script to run, that just felt tedious and boring." 
And then you can say, "Why? What is that boredom trying to tell you?" Or, "I felt angry when I tried to mention this and I didn't get listened to." What is that anger trying to tell you? Or, "I felt sad when..." The emotion is not enough. You need the emotion plus the triggering event, but that usually is super valuable. JOHN: Yeah. I was just thinking that. Like part of that mind map could be not only the things that happened during the session but like the emotional context of the two of you coming into that session. Like what were those baselines that could also affect how things went? LLEWELLYN: I hope you would do that before you start the session. There's a core protocol of checking in. So hopefully, coming into the session, you start with that. CLARE: As time has gone on, funnily enough, that initial part of the conversation has often grown quite a lot. We've ended up doing more than purely talking about the programming. So from my perspective, there have been some really helpful conversations, perhaps if I was struggling with a small thing at work or something like that. Just having an extra voice, somewhere to turn to for, "Have you seen this before? Have you ever dealt with this kind of situation? Can I talk to you about how I might handle something?" And that is not something I've ever really had experience of before. And it's been really helpful, and really, really supportive. For me that was a fantastic and totally unexpected benefit of answering a small tweet about programming C++ and the Google Test framework. It's been quite an honor to get that kind of support. In time, I hope I'll be able to share that with other people. LLEWELLYN: I mean, I've already benefited. I think it's always easy to see the things that you benefit from and maybe not as much the things that you're helping your pair with. But I've had so much benefit as well from Clare. It's not surprising to me. I think maybe the surprise is that you thought of it as answering a tweet. That of course is not why this has been helpful. It's because of the kinship that's formed through working together. And it's not surprising to me that that kinship has unexpected benefits. Clare sort of mentioned when she has like an issue at work or some of the struggles when you are starting [inaudible]. But we also just have -- usually we'll schedule like a two hour session to work together. And I think it's fair to say that we have not been keeping to that two hours very well. When we started, I was in Finland but now I'm living out in California. And Clare has been in Cambridge, UK the whole time. So we've had to adjust to make our timezones match up. It's usually my morning now and her evening. And sometimes I think I keep her really late into that, but we will have sessions where we spend the whole time just catching up on neat things. Just recently, I spent a month doing conferences out in Denver and out in Budapest and I got back and I hadn't spoken to Clare in a month. And I just had so much stuff I wanted to share. And she had just recently transitioned jobs. She had so much stuff she wanted to share and that's fantastic. And we got to share it. And when we were done we're like, "Well, it doesn't look like we're going to do much programming today." But that's just as wonderful and beneficial. It might not show up in the open source, but it's a huge advantage of having sort of a partner to work with. One day, we might actually meet in person. 
[Laughter] JOHN: It's interesting how that contrasts with the typically remote and asynchronous nature of a lot of open source work, where the teams are distributed and they're working in their own little areas, and then occasionally come together to integrate or whatever, but don't have that sort of dedicated time together with multiple team members collaborating more closely. LLEWELLYN: Yeah. And there's so much they miss out on, I think, because of that. Even just this morning, I was working on something right before we started this podcast and I was like, "Oh, I think I have a way this works." I pinged Clare and as soon as she saw it, she was like, "Oh, that's not a good idea." A lot of times with your first instinct, your first thing, you miss so many things, right? And so that first draft is just really rough. And having someone else there who can sort of look at it and say, "Hey, let's move this out. Maybe you're not considering that," makes what you produce so much better and more polished. And I am really grateful that so much -- like this project would be so ugly if it was just me. And it is so beautiful. Thank you, Clare. CLARE: I really appreciate you saying that. Thank you. LLEWELLYN: So yeah, when people work asynchronously, I think that not only is it less friendly, it's a worse result. JOHN: What's striking me about this is so many people talk, or at least the ones I've heard talk, about the open source community and how you're joining a project and you're joining the community for that project. And the way that tends to play out, at least as far as I've observed, and I'm not deeply embedded in a lot of open source work, but from what I've observed, is there is a community in sort of a lightweight sense in that there's all these people working on a common thing. But the community that you two have formed is a very, very different sort of experience for people. LLEWELLYN: Yes. One of the things we wanted to talk about was something that has sort of grown out of that kinship of us working together. That is really surprising to me. So ApprovalTests was originally written in Java back in like 2009/2008, somewhere around there. But I pair a lot, so it has moved. It's in .NET, it's in Python, it's in Perl, it's in Go, it's in Ruby. It's in all the languages. So, I've gotten to pair quite a lot with a lot of different people and I tend to work in this style for remote pairing. Something that is very unique is that we have been writing documentation. And so, Clare, I don't know why we're writing documentation exactly. Maybe you mentioned your superpower is being sort of the archivist. It definitely wasn't there in the very beginning. I think the first version of you wanting to try to explain this to other people came from your talk about Approvals at C++ on Sea? CLARE: C++ on Sea. LLEWELLYN: Yeah. Which is a conference out on the coast of England. You want to tell us about that? CLARE: Yes. By that point, I think we'd been spending about a year together, programming a few hours a week or a few hours every other week, or something like that. And in a lot of ways, I feel quite privileged to be able to spend my spare time working for free, effectively. I care a lot about testing software. One of my motivations in life is learning, but I felt that we didn't have a good story around explaining how people could use the software that we were writing. And I think I felt that if we were going to be spending all of this time on it, then I wanted to be able to share the information. 
LLEWELLYN: And that's just because Clare is a better person than me. Like I'm writing this because I want to use it. I'm like, "I don't know how to put enough thought into the rest of the world." CLARE: So at that time, I was employed. As I said, I was working three days a week, but I'm sure I was doing more hours than that. So it really was my spare time. Work was benefiting indirectly from it, but not directly. So yeah, we had quite different focuses at that time. And I'd also become involved in a fantastic initiative in the C++ community called #include, which is a community trying to make C++ a much more friendly language to learn, to provide a really supportive place for people to start to learn the language and to get stronger skills, but also providing a very different kind of support by trying to make C++ conferences more diverse, more welcoming, trying to encourage adoption of codes of conduct and actually having people there backing the codes of conduct as well. LLEWELLYN: Let me just explain that a little bit. Because a lot of the viewers -- viewers, that's an interesting word for a podcast. But a lot of listeners are probably familiar with the Ruby world. And the Ruby world is actually an amazingly welcoming and supportive community. The C++ world has not been that way traditionally. Having experienced both worlds, I feel like the C++ culture is a bit mean. It's more like, "I am going to beat you until you prove that you are worthy to be here." And in fact a lot of the C++ language is that way too. Like if you do something wrong, it's not going to give you a nice supportive help message. It's going to be like, "I'm not going to do anything and I'm not going to give you information, and you can beat your head against this code until it compiles." And so, #include is really trying to change that culture as opposed to the language. A lot of times we think of the language just as the syntax, but the language is the culture, it's the tooling, it's the syntax, it's the libraries. So sorry, I just wanted to sort of give that context to people who maybe haven't had experience with the C++ community. ARTY: I'm curious how, at the conference, introducing ApprovalTests and sort of the story narrative around testing that you created, how that was received, how people responded to it, what kind of things you learned from it. It sounds like you're a bit at odds with the current culture, and the things that you're doing would have an effect in changing that, sharing that information. CLARE: #include kicked off a couple of years ago. First of all, we were all on the C++ language Slack channel. But that wasn't as welcoming and as moderated as we liked. So we ended up setting up a Discord server instead. And that Discord server now has over 2000 people, many of whom are on it regularly. So my experience in the time that I was enjoying working with Llewellyn was I was seeing a really positive and supportive and diverse community online as well around C++. And the Discord server has good tools for moderation and things like that. So it's going really, really well. And people are reporting that they're finding it very welcoming. New people that join are saying their friends recommended it. They come and join it because of how helpful it's been. So outside of that circle, it's making a difference with, as I said, conference codes of conduct and things like that. 
But for me, having Llewellyn's support and the support on the Discord server for the first time ever gave me the confidence to submit a talk to a C++ conference. And that was this new conference on the Kent coast in England. I didn't have any idea at all of how it would be, what the response would be. LLEWELLYN: Your talk was awesome. [Inaudible]. CLARE: The talk went really, really well. The line that I took was, "Lots of people know about writing unit tests and mocking and all sorts of things like that. But where ApprovalTests is really incredibly valuable is if you can't even start restructuring your code safely in order to start doing the conventional unit tests." And it turns out that that's a really popular message. It's shown in lots of languages, but it's something that maybe, perhaps just because Llewellyn hasn't been involved in the C++ community, hadn't reached C++, I don't know. But there are many people who have seen this talk who have responded to it, for whom it was a really welcome message. I've had some really exciting conversations. We won somebody over within a day of them watching the video of this talk in February. They actually picked the Python implementation of ApprovalTests because that was relevant to their work. But with a small bit of information from me added to what was in the talk, within a day, they'd got complete coverage of stuff they'd been trying to test for years. And this friend on [inaudible] had managed to get total code coverage and do multiple refactorings, and ApprovalTests gave feedback multiple times when refactorings broke things. So it's given me a lot of confidence and a lot of excitement to be able to share ideas that may be well known in other languages, but they are new to C++, and they're really being welcomed. LLEWELLYN: And even just going into the talk, one of the things that sort of showed up is, so I guess it was my ego and Clare's caring about other people, but Clare would be like, "You know, [inaudible] the talk, I have to explain this stuff and I just want to make sure I'm getting it right; they need to do this so that it works." And I sort of replied to that with, "Oh, it's like you're showing my dirty laundry." I don't want it to have to be that way. I want it to be so you don't have to do this ugly thing to make something work. So, we actually started improving things for that. After the talk, she came back with, "I've got all this stuff I want to archive. I want to write some documentation." And so I guess maybe that first talk was you taking an alternative to documentation as a way of getting this information out there. And so shortly after that, we started writing more and more documentation. And another thing that came from that conference was just a really helpful phrase from Kate Gregory, which was help messages. Instead of calling them error messages, call them help messages. And it's surprising how that changes what you write. And so we had written a help message that sort of said, "Hey, stuff isn't configured right because we're blowing up, but probably it's configured wrong in this way. You need to add this piece of code." And C++ doesn't have a good package manager story. Every other language out there has a nice package manager. But C++ does not. And one of the ways we sort of get around that problem is we take all the code and we put it into a single header file. And of course it's horrible to write code that way, but it's very good to sort of deploy it this way. 
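To make the approach Clare's talk describes a little more concrete, here is a minimal sketch of what a first approval test against legacy code might look like in C++ with Google Test and the single-header distribution just mentioned. The generateReport function is a made-up stand-in for real legacy code, and while the APPROVALS_GOOGLETEST macro and ApprovalTests::Approvals::verify reflect the library's documented usage, exact names can vary between versions.

```cpp
// main.cpp - a minimal sketch, assuming the single-header ApprovalTests.hpp
// plus Google Test. generateReport() is a hypothetical stand-in for legacy code.

// In exactly one translation unit, ask Approval Tests to supply a Google Test main().
#define APPROVALS_GOOGLETEST
#include "ApprovalTests.hpp"
#include "gtest/gtest.h"

#include <string>

// Imagine this is existing legacy code whose current behaviour we want to pin down
// before we dare to refactor it.
std::string generateReport()
{
    return "Customer: Jane\nTotal: 42\n";
}

// Instead of asserting on individual values, verify the whole output.
// The first run writes a "received" file for a human to approve; once approved,
// any change in the output fails the test and can launch a diff tool.
TEST(LegacyReport, CharacterizeCurrentBehaviour)
{
    ApprovalTests::Approvals::verify(generateReport());
}
```

The point of this style, as described in the conversation, is that you get a safety net over code you cannot yet restructure into conventional unit tests.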
And so we wrote a little utility to take all our separate files and combine them into the single header file. And as soon as we wrote some C++ code, it broke. And so we broke production and that was not a good day. I remember, we were happy with the help message. We were proud of it. And I got a text very shortly after like, "Oh, [inaudible] is broken." I sort of looked at it and I was like, "Oh, it doesn't realize that the text in this multiline string isn't C++," because we had put C++ inside of the help message. And I was like, "I'm going to have to rewrite the entire parser." It was such a big task. I had no idea what to do. And of course now Clare is properly sleeping because it is way past midnight. And so I'm just stressed about it. And when we talked in the morning, we were sort of like, "What are we going to do about this?" And I was like, "This is really hard to solve." And it turned out it wasn't. There was a nice little hack that we were able to figure out. But because that was so painful, there's a process that you can use to harness that pain for good called safeguarding. And so we did a safeguarding exercise. And I think this is the only safeguarding we've done. Is that right, Clare? CLARE: That's right, yes. LLEWELLYN: Yeah, it was extremely valuable. Do you want to explain the process? CLARE: Yes. To me, the idea behind it is quite similar to a retrospective, but it's a structured conversation that happens around editing a shared document. And I'm used to retrospectives being perhaps people writing comments on Post-It notes and then talking about the individual Post-It notes. But my experience with that is that often some speakers are more confident than other speakers. And once a conversation in a retrospective starts going a certain way, it can kind of narrow down other people's inputs. LLEWELLYN: Yeah. The first thing we did is we made a shared Google doc and we just started typing simultaneously, answering three questions: Why did we write this bug? Why didn't we figure out that the bug had happened before we released the code? Why was it hard to fix the bug? Both of us were just sort of typing different reasons for this. And in the document we made -- we had the bullet points and then we voted: of all the things we just said, which ones are important? And so this is very similar to dot-voting except there's no limit to the dots. Just vote for the ones that matter. And after that we sort of had a list of the top three things that we both thought were important. And then we decided on a budget. And I think we chose 80 minutes. Is that right? CLARE: That's right, yes. I think I started with a day and you said, "No way. I don't have that much time." So we narrowed it quite a lot. LLEWELLYN: So then we had a budget and we chose 80 minutes because it divides by four easily. And then we did the same thing in the document of taking those top three things and brainstorming for them, "What could we do in 20 minutes that would make this problem less likely to occur in the future?" And we came up with a bunch of things there and then we voted again. The thing I remember most is that the thing I thought was the most important, Clare did not think was important. So I lost that one, but it didn't matter because so much good came out of this. And so, we budgeted that 80 minutes and we did those top three things. It turns out Clare -- I mean, Clare is really good at C++, but she has almost magic wizard skills when it comes to Bash. Our DevOps really took a step up after this. 
We had scripts that not only ran our tests but ran our tests against the combined single header file, which we weren't doing before, moved the header file, even opened a web browser with a window of Twitter with the tweet partially filled out to announce our release. And from that, we also added in the compiling of the docs. We were using something called MDSnippets. What MDSnippets does is it takes markdown and allows you to put a tag that says 'snippet' and a label, and then it gets that actual code from your source files, because it's really annoying to write code in markdown. And so, we would write the code in our tests and then we sort of put this label around it, and the snippets tool would take it from our tests and put it in there. And that became part of it: every time we would do a release, it would recompile our documentation. And that made us start to say, "Every new feature we write needs to include documentation as part of our definition of done." I want to say that we released something every half an hour to two hours of work, but I am probably optimistic on that. What do you think? CLARE: That's a bit optimistic, but it's not that far off the mark. LLEWELLYN: We started writing a lot of documentation. So every feature we write, we would write a piece of documentation for, and that has changed the way I write code in a way almost analogous to how I changed when I first came to test driven development. ARTY: So you talk about documentation, using this word documentation, and yet, given the other things you said about turning error messages into help messages, the documentation also has this shift in meaning. If we're going to take the time to put documentation in with our code and you've got this whole process of trying to make it meaningful, I feel like it needs a new word. How would you describe what you're trying to communicate with these messages that you're bringing out in the world? LLEWELLYN: I'm not sure if meaning is the right thing. But when I would write my unit tests, I was always thinking about, "Does my code work?" And the person that it needed to work for was me. And so there's two parts of that. One, I understand the code really well; and two, I'm just trying to show that it worked. Whereas when we started writing documentation, we were now thinking about how does someone learn to use the code? And that's not even necessarily a single starting point because you can have different people come in from different places that need to travel the path of learning. So the empathy changed quite a lot as we did it. ARTY: And they need help. They need help messages. LLEWELLYN: They need help. A lot of times when you would write about that path of learning, you would realize that you're missing a step. Like maybe you need part of your API that you don't want people using once they hit a certain expertise. But they definitely need it so that they can get to that point. Like sort of a stepping stone to where you want them to end up. But also maybe there's some ugly things that they have to do that you don't really have in your tests because you've sort of abstracted it away. But as soon as you have to start writing about it, you realize there's this ugly part they have to get through to get to the place that they care about. Because a lot of times Clare will just sort of ask me, "Is that really helpful for people?" And that will change our API a lot, because a lot of times the answer to that question is, "Well, if they do this and this, it is." 
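As a rough illustration of the snippet workflow Llewellyn describes above, the sketch below marks a labelled region of a test so that a documentation build tool in the MDSnippets family can copy it into markdown; a matching 'snippet' tag with the same label in a markdown file would be replaced by this code when the docs are rebuilt. The begin-snippet/end-snippet comment markers and the buildReport function are illustrative assumptions rather than the project's exact setup.

```cpp
// A test that doubles as a documentation sample (illustrative sketch).
// The begin-snippet / end-snippet markers show the general idea: the
// documentation build pulls the labelled region into markdown, so published
// samples always compile and run as part of the test suite.
#include "ApprovalTests.hpp"
#include "gtest/gtest.h"

#include <string>

// Hypothetical production function the documentation wants to demonstrate.
// A Google Test main() is assumed to be provided elsewhere, as in the earlier sketch.
std::string buildReport()
{
    return "line 1\nline 2\n";
}

TEST(Docs, VerifyAReport)
{
    // begin-snippet: verify_report
    ApprovalTests::Approvals::verify(buildReport());
    // end-snippet
}
```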
JOHN: It strikes me that the chain of events that you describe here is realizing that there was a problem with language in the way you were describing error messages versus help messages. And that led you to wanting to format your messages better so that they would have more information in them, which led you to having a new build system for your documentation, which led you to thinking about documentation more as a first-class citizen in your ecosystem, and has now changed the way you develop your code. So, tracing it back to that point of realizing that there is a deficiency in the language, and then flowing that out into transforming the way you're developing your code and making it friendlier to people and helping them understand it better, is a really interesting thread to follow. CLARE: And for me, it's had other benefits as well. Where I worked until recently, we had quite a sophisticated C++ documentation system. And I always intended to write documentation but often struggled to know what to write. And having experienced this different focus of 'don't think about what needs to be said about the code, but think about what somebody learning to use the code might need to know' completely transformed, in my last few months there, what I documented. And it actually made it more time consuming to write the documentation because it needed [inaudible]. But the end result was orders of magnitude more useful. It's another example of learning something [inaudible] and being able to transfer it to another area that I thought I was doing a reasonable job in until I discovered or sort of learned another way of doing it. So, that's been really satisfying for me. LLEWELLYN: And more time consuming in two ways. A, we have to write the documentation, and then very, very often we now have to change the code so the documentation that we wrote doesn't suck. This involves pride a lot, especially for me. So often when you have to document something, you have to write steps that you really wish you didn't have to. And the moment that we're doing that, we have to fix that. And somehow it feels even more real now that you write the documentation. If I didn't have to acknowledge it, maybe it didn't exist. But now I have to acknowledge it, I'm like, "No!" And so, it becomes longer because you're taking the time to write the documentation, but it becomes significantly longer because you now actually have to write code that is worthy of sharing with other people, where before, we didn't have to do that as much. CLARE: We've toyed with the idea of trying to write documentation first and then writing the code. I don't know if that would make it better overall, but maybe it's something to experiment with. We did [inaudible]. It's a thing that multiple people have talked about. It's not a new creation, but I think it's an interesting idea to experiment with. LLEWELLYN: Definitely writing it alongside writing our code has been transformative. I guess when we started, I thought all the samples would come from the tests. Definitely, we had to write some new tests to get some new samples. But then we started finding things that weren't appropriate in the tests. So, I guess the first thing we started finding was that you have to do some configuration stuff, and so it didn't really make sense to put that in a test. But we would put it in sample projects so we could grab our code from there. And then the thing that surprised me the most is we have some of our snippets coming directly from the API code. 
Not from the test projects at all, but right in our API code. And notably, in the reporters, we use sort of a chain of responsibility: we're going to try this tool, then this tool, then this tool. And we take that directly from production code. And that code of course has to be very clean and really intention-revealing code. But it was surprising to me that I would be taking documentation from the API and saying, "Here's how we're making our decisions. Let me share that process with you so you have an idea of what we're doing." JOHN: Yeah. That strikes me as actually particularly valuable because so often documentation is about the what and not the why or the how. And when you can illuminate that how, like this is the mental framework of decision making that the code is going through, you're going to get a much deeper understanding than 'if A then B', or a quick outline and then go read the code yourself, you can figure it out from the branching. LLEWELLYN: Yeah. ARTY: And then there's this pride aspect of, you're writing documentation for something and not wanting to reveal all these things where you feel like, "I don't want to talk about this ugliness. I want to be able to communicate this message of this beautiful, easy process of the way it should be." And so when you have steps in there that shouldn't be there, it reveals kind of the ugliness of the process, the things that you have to know about that you shouldn't have to know about. And so, you start to learn about the leakiness of your abstractions from the standpoint of what you need to understand to be able to work through this process. And if you can hide something behind an abstraction such that you don't really need to know about it, you can understand it from the perspective of the abstraction, then that's a good healthy abstraction. We can document it, we can explain it in terms of this metaphorical abstraction and we're all good. You don't need to know anything else. But as soon as you have to start explaining the guts for someone to realistically be able to work with it, suddenly it breaks that magic effect. And the process of documenting creates this revealing loop, if you will. It's a way to test, like you're testing something fundamentally different about the... CLARE: The experience, maybe. ARTY: Yeah. LLEWELLYN: And you're also testing the communication. Am I communicating at the right level here? There's a tester who started to learn to read code because she wanted to be able to look at the commits to figure out where to test. And she said, "In the beginning when I didn't understand the code, I just felt like it was because I was a tester, I wasn't a programmer. I didn't understand programming enough. Now I realize if I can't understand the code when I read it, it's not because I'm not a good enough programmer. It's because the programmers didn't write clean enough code." It's not your reading that needs to be improved so you can understand. If you can't understand a document, it's because it hasn't been written clearly enough. And that really shows up when it hits the documentation. Like, are we making things easy enough for people to learn? And we're doing better, now that we're writing the documentation, but we're still far from perfect. And so, one of the things that will happen is we will revisit our own documentation. One of the real things is, now that I have documentation, it's surprising how often I use it myself. I send links to it to other people to explain things. It's super useful. 
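The reporter behaviour Llewellyn mentions is a classic chain of responsibility: try each candidate diff tool in order and let the first one that works handle the failure. The sketch below is a generic illustration of that pattern under assumed names, not the library's actual reporter classes.

```cpp
// A generic chain-of-responsibility sketch of the "try this tool, then this tool"
// idea described above. Names here are illustrative, not ApprovalTests' real API.
#include <functional>
#include <string>
#include <vector>

struct Reporter
{
    std::string name;
    // Returns true if this diff tool exists on the machine and managed to show the diff.
    std::function<bool(const std::string& received, const std::string& approved)> tryReport;
};

// Walk the chain; the first reporter that succeeds handles the failing test's output.
bool reportFailure(const std::vector<Reporter>& chain,
                   const std::string& receivedFile,
                   const std::string& approvedFile)
{
    for (const auto& reporter : chain)
    {
        if (reporter.tryReport(receivedFile, approvedFile))
        {
            return true; // handled: stop trying further tools
        }
    }
    return false; // no tool available; caller falls back to a plain text message
}
```

Because the decision-making order lives in one small, readable function like this, it is the kind of production code that can be quoted directly in documentation, which is the point being made in the conversation.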
And often we will read our own documentation and just sort of be like, "What were we thinking?" Just a week later, it's surprising how the whole thing swaps out of your memory and it's almost like you're a new person reading it. And very often that can be helpful to figure out the things that we thought made sense that clearly don't. ARTY: I have another question too, but this is a bit in a different direction. This came up a while ago when I was taking notes, and I noticed when you were talking about the retrospectives, you made a point of asking lots of 'what' questions - what did we observe about all of these different things. And that when you asked about learning, or like what did you learn, you found that that was a bad question. LLEWELLYN: Yeah. ARTY: And then when you did a retro on a bug in the safeguarding process, all your questions became 'why' questions as opposed to 'what' questions. LLEWELLYN: Yes. ARTY: I'm curious because I feel like there's similar dysfunction around leaping to 'why' questions before you've done the observation. And so, I'm wondering why you make that context switch to 'why' questions as opposed to 'what' questions, and what are some of the differences you see come out of that? LLEWELLYN: The first thing is when we're doing the retro at the end of the session, we ask the what questions, but then there will be conversations; the why conversations sort of come out of the what. For the safeguarding, we did not do that retro while the bug was still in play. We solved the bug before we did safeguarding because, (a), who has the capacity to critically think and reflect while things are still burning? So, a lot of the what had already been figured out because that's what debugging is. Debugging is that gap between what you think the computer does and what the computer actually does, and in that gap, lots of things happen. And so once we figured it out, a lot of the what had already been answered and we were now in a place to think about the why. The other thing that's very different is in a retrospective, you're trying to get at all these things that happened, "Let's figure them all out." Whereas the safeguarding is much more focused on what action we are going to take to make the future easier. So, the idea of safeguarding is that it's not that we messed up. I mean, we messed up, there's no question about that. But while we messed up that one thing, there were hundreds or thousands of other lines of code that we got right. So instead of saying, "Why aren't you perfect?", it instead says, "Why was it so hard to make it right? How can we make it easier?" Have you heard the term the pit of success? Like the idea that you can fall backwards into success. And so, we want to make the terrain easier to go through so that we just sort of come to someplace that works naturally. And so, the safeguarding is much more action-focused, whereas the retro that we're doing is much more awareness-focused. We want to be aware of what just happened so that we can get some self-awareness, get something there. Whereas with the safeguarding, if we do not do the actions that come out of it, there's no reason for us to do that retrospective. It is all about: we are going to make the environment easier to succeed in. ARTY: Just with the question you asked, why is it so difficult? 
My brain immediately switches to 'what were some of the things that I observed specifically around the difficulty', getting myself out of why mode and thinking in what mode immediately, because I feel like the same thing happens when we ask why questions: our brain goes into trying to explain as opposed to trying to observe. And in a bug retro, I do my [inaudible] retros. This is my thing. I'm always trying to ask what questions: what made troubleshooting take so long? I've seen other folks too, gravitating toward what questions and away from why, because we have this ingrained five whys process that comes out of lean. And the folks that I've talked to that are doing more brain research-y type stuff say why basically invokes our explaining reflex, and what questions invoke our observation reflex. And you made such a point about the importance of observation. I was really struck by the shifts there, and I'm wondering if unraveling those whys into more observation-oriented questions might be a more productive method to identify observations specifically around the safeguarding activities. LLEWELLYN: I have the document. One of the things that we both agreed on was the reason it didn't get caught sooner. The reason this bug made it to production is we were unsure if the starter project, which is where it first surfaced, was even part of our CI. So, our CI worked on one project, but of course there was a downstream project from us and that wasn't even part of our CI process. We'd been keeping manual scripts of 'here are the things you need to do', and our manual script had too much manual stuff in it. We both agreed there were too many steps that were manual in this process. And then we also agreed that the order of those scripts was wrong. Those were the big things that came up for why we thought this happened. And then we brainstormed what we should do about that. We decided to make it so that we actually check things off the list when we do them, which we're no longer doing. We decided to remove and automate some of the steps, which we did actually quite a lot of. And then we added commands to do things like dating the release, a lot of those little manual steps that we had to figure out. We'd have to put version numbers in and stuff, and now we have a global variable where we say, "Hey, here's what the new version is," and it pulls it from there. And we rearranged the script. And those are the four things that we actually acted on. And so, I think that you might be right that we would get more insight into things, maybe, by observing more, but I'm not sure that we would get more action. Like, that was really painful. That pain is a good source of energy to fix things. CLARE: The other thing is at that point, we probably didn't have that many users of the library, so we weren't really worried about it. It probably wasn't affecting that many people externally. I'm interested, though, in the difference between what and why, because looking at Llewellyn's blog post about the steps to ask in safeguarding, you described it as 'why did we write the bug?', but the wording is actually 'what caused us to write the bug?'. In some sense, it's not the what caused versus why. Maybe that's just a wording thing. But if I ask you 'why', it makes it harder for people to answer the questions than switching it for 'what caused'. It's a tiny change in wording, but it probably made us more open as we filled in those gaps in the document. 
ARTY: There's also a paradigm framing effect in the questions when you're thinking about mistake proofing or safeguarding versus what can we do to make these outlier phenomena more observable. How do we make it so whenever anything goes wrong, we're more easily able to identify the root cause of what it is? And so, I feel like the questions we ask around these things, like one of my common questions is what made troubleshooting take so long? And focusing on what kinds of things we can do to reduce troubleshooting time as opposed to mistake prevention, it's around engineering for observability as kind of the prime focus and just assuming that you're going to make mistakes. How do we make it easier to observe? And it doesn't mean that mistake proofing isn't... LLEWELLYN: You can also make it, though, so that you don't make mistakes. ARTY: I'm not saying... LLEWELLYN: I mean, so that you make fewer mistakes. ARTY: Yes. I mean, I think they're both valuable practices, but the frames and the questions lead to different answers, different insights, different improvements. LLEWELLYN: Yes. 'Why didn't we catch it sooner?' is all about observability. CLARE: This is really interesting and it reminded me of something I saw recently, a different way of running retrospectives. I was really used to a sort of four quadrants - what went well, what didn't go well, what made us happy, what made us sad, that kind of thing. And I saw somebody run a retrospective completely differently, which was to say the same events would have different emotions for different people. So completely take away the what made us happy, what made us sad. Just write down things that happened, write down observations without any emotion attached to them. And I thought it was really, really healthy and I wished I'd learned it quite a long time ago. So, I'm really interested in, you talked about the paradigm framing effect. I guess that means how you frame a question directs how people answer the question. Is that right? Have I understood that right? ARTY: Yeah. I think the questions we ask influence the thoughts that come to mind. We ask different questions, we get different thoughts, we get different insights. And so, having a really good list of questions to ask can help you bring different sorts of answers and insights to mind about things. The questions imply the underlying models you use to compare and understand your experience. Like for example, you mentioned the question, Llewellyn, of why did it take us so long to find this bug. There's also this question of, "Okay, now that we know that the bug is there, what made it take so long to troubleshoot?" And in the DevOps space, you see a splitting of metrics: the time to identify that something is wrong, with the delay there, versus, once we've identified the thing, how long it takes us to recover and get a fix into production, say, and uncovering that fix. There's a whole other set of dynamics around which people have the knowledge about the things, around our ability to observe what's going on, around the complexities in the code where there's an increase in the likelihood of funky interacting parts that can cause weird emergent behaviors. And there's so many dynamics to the causes that I think when we try and ask why questions, it short circuits our mind to kind of look for blame in a certain frame. I think we miss a lot of the opportunities to observe the dynamics of the system and observe it in the context of that rich complexity. 
Whereas if we ask things in terms of the different dimensions of risk, we have a rubric of questions that we could ask around the different factors that influenced the likelihood of these things. Like we could ask specifically about knowledge and familiarity around whatever the understanding was that was required to understand this bug. Which brains was that information in? Was one person aware of certain parts of the system and another not? I mean, in this case, you've got two people involved. So you've got kind of a smaller system in terms of information spread and you've been pairing on other work. LLEWELLYN: Yeah. So it's almost closer to one person. Like one entity, one pair. ARTY: Yeah. And that's one of the cool things about mob programming, right? LLEWELLYN: Yes. God, yes. Because the communication tax just goes way down. Like there's not, how do I get everyone aligned? We are all aligned. And I'd like to mention this: all my holes in C++ are completely covered by Clare. My lack of knowledge doesn't matter. It's what I have to bring to the table that matters. So all my benefits are there and all my negatives don't count. And that's just huge. And I guess we haven't really talked about this, but Clare has a lot of C++ knowledge from, what, 30 years of C++ you've been doing? CLARE: Twenty years of C++. LLEWELLYN: Thirty years of programming, 20 years of C++. CLARE: Yeah. LLEWELLYN: But you also have a really deep and rich cultural understanding of C++. There are a lot of times when I would do something that completely compiles and works, and Clare would just shake her head and be like, "No, that is not how you do that in C++." And there is no way I could know any of that, like I wouldn't even know that what I'm doing is a faux pas. Is it possible to have a social faux pas in code? JOHN: Yeah. LLEWELLYN: Yeah. And so this is where, because we're sort of one entity, so many of the holes in my knowledge just don't matter. And so it's much easier to work with us as an entity than as two separate people. And with a mob, that also applies to the team because now you have a team as opposed to five individuals. Also, you mentioned that maybe the framing might slightly depend on the safety that is felt. I mean, the thing that's resonating with me a lot through going through all this history has been just how valuable and how wonderful the kinship with Clare has been. We've been able to have what I consider difficult conversations, conversations I would not feel comfortable having with a lot of other people that I know. It was easier to use the why questions because of that safety and understanding; we were building on top of that. I want more things to come out of the safeguarding higher up the chain. I agree that observability is important. I agree that the ability to fix things quickly is important, but the things that make it so that we don't do it in the first place are really valuable to me. And I'm glad that we got a lot of things that made it so we are no longer making mistakes. Not because we're being more disciplined. Any system that relies on discipline is, I think, fundamentally broken. So, I'm really glad that we just took away things that we would stumble upon. So, if the code is so complex that it's easy to make mistakes, I'd rather see the code become simpler rather than have better ways to figure out where the mistake is. CLARE: I appreciate what you said about my C++ skills, but I wanted to point out that C++ is a rapidly moving target these days. 
Up until 2011, it really hadn't changed very much. But now, every three years, there's a new version. Major, major new additions to the language come out. And my C++ knowledge goes pretty well up to 2011 but not beyond that. Very little beyond that. I've enjoyed this work on open source projects so much, and I've enjoyed speaking at conferences and sharing what I've learned so much, that I recently took quite a big leap and decided after 31 years, it was time to change some things, time to learn some more and catch up more with C++ and spend more time in the C++ community. So, the strengths and the knowledge I've got from this working relationship were a factor that contributed to my decision to become independent so I can focus on this learning a lot more. Lots of useful things coming out of this conversation. Llewellyn's very, very kind about my C++ knowledge. But any more up-to-date developers might look at our code and go, "Yeah, but you're not doing this, you're not doing that." It will come over time as I catch up and learn more about newer developments in the language in my newfound learning time. LLEWELLYN: So to quote you, Clare, from one of your recent retrospectives: you don't have to be perfect. That is not a requirement to be great. JOHN: At the end of the show, we'd like to do what we call reflections, which is to talk briefly about the things that have most struck us from this conversation, the takeaways that we're going to have and just the new ideas that we're going to be turning over. I can get us started with this. Two things are definitely going to be sticking with me. One is the idea of using a retrospective at such a small scale. Normally I'm thinking of them as something you do after a project or after a sprint or sort of a longer term, bigger project thing. But doing something simple like that after a conversation is a really interesting idea and it's actually something I'm going to start doing with the people that I'm mentoring, and use that as a way of feeding back that learning into what we're doing so that our relationship and the way we do it can improve. And I think the other thing was also just the difference in level of community. There's the regular level of most open source involvement, which is you're working on tickets, you're submitting pull requests, maybe there's a discussion list at some point, versus what you're doing, which is like turbo community, where you're building kinship and you're working together deeply and learning to communicate. And really the code is a fusion of your two minds together rather than lots of people contributing lots of little things. I sort of wonder if more projects would be served better by that sort of work. I don't know if it's sustainable or even possible with a global team, especially more than two people spread out across the globe. Finding a time for that is hard, but I do wonder what the possibilities are there. LLEWELLYN: Like I mentioned, just this morning I was working on it and I was working with some people out in Switzerland. Most of the recurring work happens with me and Clare, but both of us have done pairings with others, usually one-off instances. They're usually like half an hour to two hours, which seems to be the amount of time that somebody is willing to spend on their own problem. Where it is actually really beneficial is, if you have a problem with something that we have written, you bring expert knowledge about why you have that problem. Like, you have expert knowledge about that problem. 
And sometimes that includes what your environment is set up with, so that we can even reproduce the problem, which could take many hours to set up, whereas when you just share a screen, you're like, "Look, it's right here." So, we do actually do quite a lot of one-off pairings with people around the world and that is so fantastic, and I would encourage any open source project to do it. You gain so much from actually seeing how your users are using your project and fixing it with them. It is huge. I'm not sure if that's the same as sustainable, but I think it takes as much time to sit with somebody and fix a problem as it does to read their pull request. So I don't think we actually save much time through normal pull requests. I think that if you open up that channel of communication, you will be amazed at what you benefit from it. CLARE: For me, I want to read more about the paradigm framing effect. I'm really, really interested in understanding how changing the wording of a question can change, well, I was going to say the response that you give. But you talked about it even framing the thought processes that follow the question, and I think that's a completely new thing to me and I want to understand it better and see how I can use it, maybe for work, maybe outside of software development activities as well. I think it seems potentially really valuable. LLEWELLYN: For me, I don't want to say I wasn't appreciating how lucky I am to have people like Clare in my life to pair with, but this definitely made me realize, like, not only am I appreciative of that kinship and these people who enjoy programming with me, but how rare that actually is. The fact that I have someone that almost weekly we can get together and enjoy programming together. I think I was appreciating that that's great. I maybe wasn't appreciating how lucky I am and how rare it is to have that in my life. I guess my main reflection is just, "Yay me!" I'm just so grateful to have this friendship and have this kinship. CLARE: Thank you very much. LLEWELLYN: Thank you. ARTY: I think the thing that stuck out the most for me is just how much taking the time to think about what you're doing from the perspective of sharing affects what you do. So when you're writing code and you're thinking about someone else having to learn that code and consume the things you wrote, how much that affects what you end up doing and how you do it. And just by putting the documentation on things -- I feel like documentation isn't even a good word for it, because it's like we're going to write down information about the things, but that's not really what we're doing. We're crafting the experience for the person who is consuming our work. We're crafting the experience, we're designing the experience. And I feel like that's what Clare has brought to your work, Llewellyn, is this... LLEWELLYN: Absolutely. ARTY: Yes. That sort of sharing-oriented mindset of let's actually think about what the experience of learning is going to be like and how we can take all the things that we're doing and build our work for other people to use in the world. Let's take it out there and build a story around how this should be used and help people to overcome those hurdles to get it out there. So, I think it's really brilliant to see how Clare brought that to you and created this team and these new skills, this new perspective from you two working together on this. So, I think that's really great. LLEWELLYN: Yeah, that's been really great for me. 
Like I have this little Clare that sits on my shoulder and I get to hear her voice every once in a while. ARTY: Well, thank you two for coming. This has been really great talking to you both. And thank you for joining us on the show. CLARE: Thank you very much indeed. LLEWELLYN: It's been fantastic. JOHN: Yes. Thank you so much.