REIN: Welcome to episode 65 of Greater Than Code. I am Rein Henrichs with my cohost, Jessica Kerr.

JESSICA: Good morning. I am happy to be here today with Jamey Hampton.

JAMEY: Thank you, Jessica. And I am also very happy to be here this morning with the illustrious Coraline Ada Ehmke.

CORALINE: Illustrious, eh? Wow.

REIN: Ooh.

JAMEY: Yes.

CORALINE: I’m not feeling particularly illustrious today. I’ve been out of practice as some of you may know. I’ve been traveling the world and I’m really happy to be back today. So, we have a couple of amazing guests with us today. First off, I’d like to introduce Marian Petre. Marian picks the brains of experts to find out what makes expert software designers expert, how people reason and communicate about design and problem solving, and how they use representations in their reasoning. She’s Professor of Computing at the Open University in the UK. And she holds degrees in Psycholinguistics and Computer Science. Welcome, Marian.

MARIAN: Thank you. Thank you for having us on the show.

JESSICA: Yes. And André van der Hoek is a programmer at heart who loves talking to and working with software designers and software developers to create new tools that help them be more effective and efficient. He is a Professor of Informatics at the University of California, Irvine. He holds degrees in Business-Oriented Computer Science and Computer Science.

ANDRE: You make that sound a lot more than what it really is. But we’re really glad to be on the show.

CORALINE: Our standard opening question, and we’re going to start with you, Marian, is: what is your superpower and when and how did it emerge?

MARIAN: My superpower is picking brains. It emerged pretty early on because I had a huge curiosity and I could not stop asking people why they did what they did.

CORALINE: Does that mean you’re a proponent of the Five Whys approach to problem-solving?

MARIAN: It is a good approach to problem-solving. I’m not a proponent of lots of things.
I feel like one of the things I’ve had to learn over time is actually standing back from over-judging things and just – so a lot of the work that I do is about reflecting on what other people do. So, I’m leery of making endorsements for things or proposing things because I feel like what I really should be doing is eliciting from others and then reflecting back what they say.

REIN: So, you’re more of a descriptivist.

MARIAN: I think it’s more than description. I think part of what we do is yes, about describing, but also about understanding, inducing the patterns from what we describe in order to understand the underlying processes by which people solve problems.

REIN: So, trying to figure out the system models that people use that underlie their thoughts.

MARIAN: That’s right. And it’s not just in terms of what models they employ but how cognition works. And how it works in the context of something as complex as software design.

CORALINE: It sounds like I’m going to have plenty of opportunities to bring up my very favorite book ‘Surfaces and Essences’ by Douglas Hofstadter, who goes into a lot of deep thinking about how we think and how metaphors fuel our thinking. So, listeners, I know you’ve heard about this book before and I’m going to keep talking about it until every single one of you has read it and can have a private discussion with me at a conference or meetup about the book. So, you’ve been warned. How about you, André? What’s your superpower? And how and why did it develop?

ANDRE: I don’t know if it’s a superpower per se. But people have described me as identifying and seeing patterns. So, there’ll be lots of stuff that’s happening and I’ll be the one who digs out from all the individual occurrences and go, “Here’s the larger pattern that seems to be playing out.” So, a little different than Marian, but stepping back, looking at it, and going, “This is really what’s happening.”

JAMEY: It sounds like that would make you really good at board games.
ANDRE: I will confess, yes. So, I must confess there were a lot of board games in my childhood. I might have won a few in the backyards back home in the Netherlands. That eventually did translate also into almost an addiction to computer games, to the point where I eventually actually had to toss them off my computer to get some work done.

JAMEY: I can relate to that.

ANDRE: There are some really good games out there.

CORALINE: You both talked about patterns and pattern identification. And you’ve obviously worked with a lot of software designers and developers. And at least in my professional experience over the past maybe eight or nine years, I don’t see a lot of design going into software development anymore with the retirement of the notion of a software architect. I think a lot of that design thinking went away. And what I see a lot more of is people taking established patterns, and by established patterns I mean even new patterns that are proposed, we sort of adopt wholesale different methodologies and different ways of solving problems. I think the explosion of the microservice idea is an example of that. Where do you see design thinking happening in smaller companies or more agile companies or in the startup world as opposed to the larger companies with more [inaudible] roles between design and development?

MARIAN: So, one of the issues is it’s not about the size of the company, it’s about the nature of the team. And this is where I draw the distinction between the high-performing teams and, without any intention of deprecation, mundane teams. So, a lot of my research has been about understanding the nature of expertise and high performance and what distinguishes them. One of the things that distinguishes them is exactly design [inaudible]. And the design thinking runs not just at the point of deciding what the architecture is going to be but all the way through the development and design [inaudible] process.
Because there’s designing that’s needed throughout that process. So, a lot of this has to do not with – it does have to do with the nature of the organization. But in terms of what its culture of development is and what practices it embeds in terms of understanding what matters about [software], what values about software, how it incentivizes and prioritizes what work to do. A really good example might be things like how people deal with error. So, in a company that prioritizes, or an organization that prioritizes actions and that prioritizes resolution of issues as primary, you get very different behaviors from a company that prioritizes quality and design. Because in the former, somebody says, “I see a bug. Let me smack it,” and they don’t take time to reflect, because what they’re being assessed on is very often company issues [resolved]. Whereas in the latter, a high-performing team, what you get is somebody saying, “Ah, that’s interesting. Why did that happen? Hang on a minute. That’s a really silly mistake. I wonder if we made that mistake anywhere else? Let’s pick up and see if there are other occurrences. Let’s see if it associates with anything else and whether [inaudible] something about this communication and [inaudible].” And by doing that, they very often come up with deeper issues. And it takes them longer to resolve the initial issue [inaudible], but they very often get much [inaudible]. So, a lot of this has to do with the culture of design and the attitudes [inaudible] in the organization rather than the kinds of problems people are addressing or the phase of the design and development [inaudible].

CORALINE: And I’ve definitely worked at companies that valued things like lines of code and number of issues fixed and attached performance metrics to those things. And I do think that is indicative of the kind of culture that those companies have. And that kind of thinking perpetuates systemic problems that become very long-lived.
So, that’s very interesting.

MARIAN: It ties into something else that I find really interesting right now. On the one hand, metrics are really powerful. The notion that evidence influences our design behavior and our design decisions is actually very powerful. The problem is when you get what I call Error by Proxy. And people start looking at the metrics as the goal rather than metrics as evidence for a greater goal. And then they start just [inaudible] metrics. So, there’s a really interesting dynamic there that says on the one hand, keeping track of things like [performance] metrics is going to be really helpful in development culture. But if you flip it and make metrics the focus, you end up with this kind of slightly misaligned, misdirected dysfunction, one that prioritizes different things. So, the Error by Proxy phenomenon is about not really understanding the value of evidence and how it’s supposed to be used and understanding its role in a greater process that has very clear goals [inaudible].

REIN: I’ve always seen this as an ends versus means failure where the end goal may be to increase customer satisfaction or to reduce error rates and the means is that we’re going to measure this specific thing and optimize it. And then what happens is optimizing that number becomes the end in and of itself and we forget about the original end.

MARIAN: Exactly. That’s a beautiful alternative way of articulating it.

ANDRE: So, I want to go back a little bit to the original question as well, because there was sort of a little bit of lamenting in there that design seems to have gone away. And I wouldn’t mind bending it around a little bit whether it really has gone away or whether the amount of knowledge that’s [seeding in] design has shifted where design might be taking place. So, one of the things I teach in my classes a lot is it’s perfectly fine as a designer to steal stuff within limits, within legalities.
But if other people have had good ideas, it’s also possible for you to use those good ideas. So, why would you need to redesign? Good designers put the effort where it’s needed. So, if there is a pattern or an architecture that they could use, why would they go about reassessing? They will reassess it, the experts will. But it will happen much more quickly than going to the whiteboard and saying “We have a blank slate. Let’s see what we can do.” It’s a different conversation. One that’s asking, “Does this work for us?” And if so, it’s okay.

CORALINE: I think that’s so contextual, too. We’re having discussions at my work currently about API design. And one of the things that a couple of the engineers on the team keep trying to do is remind people, “We are not Amazon. We are not Netflix. We’re never going to experience the kind of volume that those companies do. So, the patterns that they have established are not necessarily the appropriate patterns for us to use.”

MARIAN: And there are different kinds of problems that people are addressing with software, in the same way that in physical architecture, the architecture of buildings, there are very different problems that people are addressing. And somebody who builds residential housing will have a different task on a normal day than the day somebody comes along and wants to build a fabulous mansion that’s an architectural [estate]. So, there are some kinds of problems where we’re in a known problem space. We have a very good understanding of the problem space. There are very good solutions already. And what design demands in that context is recognition of that and pattern-matching. A lot of decision-making by pattern-matching. That’s very different. And then the design that happens is kind of incremental design, step improvements based on small changes in that environment, in that context. But in radical design, it may well be that there’s a technological shift. And people need to rethink in a very broad way.
Or there’s a very new context of use that hasn't been considered before. And in that case, there’s a much bigger tendency to go back to the whiteboard for much more open thinking, because there isn’t already a context of knowing. And it’s not incremental. It’s much more fundamental. And it’s not necessarily inappropriate to have elements of both in any given design process, because there are things that we know well. We might as well use appropriate solutions in this context.

JESSICA: I think you said – did you say radical design?

MARIAN: Radical versus normal.

JESSICA: And radical is like greenfield, start at the whiteboard, no one’s done this before?

MARIAN: Right. Radical is the notion that there’s some kind of disruption, that it isn’t just part of an understood problem and solution space.

ANDRE: Yeah, although I would also say the whiteboard also happens in routine, normal design. I think separating the problem side of things from how you approach it; radical design will have more of a sense of we’ve not seen this before. There are some new constraints. There’s a really new technology that’s enabling new things for us to do. How do we go about it? And I don’t want to hop on the ML bandwagon here, the machine learning bandwagon, but it is in some ways being considered a radical change in terms of what we might and might not be able to do with that. So, you see a lot of companies grappling with “Hey, we should all adopt this. But now we actually need to spend more time designing, because how do we really integrate this into our systems? And is it actually the right technology for the problem that we are trying to tackle?”

REIN: Okay. I’ve got a question. What drives the need for radical design? When does normal design stop being effective?

ANDRE: That’s a [good] question.

JESSICA: And by normal design do we mean incremental? Here’s a solution. Let me tweak it for my spot.

ANDRE: Yeah, so I’d say for normal design, it’s something that’s familiar.
It’s something where largely I know what – I’ve built a database for clients before. And now it’s a slightly different set of clients, right? But radical design is much more, “I have not done this before. I don’t know the domain as well. I’m not as familiar with what’s there. So now, my set of tools and methods that I need to bring to bear with that is going to change. Because I don’t know the questions to ask. I don’t know the constraints that are there. And so, how do I go about this?” So, I heard an interesting example just the other day where somebody who worked at JPL said, “Well, if the spacecraft is in space, bits may randomly flip. If you don’t know this before you send the spacecraft up and you have to consider that, how do you go about that and what other things are happening?” So inside JPL, there’s a ton of knowledge. But if you give the same design problem of we’ll send a satellite into space to a different group of people, for them that’s now a radical design. But JPL has by now developed a more normal form of design, because they know what happens in space. They know what happens in a spacecraft. They know how to send them up. But each spacecraft is still unique with its own constraints, its own weight, its own instruments, and a bunch of other things. So, it varies on who’s doing the design whether it’s radical for them. And also, the setting that’s there.

JAMEY: That’s really interesting that it’s so dependent on the personal experience of the person who’s doing the design. How does sharing ideas factor into this for you? Like if the people at the one space place, where this was more normal for them, were able to share ideas, how would that transform radical design into normal design for someone else?

MARIAN: So, I’d like to twist your question a little bit instead of answering it directly.
Because one of the interesting things that happens when you have a context in which you have, for example, people that are very well-familiar with the problem and people for whom it is a surprising new problem, is the dialog between them. Because in a sense, the naivety of the people who are unfamiliar with the problem challenges the knowledge, the assumptions, that are embedded for the person who’s very familiar with them. And very often, that’s an incredibly productive and creative dialog.

CORALINE: I’ve definitely seen that in my experience with mentoring. Mentoring is something that everyone should be doing and everyone has value as a mentor. But as someone who’s been doing this for over 20 years, when I work with someone who doesn’t have the same experiences that I have, I find that they will challenge a lot of the things that I do via muscle memory. They’ll say, “Oh, why are you solving that problem in that way?” My first response is, “Well, that’s how I’ve always done it. That’s what my brain immediately jumps to.” But having those questions asked makes me challenge my own thinking and wonder, “Oh, is there a better way to do this than the way that I’ve been doing it repetitively for the past 10 years?”

MARIAN: That’s right. And you don’t necessarily want to have that conversation every day, every time you face one of those problems. But to have it periodically can be incredibly refreshing. And one of the things that we see with really high-performing teams is there’s usually the guy we call the guru in the corner, a really experienced software developer, who takes that role, who plays the fool as we call it and asks those questions. “Why this? Why are we assuming that? What do we think we know? And how do we think we know it?” And does interesting things like saying, “Okay, let’s relax the constraints. We’re struggling with this.
What if we took away the constraint that’s really hampering us and see what insights that brings along?” The classic that I heard very, very early on when I began studying experts was somebody saying, “Well, if gravity is the problem, let’s pretend there is no gravity.” And the design process proceeds from there in a way that opens up new options, which can then be reassessed in the real context that includes gravity.

REIN: Are you familiar with Virginia Satir’s change model?

MARIAN: No.

REIN: So, Virginia Satir was a family therapist throughout the 60s, 70s, 80s, and 90s. She has this change model that I’ve – every time I look at the revolutionary or radical processes where an existing status quo has to be broken, I think about this change model because it maps in my mind perfectly to what happens. And the idea in her change model is that you have an existing status quo, and then a foreign element enters the situation that the existing status quo can’t respond to. So, your existing methods or ways of thinking aren’t capable of dealing with this foreign element, whether it’s a new problem you have to solve. And then what this causes is first resistance, because it threatens existing relationships and power structures, but then a period of chaos and unknown where existing relationships shatter and all the expectations may not be valid anymore. And where existing reactions that we have cease to be effective. And then there’s a period of chaos that’s finally resolved by what she calls a transforming idea, which shows how the foreign element is actually something that benefits the group.

MARIAN: Sounds highly pertinent.

REIN: Yeah. As I’ve been looking into systems thinking and things like the Toyota Way, which focuses on aligning processes with the underlying values of the organization. And these ideas from family therapy from 50 years ago just seem so relevant to me. Because they always focus on the communication.
Relationships are defined by the ways that people communicate with each other, is sort of her core premise. And I’ve just always found it so relevant.

MARIAN: It takes us back to that opening thing, which is the role of the human in software development. And ultimately, the role of the human in software use, because they’re all socio-technical systems. And understanding that socio-technical context is really profoundly important in terms of us succeeding in the long run.

CORALINE: I’m also interested in another aspect of this. And you also mentioned, Marian, the guru in the room and a different role that the guru plays. I think that power dynamics have an incredible influence over the way we solve problems. And the secret is that it’s always about power dynamics. So, with the sunsetting of the idea of the software architect, that leads to what I think of as a democratization of software design and architecture. How do we ensure that everyone has a seat at the table in those discussions and that we’re not simply relying on the guru to guide us to the right decisions?

MARIAN: Well, the high-performing teams are characterized by really valuing team cohesion and communication and the integration of people and perspectives. But I think the open source community has really shown the way in how to engage with this democratization. Now, open source is not one thing. There are many, many different ways to play open source. But many of the communities have thought very hard about how to structure the communication so that new people are brought into the community, develop appropriate expectations, and can make valued contributions from the beginning, that they’re not put off because the entry cost is too high.
And they work very hard in terms of articulating expectations, articulating things like standards for commentary, code norms, and so on, in a way that I think provides an interesting model about how to engage people of very diverse backgrounds and potentially of different experience and ability in generating good code. So, one of the communities that we studied, for example, has a really nice role for new people, which is to provide documentation rather than to provide code as their initial activity. And that’s really powerful because on the one hand, they can do it. It’s work that needs to be done that tends not to get prioritized. So, it’s important but often neglected work. It’s something that engages them with the codebase. So, they learn about what’s there and think hard about it in order to articulate it properly. As they document the code well, they gain prestige in the community, which makes it much easier when it comes to producing code and committing it. And it’s a much easier entry point and much less threatening entry point for a lot of people. So, there are interesting ways that different communities have been working. And I think, as I say, some of the open source communities have been very clever about this.

ANDRE: Yeah, and back to the guru, a big part of it is, and this sounds wrong, but it’s so right, is hiring the right guru. So, there’s a guru who knows everything and wants their way, and there’s the guru who’s willing to spread the knowledge and the wisdom and the approaches that they take. And in the end, it’s the second one that for your organization is actually going to carry the organization forward, not just by contributing what they know, but also actually building a community of others in that organization who are learning from their ways in how to become better designers.

JESSICA: Can you hire that guru? Or does the guru have to have such familiarity with the system that they have to grow into it?

ANDRE: That’s a good question.
There are two sides to it. So, on the one hand, you’re right: whoever the guru is or whoever the wise person is, they have to know the material that they’re working with and the domain and everything else. On the other hand, that’s learnable. And somebody who’s really good at this stuff can actually learn new domains fairly quickly. Not super fast, but they can learn it fairly quickly. That to me is a skill that you can learn. The other one is your own behavior and reflecting on how you act as a designer and how you interact with your teammates and how you bring up novices and how you give them command, even though you know the answer. And how you let them struggle for a little while and you point out where they were doing well and where they should have generated one more idea and where they missed the constraint, that’s almost a personality trait. Again, it’s something you can learn. And it’s something that we’ve certainly been trying to capture in many of the insights that we’re collecting: what are the traits of expert software designers, so that individuals can maybe pick one up. I ask all the students in my class, just take one, one of the lessons that we teach you, and bring it to the organization that you’re in. You will hopefully create a small culture change. And if every person does that, you’ll have a chance.

JESSICA: And you’ve collected these insights into a book.

ANDRE: Yes, we have. Painfully so.

JESSICA: And it’s small and cute and yellow.

ANDRE: Yes. And small and affordable. That’s a long story. In the sense that we clearly knew for ourselves that we were collecting things that resonated, on the one hand with the academic community that we were in, but also very much in our conversations with professionals. So, we would go to an organization, we’d talk about our research.
And then they would actually be interested in what we’re learning, because often we end up talking, of course, to the kinds of professionals who are open to learning themselves more and bringing that back into practice. And so, we started looking for ways in which we could take what we’ve learned, but also what other people have learned, and sort of distill that in a coffee-table-ish kind of book. And once that idea was there, fast forward three years, and then yes, we have the little yellow book in our hands.

JESSICA: And it has tiny little insights with pictures.

ANDRE: It does. And in many ways, on purpose, the pictures actually try to make you think. So, the text is there. But then many of the illustrations actually are designed to make you think a little bit more. So, the whole book is still based on that notion of [you as a software design] [inaudible] architect is [inaudible] requirements engineer [inaudible] you reflect on your own practice with the help. The text is the entry, but it’s actually often in the picture that you need to spend more time thinking about it.

CORALINE: And that probably appeals to different kinds of thinkers too, right? Some people will process words better and some people will process imagery better.

MARIAN: The intention of the book is that it’s a seed book. It’s not a book that you read cover to cover. It’s a book that you sample. It’s, as André said, designed for people who want to reflect on what they do. And the hope is that any professional developer who picks it up will recognize some of it and go, “Yep, yep. I do that.” Or, “Oh yeah, I used to do that. I forgot about that.” Or, “Wow. Hadn’t thought about that.
Maybe I should.” It’s interesting that the response to the book has been divided between the people who are reflective practitioners and immediately recognize what it’s doing and are actually very happy and want it as a deck of cards so that they can shuffle the cards and pull out a card at random, and the people who actually want just how-to instructions and are not satisfied by it because they actually have to think. This isn’t just – it isn’t a recipe book. It’s a reflection book. I think the other thing that we tried to convey with it is that it is all founded in empirical evidence. These are not our speculations. These are well-grounded insights distilled from expert practice in research that’s been conducted over decades.

CORALINE: I’m interested in that in particular because I think, and you mentioned this in the discussion that we had prior to you coming on the podcast, that there’s a disconnect between academic research and actual practice. And some people are convinced by footnotes, some people are not convinced by footnotes. Some people are open to the idea that our understanding of how humans work and how humans work in software development is evolving, and other people think that it’s very straightforward. How do we begin to bridge that gap?

MARIAN: Well, in effect the book is one of our attempts to do that. I think we need to have the dialog between the people who study practice empirically and the people who practice. Sometimes that’s difficult because academia and industry work on different timescales, different priority sets, and so on. But it’s my firm belief that I don’t as an academic want to be prescribing anything to industry. I don’t know better than industry how they can do their jobs. But what I can do is provide the deep research that gives them evidence of what goes on and evidence on which to base their decision-making. And in order to do that, we need the dialog.
So, trying to put the research results out in a format that is digestible, that also connects them to other literature if they want to read more deeply, but ultimately that opens that dialog, is what we want most. And to avoid the assumption that we know better, but rather to understand what works and play that back to people.

JESSICA: To end the suspense, the book is called ‘Software Design Decoded’, in case you were wondering.

MARIAN: Sorry about that.

REIN: So why ‘decoded’? Why that word?

MARIAN: Play on words. We went through probably a hundred different titles. We had no trouble generating titles we hated. We needed something that would work on the bookshelf, the shelf in the bookstore, and capture people’s attention. We wanted something that was true to what was in the book. So, we were pushed to all sorts of things, including the subtitle, but what we really wanted to convey was the essence of the book, which is insight grounded in evidence.

ANDRE: Yeah. And I think the ‘decoded’ in its own way is partly a play on the word ‘code’ that’s in there, right? So, design ultimately [inaudible] the code, and code is designed. You’re still designing while you’re coding. But it’s also a play on the nature of the kind of research. It’s not like we and others go into an organization, spend a little bit of time observing, come back and write down what we saw, and the answer is “Here it is.” As one example, one of the lessons in the book is rotating subject pairs, so experts rotate subject pairs. Which really means that they take very short periods of time during which they talk about two or three subjects at the same time, then they switch to a different pair, and then a different pair, and then a different pair. And so, they sort of shore up the design as they go and in all of its dimensions. And so, when we compare the behavior of experts there to the behavior of novices, novices will pick one subject and talk about that at length.
And especially if they’re stuck on it, they will keep talking about it. Whereas the expert will say, “I’m stuck on it. I’ll switch to something else. Maybe I’ll learn something that will shed light on where I’m stuck.” And so, this is not something that when you watch people work, you immediately grasp or see. And so, this was the work of a PhD student who spent time collecting videos, analyzing the videos, and spending a couple of years actually eventually realizing that this pattern was there by looking at the conversations, by coding the conversations, by essentially decoding what was happening there. And we wanted to include in the title a play towards that as well.

REIN: It’s interesting, because when I read that and I thought about decoded, it reminded me of an idea from the operations or SRE world, which is that system failures and the alerts that system failures generate, you know, for instance, this application has too many errors right now, are heavily encoded messages, where what they encode is the actual state of the underlying system that created the alert. And your job as someone who’s responding to that sort of an alert is to decode it to understand what the system is actually trying to tell you.

MARIAN: But that’s one of the, fundamentally, one of the complexities about working with software: effectively, at one level the artifact is the code, but at another level, the artifact is the behavior. And very often, what people are really reasoning about is the behavior of the system, not just how it gets captured in code. And it’s hard, because the behavior shifts. It’s often hard to observe. There’s an obscurity to the designed artifact in software that isn’t necessarily the case with physical artifacts.
So, a lot of it is about, almost indirectly, understanding through some things that are manifested in the world, a design that is not explicitly manifest, a set of behaviors that are not explicit in [inaudible] CORALINE: I’m interested in how that correlates to the size and scale of the systems that we’re reasoning about, too, because one of the ways that I define what a legacy system is, is a system that has become too large for any one person to hold it in their heads anymore. MARIAN: That’s right. André was talking about this dialog between research and industry. And the whole business about how people deal with systems that are too big is another example of where research can make a contribution. So for example, I had done a lot of work over a number of years with a number of different teams before I began to understand the insights about how they were thinking about intractable problems, what strategies experts use for thinking about problems that are too big, notionally too big for one head, and handling those. And how they actually, some of them, manage to get oversized systems into one head. What are the strategies they use for making it tractable when it’s just too big and complex? There are a number of things that they do that help them manage that complexity without discarding their understanding of operational detail. And that whole dialog between the development of an accurate operational model of the code is part of what you’re engaging with, with legacy systems for example. You’re trying to build that operational model. Sometimes, with big systems of systems you’re almost trying to operate in the absence of an adequate operational model by taking certain things on trust about the components that have been assembled and how they operate. And some of the really interesting distinctions in terms of expert performance have to do with the strategies they have for handling the obscurity and the complexity of software systems. 
JESSICA: Ooh, ooh, can you give us an example? MARIAN: The trouble with some of these is they sound, when you articulate them they sound a little too simple. So, we already know that experts abstract. The ability to handle abstractions, to generate abstractions, to reason about abstractions, is a necessary part of computational thinking. The ways experts chunk information is different from the ways novices chunk information. And their chunking is more efficient and can encompass more complex entities. So, some of this has to do with the ways they divide up the world and think about it. Some of it has to do with the systematic focus on things of interest. So, experts are better able to analyze which parts of the system are crucial for their reasoning at any given point, and to focus in on that, and allow the things that are peripheral to that not to play into their attention. Whereas again people who are less expert very often are distracted by things on the periphery. ANDRE: And not just distracted but comforted, in their own way. MARIAN: Yes. ANDRE: It feels like they’re making progress on the design problem, but they’re actually pecking away at everything that’s surrounding where the essential part of the design is rather than actually tackling the hard part. And that’s where the experts will start. They will start with the hard part. MARIAN: And they’ll do things like transform the problem into another representation or into another space, in the same way that people will transform, very often part of mathematical proof is transforming into a different representation, a different space, solving it in that space which is [more tractable] then transforming it back again. 
Even though experts can reason about complex problems at a high level of abstraction, if they come to a question that requires concrete consideration where they really need to examine the behavior specifically, they can drill down through the levels of abstraction and reason about very specific components as they need to, and then climb back up again. They can do that alternation and keep track of where they are. ANDRE: So, experts are much more willing to work with uncertainty and know that it is part and parcel of what they do. Novices will want to squash that uncertainty or just hide it. And so, they’ll either ignore it and thereby make wrong assumptions or they’ll end up trying to focus on it, trying to resolve it, and thus not making much progress with the rest of the design. Whereas experts are willing to deal with this uncertainty and understand when the uncertainty is important and must be resolved, and when it isn’t. And that translates also into experts will go code up a piece of code if they think that it’s going to answer some of the questions that they have. And novices will stick at the whiteboard and they’ll try to sort it out in the abstract. So, there are very small, subtle behaviors that the experts will pull in that will help them more effectively address the problems they face. JAMEY: All of these examples are super fascinating to me, because I think every single example you’ve given about the experts and the novices, it has occurred to me that you could use very similar analogies for things in all parts of life, not just design and writing code and work. Particularly, the example about novices who want to keep talking about the same thing even if they feel stuck on it and experts being willing to, if they’re stuck on something, leaving it alone for a while and coming back to it. 
I think that’s a really awesome representation of thinking in general, especially emotional thought when it’s like if you’re stuck on an emotion and you keep coming back to it, rather than giving yourself time to work through it without thinking about it actively. And I think that’s also a really good way to explain someone who’s a novice and someone who’s an expert in emotional intelligence. It’s really interesting and I really appreciate that you were able to pick out these patterns by watching people when they were at work, but it’s also relevant to all sorts of aspects of people’s lives. What are your thoughts on this? How do you think these patterns that people fall into without realizing are indicative of ways they act maybe outside of the workplace or outside of this kind of problem-solving atmosphere that we’ve been talking about? MARIAN: Yes, fundamentally this is about problem-solving. It is also about problem-solving in the context of complex software systems. But the fundamentals are about problem-solving. And one of the reasons that I came to be doing this research at all was not about computer science per se but about how people reason and how people use representations to help them reason, to augment their reasoning. It’s a much broader question. And I think it will take more interdisciplinary study to understand how that plays off in different contexts, but also to understand what the core is of problem-solving skills that apply across [inaudible]. I have no doubt that the kinds of strategies that people are using in general terms are applicable [inaudible]. CORALINE: I’m kind of interested, and I want to set up a couple of people to fight it out metaphorically as we have this discussion. 
Hofstadter in his book that I referenced at the beginning and promised you I would bring up, says that metaphors and metaphoric mapping is essential for problem-solving because it allows us to tie back into experiences that we’ve had in the past, probably in a different context. But to get hints as to what the problem space looks like and what the solution space might look like. And on the other hand, Dijkstra says that when you’re approaching a problem you should approach it with an open mind and don’t try to use a vocabulary of what has come before, because it’s going to let you down. And I’m kind of curious as to which of those two opinions you think is the best strategy for problem solving. MARIAN: Okay. I’m not going to play best. And I actually don’t think it’s an opposition. I think that both of them apply. And they’re not necessarily inconsistent with each other. So, the whole business of analogy and metaphor of trying to get out of the box [inaudible] are actually devices that experts use to try to “get out of the box of standard thinking”, is the way they would phrase it. They are also very conscious about the implications of using familiar language and carrying familiar assumptions along with it. It’s very interesting that there are all sorts of cognitive biases that people have that have to do with the way that our brains and minds have evolved. And one of the things that’s been interesting has been mapping expert practices onto our understanding of cognitive biases and seeing how experts either consciously or unconsciously tend to develop very clear practices that mitigate against bias and that are about things like not getting trapped into assumptions because you're using familiar language, that are about getting out of the box of standard thinking, through some of the strategies they use like abstraction, like reshaping the problem space, like relaxing constraints, and so on. 
So that what they do is they remind themselves over and over and over again to say “Am I in the right problem space? Have I understood the problem correctly? Is there a very different way that we could think about this?” ANDRE: Right, yeah. So, experts particularly in many ways don’t subscribe to a particular methodology or a particular belief of “This is exactly how I solve each of these problems.” And just to tie onto what Marian said, is that they’re in the room twice, almost. They’re in the room engaging in the design and actively participating but they’re also in the room observing what’s going on, where the overall process is leading, whether they're still on track, whether they're using the right methods to tackle the problem that’s there. That sort of [inaudible] overall I think enables them to step out of Dijkstra or Hofstadter. And each one of them has arguments for why what they’re saying actually makes sense in a particular context. And we can come up with examples for each of them, but it’s realizing in the moment which one is okay to go with. So sometimes, just using metaphor is perfectly fine. We have examples of people designing an educational traffic simulator and one minute into it they’re saying “There’s a cop” and that solved the design problem more or less. There was a lot more design to be done, but more or less just using that word had solved the design problem for them because they knew what they meant and what structure they were going to apply, and what the implications were of it. That was a perfectly valid metaphor. But they also knew to understand “Is it okay to apply this metaphor to this particular case?” and had they discovered that it’s not okay, they would have stepped out and actually engaged in a much more structural approach where they now would have started to explore the problem space in more detail to understand the characteristics that there are before they hop back into the solution space. 
MARIAN: One of the most important things about the use of metaphor in design, and Jack Carroll wrote about this years ago, it’s not just about why my problem is like something else but some of the real insights come from the interrogation of why my problem isn’t like that example. What’s different about it? So, the question is not just how is an apple like an orange but how are they different? And what do those differences mean? CORALINE: One of the insights that I came to; I gave a talk about metaphorical thinking and I dove into category theory a little bit, and one of the insights I came to is that I think historically especially in science, we focused on, as a basic problem-solving tool, breaking problems down into small pieces and trying to solve small problems and composing the solution out of solutions to small problems. And one of the things that category theory brings to the table is looking at how things are similar more than how they’re different and solving big problems from a more top-down perspective. And the realization I had which I think resonates with what you’re saying here is that we need both of those tools in our toolbox. And expertise comes from identifying when one tool is more useful than the other. MARIAN: And that’s right. The notion of the toolbox is incredibly powerful. And one of the things that distinguishes experts is they have a bigger toolbox, they remember what tools they have in their box, and even if they don’t know which tool is the right one to apply, they have the wit to try alternative tools. REIN: So Coraline, earlier you talked about legacy systems as being systems that are too big for any one person to fully understand. And just a minute ago you talked about how we generally like to solve problems by breaking them down into smaller problems. 
And I think that this approach to problem-solving in general is interesting because we’re targeting a specific sizing of complexity where it’s a sort of an anti-Goldilocks zone where the problem is too big to be solved in detail through analysis and it’s too small to be solved in aggregate through statistical methods. So for example, the two-body problem can be solved through analysis. You just do the equations for the gravitational effects of two bodies and you’re done. But you can’t use the same approach to solve for the molecules in the gas and how they behave. At that scale of complexity you instead use statistical methods. And I think what we’ve chosen for ourselves in building computer systems is exactly the space of problems that are in that middle zone, the middle number zone where we can’t use analytical or statistical methods on them. And what we have to bring to bear instead are these systems thinking methods. JESSICA: Yeah. It’s not just the size. It’s the path dependence. It’s the influence the parts have on each other. MARIAN: Yes. REIN: Right. ANDRE: Yeah, and it’s the influence that the details have on the overall. I think many a software system starts with the best intentions with a great architecture, much design thinking that happens upfront, to then much later on when the problem is of such a scale for many details to slowly but surely start derailing that initial carefully thought out architecture. But you don't know all the details beforehand. And so, one of the questions is how do you remove that level of uncertainty? How do you safeguard yourselves against when the project gets underway and you’re building all the software and then you start discovering a lot more, that now starts to reflect back on the design problem. And I think that’s been – the whole backlash against waterfall has always been that kind of observation. And I don’t think that in Agile we’ve completely solved that. Those small problems still appear. 
We’ve just made it tolerable and acceptable for us to talk about it. CORALINE: I started my career with waterfall. And although I appreciate agile methodologies and the advantages that they bring, and I think agile methodologies do solve some important problems, I actually miss some aspects of upfront design. I know in those days we had documentation that you could hand to a developer that would explain how a system worked and forethought being put into how the system would work. And of course, those exceptions, those edge cases, would come back to bite you and it would be very expensive to heal. But I think the pendulum has swung too far in the other direction and now we’re only thinking in two-week increments and we end up painting ourselves into corners, just with a different brush. ANDRE: Yeah. And I think the better organizations have recognized this. And there is some serious design that happens upfront before we go down into agile sprints. My thinking is with you, because I do miss the days of just saying “Look, let’s take a day and really hammer out this problem” or “Let’s take a couple of weeks to hammer out this problem” because the level of insight you generate on day one when you’ve barely played with it is very different than the level of insight that you’re going to have on day 10 when you finally realize that there’s a small bit that’s really important that’s actually changing a significant part of your architecture. If you don’t discover that in time, and you’re off to the races in sprints, that will come back. And it’s not that easy to refactor your architecture. So yeah, I miss the thinking part. MARIAN: And I also, I wouldn’t want to oversimplify that waterfall is god. There are certainly… ANDRE: Oh no. Oh no. MARIAN: And I certainly wouldn’t want to demonize agile either. Because it really depends on the application domain. Though different domains still have quite different practices. 
I know Michael Jackson has argued for years that we shouldn’t be talking about software engineering but we should be specifying software engineering of embedded systems, software engineering for aeronautics, and so on. Because there are particular demands of problem-solving from many of the domains. And there are domains that are much more amenable to this kind of emergent, evolving understanding of the problem and the cost of getting it wrong is not at the same scale. Though if we’re talking about a scientist developing scientific software for use by his or her team, you’re talking about something that can evolve with the scientific understanding. Whereas if we’re talking about software that’s designed to drive an airplane, we’re talking about something very different. You really need something that is extremely well-engineered and extremely well-understood and tested and so on. JESSICA: Exactly. We’re never going to solve tests versus types or any of the other great debates in software engineering until we divide the field into contexts where different things apply. ANDRE: Yeah. Yup. MARIAN: But also know that you can use almost any of the tools tactically. So, using formal methods – it’s not that people don’t use formal methods. They use them where they count, where there’s going to be a payback on them. So, in many of the teams that I observe, they'll use formal methods but only for a crucial piece of code where they really, really, really need to verify it. And they won’t use it for the whole of the software system where there isn’t the payback. And I think the notion that you’re actually trying to use methods that are fit for purpose for elements of the problem, not just one approach throughout for the whole of a complex product, is actually quite interesting. JESSICA: Ooh, I like that word. Instead of gradual typing I’m going to start talking about tactical typing. ANDRE: There you go. Design is hard, right? 
And actually admitting that and realizing that and working with it for individuals to get a level of comfort with that, I think is – if anything comes out of this particular podcast, if two other designers realize that “You know, my job is hard. And there are ways for me to reflect on my practice and hopefully instill some better practice in my teammates,” that to me is important. When I think about software, I’d say the hardest part is design. Because in some ways, the requirements are design, the code is design, the architecture is design, the interaction is design. And they all interlock and they all interplay. And to acknowledge that and figure out collectively that we are not going to be able to throw a silver bullet or a hammer at it and say “Now we have solved design for all of humanity,” is just not going to happen. And so, to give individual designers the leeway, the right, the understanding that they are doing a difficult job, they must be given time to spend on doing that right, is an important lesson for me. MARIAN: A great deal of my own computer science education was lessons in how hard I could wield the hammer, that if I had the right expectation about how difficult something was going to be to do, how hard it is to read code and understand it, then I could do it. But if I thought it was going to be easy, sometimes I failed. I faltered because I lost faith in myself and in the process. CORALINE: That reminds me of a tweet that has been going around lately. And that was something along the lines of “As software developers, if you tell a newcomer that software development is hard, you might discourage them. If you tell them it’s easy, they're going to get discouraged on their own when they struggle with it.” So, we need to be thinking about how we describe the work that we do and be a little bit more nuanced with it so that we’re not setting people up for failure. ANDRE: It’s interesting. It’s fascinating. It’s engaging. It’s challenging. 
It’s a lifelong job. There’s a lot of other words I would choose for [inaudible]. CORALINE: Definitely, yeah. JESSICA: It is the best job. ANDRE: And also, we also have a great responsibility as designers too. When you think about the hospital and the job that the doctor has to do, today’s medical systems have actually taken them away from performing surgery and talking to patients. And how can we actually design systems that rectify that, both software-wise and organizationally-wise? There are numerous challenges like that in all sorts of domains out there that I think for software designers to really tackle and engage with is just an important responsibility we have as a discipline. MARIAN: I also think that relates back to understanding the role of the human in the system, all the way from design through implementation, through application and use. Because until we really respect that these are systems that work with people, we’re going to come up against the kinds of issues that André is talking about. ANDRE: And I also want to say that I have an incredible respect for the people out there doing design. MARIAN: Hear, hear. ANDRE: We’ve actually come such a long way. And to see the way in which they engage with the problems, how they like engaging with the users and how they like teaching each other and how they like sorting things out, seeing that in action is a privilege to me as a researcher. And then to hopefully give back with a book that spreads some of that knowledge and that spreads some of those insights beyond the people we’ve observed, that would just be great. CORALINE: We like to end each podcast by reflecting on the conversation we’ve had and highlighting the things that we want to take with us and think about a little bit more and some calls to action. So, I’d like to start. One of the things that really struck me, and initially I had a negative response to it and then as you qualified it my response changed, was talking about gurus. 
And I think, I have an instinctive knee-jerk reaction against relying on gurus because I think that cults of personality develop around certain people, especially in the open source world and it’s not always warranted. But you made an important distinction that there are two types of gurus: those that inject their expertise and see themselves as rulers over these knowledge domains, and there are those that share their expertise and are actually working to distribute decision-making and they’re contributing but they’re allowing space for other contributions. So, that’s a really interesting way of thinking that I want to sit with for a while. So, thank you for that. MARIAN: Great. ANDRE: Welcome. REIN: I’m going to have to try to pick a small number of reflections because there were so many things. One is early on when we talked about the democratization, Coraline to use your words, of knowledge, of decision-making, is that this is a seismic shift away from top-down command and control management by numbers, the hallmark of which is that decision-making is held in the hands of a few managers and that the workers don’t get to bring their brains to work. They just do what they’re told. And the alternative to this is a more democratic way of working where the workers do get to bring their brains to work and they do get to make decisions. And this has a lot of important positive outcomes. Decision-making happens closer to the point where it’s needed, all sorts of things. Another is when you said that a defining characteristic of experts, and I’m paraphrasing, is that they choose among their methods and actions based on the results that they offer. And this is true about experts but it’s also true about certain organizations and those organizations are called steering organizations. And one of the things that I focus on is how do you get that to be an organizational competence rather than just an individual competence. 
JESSICA: There are a couple of phrases that I really liked. One was the part where if you count tickets and story points and hold people to those numbers then they’ll just squash the bug. But in high-performing teams, we don’t squash it. We study it or we follow it back to its nest. We learn about all its parents. Yeah. That’s way more fun. I like that. The other phrase that really struck me was experts are in the room twice. That we’re both participating and thinking about the discussion on a meta level. And that happens on this podcast. It’s rather challenging but it’s one of the nice things about having a good editor is that we can both be in the room and go meta and think about what we’re talking about at a higher level. That’s why I love that. JAMEY: I was really struck by something that was said almost right at the end about designers. I’m going to preface this by saying that I think a topic that gets brought up a lot in general in these types of discussions and that I do think is really important is that a lot of people seem to think design isn’t hard. And obviously it is. But I think this often gets framed in like “People should appreciate designers because designers’ jobs are really hard.” And I think that’s true, but I really like the way that you worded it. It was like “I hope designers can say you know what? My job is hard.” And I hadn’t thought about that as an internal thing rather than an external thing necessarily in that way. And that’s so important, because it’s important to be appreciated by other people in your organization or in your industry, but if you don’t have an appreciation for yourself and the fact that you’re doing something that is difficult, then you have no basis to be kind to yourself, I guess. I think it’s really easy to be like “I messed this up and I did a bad job and I’m so dumb,” or whatever. 
And being able to say “No, my job is hard and I do a good job on it almost all the time and sometimes I make mistakes because people make mistakes and it’s very difficult,” is a really important basis to have. MARIAN: Error is opportunity is a really good lesson that I’ve taken away from the experts. It’s true. ANDRE: For us to engage in a conversation like this, that’s what we do when we’re out there with industry. And time and again, and again you proved this right today, we learn from the conversation as well. We have several pages worth of notes here of literature to study, behaviors that you’ve hinted at, things that we can look at in more detail, segues into other places that, that is what makes our job as fun as it is – is the interactions with people in the industry and people who are willing to think about what they do. And so, let this be an open invitation, not just to you but to everybody who’s listening. Anybody who wants to talk with us, would like to have a conversation with us about what we do, or about what they do, and tell us lesson number 67, 68, 69, we are more than happy to engage in that conversation. It’s just too much fun not to. MARIAN: And I’d like to second that, because one of the reasons that this has been such an engaging conversation is because you are all providing a beautiful example of being reflective practitioners, of thinking about what you’re doing, of people who read around and understand, try to understand what the consequences are of Satir’s Change Model, software design [inaudible]. That’s the exciting part. And I actually think that that’s more typical of professional developer behavior than many people assume. CORALINE: Well, I want to invite you both to join our Slack community because you’ll definitely have the opportunity to engage with thoughtful people who are asking those questions about themselves and about our industry. So hopefully, we’ll see you there. 
And I want to remind everyone that if you support us on Patreon.com/GreaterThanCode at any level, you can join André and Marian and all the panelists and all the community members in these kinds of conversations. So, think about contributing to us. ANDRE: You’ll see us very soon. JESSICA: Yay. Yeah, Marian, André, thank you so much. I knew this would be fabulous. MARIAN: Oh, you’ve made it so fun. ANDRE: Yes, indeed. MARIAN: This format’s awesome. JESSICA: Yay. JAMEY: Thanks for coming on. It was really good. CORALINE: Yeah. Really enjoyed the conversation. Thank you so much.