JESSICA: Good morning and welcome to Greater Than Code 191. I'm Jessica Kerr and I'm happy to be here today with my friend, Jamey Hampton. JAMEY: Thanks, Jess. That's a lot of episodes that we've done. I'm really excited to be here also, and I'm introducing my friend, Rein Henrichs. REIN: I am not going to be able to keep up this level of excitement this morning, but I will try. I am here with my good friend, Amy Tobey. Hi, Amy. AMY: Hi, everyone. REIN: Amy has worked in web operations for more than 20 years at companies of every size, touching everything from kernel code to user interfaces. When she's not working, she can usually be found around her home in San Jose, caring for her family, practicing piano, or running slowly in the sun. Those all sound like nice things to do. AMY: They are. Keeping on an even keel. Well, less uneven. REIN: Even-ish. [Laughter] REIN: I'm sure you know what's coming. What is your superpower and how did you acquire it? AMY: It's this thing I do when I walk into big messes or jumbles. A lot of us walk into tech companies and see code bases that are sprawling and stuff. But it's being able to look at the mess and see the potential for what it could be. It's something that I developed starting as a kid. My family was largely manual workers, and my dad worked in scrapyards for most of my childhood. And so, I spent a lot of time in scrapyards looking for possibility, looking at piles of junk and seeing, "Oh, I could build a go-kart out of that," or, "I could build a bike thing that has antlers and stuff." This is when I was under 10. And as I got older, I played with bigger things like cars and stuff and did the same thing: walking through piles of scrap, looking for potential. JESSICA: Seeing potential in a big mess. That's a great superpower. AMY: It's a little bit much in this modern time, but... [Laughter] JESSICA: Yeah. 
But there's got to be potential that just absolutely couldn't have happened if we didn't go backwards this far first. AMY: I agree. That's the whole junk analogy. The stuff that I used to build things out of when I was young and didn't have cool toys like computers was stuff that people discarded or used up and broke and threw away. And from that could be built new stuff. That was really cool. JESSICA: Amy, I'm curious. You mentioned coming into a new company. How many different companies have you worked at? AMY: Oh, gosh. Way too many. The last few years have been a journey, to put it in my therapist's terms. I usually use words like a mess or a disaster. I've probably worked at 15 different companies in my career. Twelve, 15, something like that. I haven't counted because it's depressing. It's cool in that the perspective I've gotten from that has come in really handy over the years, being able to go out on Twitter and say sweeping things about operations and SREs, and draw from that experience -- not just the experience, but the pain and the good parts, too. JESSICA: Right. I kind of feel like there's a couple of prototypes that you need in people at a company. You need people who have that kind of breadth of experience to come in, and then they don't stay forever. And you also need people who have depth in the existing system. AMY: Absolutely. It's a shame that so much of -- especially Silicon Valley. I live here in San Jose -- is, even today, still very focused on the rock star. And it's not even a generalist-or-specialist kind of spectrum. It's just looking for alignment with whatever they're trying to build so that they can exploit it and get the code out as quickly as possible. That seems to be the pattern. But I wish there was more focus on both what I do, which is the generalist, being able to look sideways across the system, and the specialist, who looks down deep into the code and the systems and makes them better. 
Which are you all? Are you t-shaped people or do you prefer to look broadly or do you prefer to look deep into one area? JAMEY: I'm actually kind of in the middle of a transition from one to the other. I've been a generalist for my career so far. I started in consulting and so I was doing even a lot of different frameworks and stuff at the same time when I was very junior. And now the past, maybe a year or so, I've kind of tried to make a decision about like honing in and specializing in something. And it's been really gratifying to almost take a step back and really spend the time to get to know something really holistically. And I've been really enjoying the learning aspect of that. It was kind of nerve-racking because I stepped into a job where I was suddenly the API specialist at my old company. I was interested in that. I was learning about it and I was excited about learning about it, but I was kind of like -- I mean, I'm not an expert yet. I'm learning. I'm going to be an expert. And so, that was kind of like a fun transition. JESSICA: Yeah, that is a thing. When you pick something, you're not the expert yet. But as soon as you declare, "I am learning about this," and invite people to ask you about it, yeah, you'll become the expert. Just from them bringing you their questions and then you going and doing the research to answer the questions, you'll get that expertise. AMY: That's why I like teaching so much is because there's no faster way to learn. When somebody says, "Hey, can you teach me that?" And you're, "Oh, no. I actually have to learn this." [Laughter] AMY: But you learn it in a way that you don't learn if you're just trying to apply it to some problem out in the world. JESSICA: Right. Because you have to be able to explain it. So, you have to go deeper and get a stronger mental model that goes beyond making it work this once. 
AMY: And tie it to your own experiences and knowledge so that you can draw from other parts of your expertise, especially in the early stages of learning something and teaching something new. JESSICA: Yes, so you can craft a story around it both for yourself and then others. REIN: I was thinking about whether I'm a generalist or a specialist. And I think that, probably largely because of ADHD, I've tried to be both. So, I've become something of a specialist in various things that have managed to hold my attention for more than one month. And it's sort of an eclectic group of things that seems to have only a tenuous connection, like programming language theory and distributed systems and resilience engineering. JESSICA: Those are all abstract things. REIN: Yeah. JESSICA: You can learn a lot about those things and also be pretty broad technologically. REIN: Yeah. I think the connection -- and I think I mentioned this when I did my episode -- is that they helped me to see the connections between things. That's also why I like category theory, for example, because it gives me a vocabulary for talking about how things are related. That, I think, is really powerful. AMY: I heard two things there. You made me think of the t-shaped person analogy again, which I guess [inaudible] popularized. It might've been around before. But it's the idea that you're broad in some areas, but deep in a couple. I don't know how common that terminology is, but I really like the concept, especially for ADHD or neurodivergent folks like ourselves who tend not to stick to one topic and want to move around a lot. JESSICA: It's important to have one thing that you're deep on so that you can draw analogies to everything else that you learn. That one thing becomes kind of a structure. AMY: Right. REIN: Yeah. JESSICA: Like, for me, everything that I learn about becomes a software analogy. 
But that's because software is a complex system and it's my favorite complex system. So, I can draw analogies to any other complex system, like relationships, and social structures, and biology, and on and on. AMY: Now, you do that. And I've been fascinated by this for a while. Is there a type of mind that's more holistically oriented? I've noticed this in people. There are some folks that seem to be holistically oriented. They tend to see all the connections among the things more than the actual nodes. And then there are people who are very focused on the nodes, or even particular subgraphs of knowledge or whatever, but not trying to see the big picture. And I wonder if that's more of a diversity of human experience thing or our career paths. Do you have a feeling about that? JESSICA: Is there a difference? I mean, between ourselves and our circumstances? But yeah, you're right. Different people get satisfaction from knowing webpacker inside and out and writing books about it versus, I mean, I'm kind of fascinated by dependency management in general. AMY: Right. Yeah. There's a lot to dig into there. JESSICA: And we really need both of those people. AMY: You do. But one seems to thrive a little bit more in industry, I think. I don't know. JESSICA: Well, one of those people tends to be on Twitter more, and on generalist podcasts. AMY: Oh, yeah. [Laughter] AMY: I don't know about those people, really. [Laughter] JESSICA: Because I think a lot of the people who do work at one company for ten years and really get to know that system, other people don't get to know about them. And I think there are a lot of people, probably in our audience, who are that person who's kind of a linchpin, or at least a very important, productive part of a team that they've been on for a long time, and they really understand this one thing, and they're always feeling bad about not being in on the latest trends. But no, people: you're the ones getting the work done. 
AMY: Getting it done, yeah. JAMEY: One thing that I've said before is, whenever I bring up something that I don't like doing and then someone else is like, "Oh, really? I love doing that. It's like my favorite thing ever." I'm like, "Great." [Laughter] JAMEY: [Crosstalk] who likes to do this, and then I like to do these other things that people might not like to do. And so, that's what I was thinking about when you were talking about having different kinds of people on the team. AMY: For sure. That's the whole idea behind a team. If you're going to hire the same five people, then -- I don't really get the point of that, actually. I've never really understood it. JAMEY: Have you ever played D&D before? AMY: [Laughs] JAMEY: You have to have party coverage. It's important. AMY: Right. Nobody wants to play the healer, right? JAMEY: Yeah. JESSICA: Then you wind up with a trickster cleric. Yeah, and we create job descriptions often by saying, "Who are the people successful on the team? We want more people like them." No, you don't. You already have them. AMY: The best quote I've ever heard, and have stuck with since, is that the job description should describe the capabilities that the team needs, not what the individual needs. There's nuance there, because there's the whole problem where you put all the things that, say, an SRE needs for my team on a job description, and most men will look at it and go, "Ah, that's all me." And there are numbers out on this, right? JESSICA: Yeah, they're like, "I have 10% of that. Good enough." AMY: Right. And most women will look at it and go, "Well, I only have about two thirds of these," and then not apply. And there can be a huge discrepancy in skill there. But the bias comes through first. JESSICA: Yeah. I almost want to ask the existing people on the team, what do you not like to do? AMY: That's a really good one-on-one question. JESSICA: Yeah. 
How can I get someone else to do what you don't want to do and just generally make the team better? AMY: And you can answer, "Jira." JESSICA: [Laughs] I don't know. AMY: I'm tired of that one. [Laughter] JAMEY: Yeah. JESSICA: That's what I like the project lead/manager to be. Just that. You take care of the API to the rest of the organization, which is usually Jira. JAMEY: I was in an interview recently and I casually mentioned that I did and liked doing hiring at my old company, and the person who was interviewing me was like, "Oh, I can't wait to tell everyone." [Laughter] JAMEY: Like that's a good sign. AMY: Great. I actually like interviewing. I hated it earlier in my career. But as I've gained experience, I've grown to love it more and get into it, mostly because there were times early on with the bad interviews when I'd think, why am I here? And I felt my time was so important to be elsewhere. And so, I was frustrated, being in these interviews that were not exciting or anything. And so, I would complain about it. And then I had a really good manager who tried to patiently explain it to me. But it still took a couple more years for it to sink in really what it was that I was trying to do there, which is to bring in peers that I'm excited to work with. And so, once I finally made that mindshift, it made a big difference in how I approach interviews and how much I enjoy them. And it drives other behavior too. When I complained about all the bad interviews, they said, "Well, great. Here's your HR business partner who's going through the resumes. You can sit with her and help her sort these out a little bit more." And I was like, "You got me." And it turned out to be a great exercise because I sat down with her and we went through hundreds of resumes and thinned the herd, and talked about it as we were going through, while I was making a pile for myself of the candidates I was looking for. 
I was also talking my HR partner through what I was looking at and what I was seeing. And so we got to go back and forth and discuss what we were each looking for and get close to alignment, which helped everybody else downstream in the department, too. It's really cool. JESSICA: Yeah. To transfer know-how, sit down and do it together. AMY: I often tell people that now. When they say, "I'm having all these bad interviews," I ask, "When was the last time you sat down with a recruiter and went through resumes with them? If you haven't, then that's probably the next thing you need to do." JESSICA: Amy, when you say you walk in and you find a beautiful mess, is that on the technical side of the system or does it include the social part? AMY: I have to say I'm probably more gentle on the technical system, which has none of its own autonomy. When I walk in and see a mess in the people systems, I'm a little less into seeing the hidden beauty. [Laughter] AMY: It's a little harder to find sometimes. But when I walk in and see a technical system, I've started to look at it more in terms of why, trying to put myself into the shoes of the person who built it to understand why they made the decisions they did -- which is why I'm usually screaming at the walls. I sit in this room with the door closed and complain about why there are no comments in the script. I have no idea what they were trying to do here, because that's really what I'm after. I don't even care if the code sucks. I just need to know what it's supposed to do. That's what I'm usually looking for in systems now, and it makes it a little easier when you find a directory full of shell scripts that have terrible names and almost no comments. And you've got to figure out what they're doing, because they're load-bearing. JESSICA: And they're all in one big commit that says 'stuck'. AMY: Yeah, usually. [Laughs] JESSICA: Load-bearing shell scripts. Ooh, right. 
Because they're part of the structure that's holding the system together? AMY: Yeah. You look at most of the deploy systems out there, and I bet it's in the high 90s of percent, or up in the point nines, of deploy systems that would not exist without shell. They aren't [crosstalk] all written in higher-level languages. They do all kinds of stuff up in Go and Python and stuff. But when it gets down to the pointy end, down where the actual real stuff happens, it's usually a shell script. REIN: My favorite are the load-bearing bugs. AMY: Yes. JESSICA: Oh, yeah. JAMEY: That's a really interesting use of the word 'favorite'. JESSICA: [Laughs] AMY: They're fascinating. REIN: They're always so fascinating. You'll learn a lot both about the system and about the people that built the system. AMY: I started learning about that in the aughts, when I still had enough surplus energy to run Linux on the desktop full time and would want to run games in Wine. And so, I was, "Ooh, I know how to code. I'll get involved and figure out how to implement these APIs," not really understanding the complexity of the problem I was trying to launch myself into. And a lot of what the Wine folks do in their Windows layer for Linux is finding the bugs in the Windows APIs and implementing them exactly the same, because they end up being linchpins in the entire architecture of some of these games and things. JESSICA: Oh, wow. Because our systems, they grow around each other, and everything new is broken because the rest of the system hasn't grown around it yet. But then, yeah, they lean on each other and hold each other still. The biggest impediment to change is your users. REIN: If you've ever seen those trees that grew up through a crack in the rock, or got split by lightning and then intertwined, like the tree intertwined around itself -- that's how our systems are built. AMY: Yeah. 
REIN: That's a lot of local growth with a local perspective on the system that leads to these really fascinatingly messy systems. JESSICA: Well, and it's how we are built. "Why are you so good at that?" "Well, because we had a lot of problems with it." AMY: Yeah. My favorite in biology is the vagus nerve. JESSICA: Is that the one that makes hiccups? AMY: It is, I think, related because it's part of your regulation of your sympathetic nervous system. But what's cool about it in terms of legacy wrapping around stuff is -- I might have the wrong nerve, but there's one that runs from your brain down into your chest cavity and then back up for some reason. And it's common across mammals. We should probably look this up because I'm going to get the terminology wrong. But it's really interesting. Like even in the giraffe, this nerve runs all the way down the giraffe's neck and then back up. And it's basically just like a biological loop that just never got revised. REIN: Yeah, it's the recurrent laryngeal nerve. You can find a video on YouTube of someone dissecting a giraffe and showing you this. It's fascinating because obviously the giraffe evolved from something with a shorter neck because why would you ever route traffic that way? [Laughter] AMY: I mean, have you driven in LA? [Chuckles] REIN: Yeah, but LA evolved from a giraffe with a shorter neck. AMY: True. [Laughs] REIN: The reason that I really like load-bearing bugs and weird tangled technical systems is because if you just show a modicum of respect for the people who built those systems and the journey they went through to get the systems to where they are, you can treat those systems as almost entirely an intellectual exercise without hurting anyone. It's really easy to just have fun thinking about these systems. 
AMY: I'm in a hybrid state right now. If I'm by myself in my home with my door shut -- it's beautiful in this time of working from home -- I can mutter to myself as I'm going through a new code base that's unfamiliar to me, discovering things that set off my spidey senses, and go, "Oh, what the heck is that? Why is that there?" If I did that in front of the author, it would be incredibly cruel. And some of us do this with open source. We go through it and we go, "Oh, this is crap." And then we put it on Twitter. And I think what Rein was kind of hinting at a little bit is forgetting that there is a human who was doing the best with what they had, the knowledge and tools that they had at the time. JESSICA: And in open source, not getting paid for it. AMY: Right. I've certainly made an effort to try to be less 'this is crap' and more, "Wow. Those are some interesting choices." JESSICA: [Laughs] REIN: I'm a white dude, so I can make anything an intellectual exercise if I want to. AMY: [Laughs] REIN: The question is to what extent it's hurting other people. And poking around in messy code is one of those places where you can do it in a way that doesn't need to hurt people. JESSICA: Yeah, we can curse at our computers in our home offices as loud as we want. Well, it does disturb the rest of my family, but it doesn't disturb the authors. AMY: That's the goal. Although I have to be a little careful, because I'll notice if I have a day where I'm sitting in here where I work and I get a little shouty. Like, I had a couple of incidents a little while back, and I was still cleaning up the aftermath for a couple of days. And I was getting a little frustrated and letting it out just by myself in my room. And then I noticed a little later in the evening, I was sitting out and I could hear my son on Skype with his friends, and he started to get a little bit shoutier. I'm like, "Oh, jeez. No." He picked it up. Kids will do that, too. JESSICA: Yeah, my kids do that. 
But I curse so much that cursing isn't cool because it's something I can't do. AMY: [Crosstalk] JAMEY: That's like a life hack. Oh my goodness. JESSICA: [Laughs] REIN: The other thing that's nice about these systems is Conway's law, that you build systems that represent the relationships of the people that built the system. There's the 'don't ship your org chart' thing. The systems that we build are microcosms of the relationships of the people that built them. And by analyzing those systems, you can start to understand not just the present-day relationships, but historical ways of being in your company. AMY: Well, we can see it better from the code than, say, somebody could from the boundaries of the system, like the APIs and so on. REIN: Yeah. And digging into old files, you can see changes in, for example, the style that was prevalent on the teams at the time, and styles that differ across teams, things like that. JESSICA: And you can see the dysfunctions, when Conway's law does not apply. Because, for instance, there are Linux developers who are re-implementing Windows bugs. That's because the game developers did not have connections to the people behind the Windows APIs, and the Linux developers do not have connections with the game developers. And so, instead of negotiating a useful, flexible API that you can actually evolve and change, you can't. So, you do the weird tree-growing-around-the-rock thing of growing in bizarre ways, because the relationships that you need to parallel the technical system don't exist. REIN: The fascinating thing about human artifacts -- and I'm not the first person to have this thought, I just can't remember who to credit it to at the moment -- is that they're all indelibly stamped with the experience of the people that made them. Even a door. Every designed artifact has that property, and even some [crosstalk]. AMY: I mean, I'm looking in this room. 
I'm looking at a door -- JESSICA: Which is probably not made by someone eight feet tall. AMY: But what I'm picking up, because my eye is kind of [inaudible], is that it looks like it was probably made on a mill by a robot. So the design of the door maybe carries some of that, but it's missing that craftsmanship part, I think, that I at least heard a little bit of in what you were saying, Rein. REIN: Yeah. Some of this comes from The Design of Everyday Things. But creating an artifact, designing an artifact, is making a statement about not just how you perceive the world, but about how you believe other people perceive the world. When you make something that's designed for other people to use, you have to have a theory of mind. JESSICA: You have to believe you know what they need. AMY: I mean, that's probably not how modern doors get made. [Laughs] I don't mean to throw it out. It's just to say that most things are probably designed around 'what can we sell a whole hell of a lot of?' REIN: Yeah. JESSICA: Which is determined by how modern houses are made. Why are the doors that shape? Because doorways are that shape. REIN: Simondon, who I can name-drop, has a theory of technology that talks about the distance between fabrication and use, and says that this is a sort of hallmark difference between craft or artisanal modes of production and industrial modes of production. If a craftsperson makes a door, they are much more likely to actually know the person who is buying the door. I have an uncle that [crosstalk], and they have relationships with people. He has relationships with his clients. He makes things specifically suited to their needs. And he is also involved in the lifecycle of that thing. If it breaks, he goes and repairs it. That's an artisanal mode of production. But in industrial modes of production, the door was created in a factory from materials assembled at another factory, and shipped on a truck to some store, and then bought. 
And there's this huge gap between the fabrication of the product and the people who use it. And I think about that in terms of software delivery, too. AMY: Because we're a little closer than a door manufacturer might be, or a door designer. REIN: If you analyze the agile manifesto from this perspective, they are trying to reduce the gap between fabrication and use. Listen to your customers. Get feedback from your customers. Incorporate that into what you build. Ship working software to customers as often as possible. Those are all attempts to minimize that distance. AMY: Right. Yeah. I really like that. It's so common today in what we call agile, at least as done in the world, for there to be a product team that insulates the organization from the customers. Honestly, I'm still not really sold on it myself. REIN: Well, the thing is that the industrial mode was necessary to achieve the goals of the Industrial Revolution. You can't make a million doors with the artisanal approach. AMY: We see that in social media moderation, I think. We're making billions of doors, but we are not carrying the responsibility. JESSICA: Oh, and maybe the part where we can just buy a door at the hardware store because we don't care that much about doors. But we need a door. And the less time we spend on the door, the more time we can spend on the kitchen, because that is really important. So, by having frameworks and stuff that are generic, we can spend more attention on what really is important to our particular customers. AMY: Right. Freeing people up to spend less time on doors and more time on putting really good bathtubs in every house. JESSICA: Yeah. And then, once we get really good at really good bathtubs, maybe we'll start caring about the doors a little bit. Amy, how does this distance between design and use affect reliability in software? 
AMY: You know, I see it a lot. The disconnect is a big part of why a lot of orgs don't have a good reliability story. It comes down to -- I don't know. I'm still trying to figure it out, because the solution seems to be that the folks who care about reliability need to spend more time with the folks who define the product. And those are ostensibly the people who are spending time with the customers. But what I find is that the way the product world has moved over the last decade, at least from my observations, is that they very much are not representing the customers' work as it's actually done. It's some imagined version of it. They go interview a customer. The customer says, "Oh, I'm trying to do this, and this is my goal, and this is what I'm trying to do," and they write it down and make notes and go back and design something and hand it to software engineering. And I think if you sat through that first conversation and looked at the output, it would be hard to see the connection between the two in a lot of product pipelines. JESSICA: Because they've got a lot of work-as-imagined happening. AMY: I think so. Because it's always going to be a balancing act. You can't just implement everything the customer asks for, or whatever your customer needs. You've got to find that common ground across customers that allows you to build something that's valuable to more than just one. You get the economy of scale and so on. But I think it gets too abstract. Things like, "Hey, our customers really care about reliability. When our product doesn't work, they get really angry. They go yell at their boss, who is the decision maker." This is usually the line of reasoning I use with decision makers -- this is the value of investing in reliability -- but it's too abstract. It's too late. And so, it doesn't get designed in. 
But I think if that relationship was tighter, it would create more of that empathy loop or something, where the value of what customers experience from reliability would be more present in our product designs. JESSICA: Yeah. Yesterday, for example, it was Dropbox. It pops up its app and it's got five things it needs to tell me about. Look at this new feature. Look at that new feature. I'm like, "Get out of my way. I just want you to work. I don't care. I just want you to do the few things that I count on." And the new features are not adding to my experience. AMY: You're a fancy FTP server. Get back in the box. [Laughter] JESSICA: Right. So, as a product owner or something, I want to get excited about new features. But as a customer, I just want the old features to work smoother and smoother. And if I want more of anything, it's integrations. AMY: That touches back to what we were saying earlier, more at a product scale: investing really deeply in being the best file distribution service there is, which Dropbox is pretty far up there. And then that desire in the industry is [inaudible] to say, "Oh, but you've got to land all these features to keep bringing in customers and growing," and all these other things. When most of us just want our files to be where we expect them to be. JAMEY: This whole conversation is making me think of every time Twitter adds a new feature that nobody wanted and everyone is like, "Cool. Have you banned all the Nazis yet?" REIN: There's a subtle difference here, which is that the reason they haven't done that is not because they're confused about what their users want. We are the ones who are confused about who their users are. AMY: Right. So how does that play into [inaudible] and the policy that they've started demonstrating by blocking a few more people and stuff? REIN: They don't want to lose $50 billion like Facebook just did. AMY: Right. [Laughs] JESSICA: Right. 
Because a bunch of companies just pulled advertising from Facebook for the month of July. A bunch of really big companies, for context for the listeners. AMY: It was weird to have good feelings about Verizon. I'm still working on that. JAMEY: That is weird. REIN: Processing? AMY: Yeah. REIN: Can I go back to the whole convincing executives to care about a thing? AMY: Yes, please. REIN: So, there are two things there that I notice that are sort of stereotypical of how organizations function. The hardest problems for every organization seem to be -- there seem to be two of them. One is now versus later. And two is, where do we spend our attention to maximize value? Because organizations are decision-making organisms. And so, which decisions you make is the first thing to figure out. But one of the things that's happened in the structure of the corporation is that the part of the organization that makes these now-versus-later tradeoffs, that translates strategy into action, that translates the there-and-then into the here-and-now, is subordinate to the part of the organization that is future-looking. So, the C-suite is almost entirely future-looking, oriented around strategy. The parts of the organization that actually translate that into performance, the day-to-day work of the organization, are like VPs and sometimes directors. So, corporations have made the translation of 'what is to be done' into 'what do we do' a subordinate role. And I actually think that's the most important role in any organization. AMY: The what-do-we-do? REIN: The how-do-we-translate-our-strategy-into-action role. AMY: Okay. What I was hearing in there is you said the most important things. And for both of those, it feels to me like, especially if you get past a group of more than a few people, it boils down to a communication problem. Because these guys, or the folks at the C level and so on, are communicating about [crosstalk]. REIN: They're guys. AMY: Okay. 
[Laughter] AMY: Basically, yes. Trying to change the world here a little bit. They communicate about strategy. But very often, that breaks down even at their level. So that as it comes down to that subordinate role, as you mentioned it, it's already watered down. So even if the system above them had been working, it's not even working, that model. REIN: Yeah. Let me give you an example of an alternative. Stafford Beer's Viable System Model is a model that's based on biology and that is both recursive, meaning viable systems are constructed of viable systems, and layered. So, the model has five systems. I'll just skim through this really quickly. Systems one through three are focused on the here and now. System one is the people doing the work, the central organs of the company or the organization. System two is the communication channels between system one and system three, and between system one and system one. System three is the "make sure all of the individual local workers cohere into one larger goal" system. System four is actually the forward-looking and strategic organ. And system five is the one that translates between system four and the now-focused systems. So actually, the top level of the sort of hierarchical system is the system that does translation. And his model is based on biology. AMY: Yeah. It just kind of reminded me of the legends of Google at least 10 years ago. REIN: System four is like the prefrontal cortex, the highest level abstraction machinery in the brain. And then system five is the one that translates that into what your body does. AMY: Okay. REIN: When Stafford Beer went to Chile in the early 1970s to head up this thing called Project Cybersyn that he was asked to design, he put himself at system five. He wasn't interested in being in the strategic organ of the system. He wanted to be where the plans for the future meet the day to day because he felt like that was where he could make the biggest difference. 
AMY: I feel like a lot of software engineers would like to be there. REIN: Yeah. My sense is that the more experienced or the more senior, I guess, an engineer becomes, the more, the way it's often put, they're interested in business needs and so on, and having more leverage. But a lot of that comes from understanding why the work I'm doing today is important to my team and then to my division and then to the company as a whole. JESSICA: System five is designing change? REIN: System five is translating the strategy that comes from system four into the short-term goals that are achieved by system three. So, it's translating the five-year plan into the six-month plan or something like that. JESSICA: Which means understanding not just where you're going, but also where you are. AMY: Interesting. REIN: I'm trying to -- do I want to just name drop Heidegger here? I guess I do. [Laughter] REIN: Heidegger, who was an awful Nazi, points out that the present doesn't exist. The present is actually composed of two parts. One of those parts is anticipation of the future, and the other part is the result of the past. So, our mood in the present is the result of the past. JESSICA: And the anticipation of the future. REIN: But our actions are about anticipation of the future. AMY: So, that bad burrito I ate yesterday is why I'm in a bad mood today. REIN: Yeah. So, it's past dependent. If you had had a different yesterday, you would be in a different present today. AMY: Right. JESSICA: Although your anticipation of the future has a big effect on that, too. And your past anticipation of the future even more because if I ate a burrito from Taco Bell and it tasted exactly as bad as I expected it to, it's fine. If I thought I was going to get a good burrito, then I get really cranky. [Laughter] JESSICA: But either way, tomorrow's going to be bad. [Laughter] REIN: We'd like to take a break in the show to let you know that today's show is sponsored by strongDM. 
Managing your remote team as they work from home? Managing a gazillion SSH keys, database passwords, and Kubernetes certs? Meet strongDM. Manage and audit access to servers, databases, and Kubernetes clusters, no matter where your employees are. With strongDM, easily extend your identity provider to manage infrastructure access. Automate onboarding, offboarding, and moving people within roles. Grant temporary access that automatically expires to on-call teams. Admins get full auditability into anything anyone does: when they connect, what queries they run, what commands are typed. It's full visibility into everything. For SSH, RDP, and Kubernetes, that means video replays. For databases, it's a single unified query log across all database management systems. strongDM is used by companies like Hearst, Peloton, Betterment, Greenhouse, and SoFi to manage access. It's more control and less hassle. strongDM. Manage and audit remote access to infrastructure. Start your free 14-day trial today at strongdm.com/SDT. JESSICA: Amy, you mentioned something about a story of Google in the heyday. AMY: Oh, I was trying to connect what Rein was talking about. There was the legend that Google didn't really have -- the product managers were not in charge of the software engineers, but were more subordinate. And it was that subordinate word that actually got me onto this. And they didn't really describe it as subordinate. But the engineers were so privileged within Google's system that product people could suggest what they wanted them to do, but they didn't have the kind of authority that they have in most organizations. And it kind of made me think of that pattern. I don't know if it's still that way, but that's the way it was described to me, and that was about a decade ago. JESSICA: Because the software engineers have an idea of where things are and how the system currently works. 
AMY: And how best to fit it. "Hey, we need this feature because users are asking for X," but that also needs to be matched with [inaudible], so what can the system do today and what is the right change we can make to support that behavior? JESSICA: Yeah. AMY: Netflix does it that way too to a large extent with context over control. People can come and ask a software team in Netflix and say, "You need to do X," and they can say, "Well, we really don't, and no, because the demands of our system and our mission for whatever we own on this team are X, Y, Z, and that's not part of it. And so, you're welcome to go do it yourself. But we're not going to take it." And that's kind of how it works in a lot of ways. There is that direct communication without authority. That reminds me of my other superpower, which is sniffing out authority or misuse of authority, which comes from my childhood dealing with misuse of authority. But now, when I see it in the world, it doesn't have to be like somebody smacking down somebody who's lower than them. It could be even small things sometimes, where somebody approaches -- let's say a recent one was somebody who is a leader came into a meeting and had already decided. It was clear that they'd already decided what they wanted the outcome to be. And they basically used their authority, without really thinking about it, to talk nonstop for 20 minutes and push through to where they wanted to land instead of creating a discussion where we would organically land at the best place as a team. And it wasn't so much like nefarious or anything like that. But it was because of the authority that he carried, that he was able to get to do that and then that people would be drawn into it without a lot of critical thought. Whereas because I've got this weird sense, I'm just like, "Ugh, gross!" [Laughter] JESSICA: Instead of being drawn into it? AMY: Right. JESSICA: That's really useful. 
As a leader, I would really appreciate having your superpower on my team because authority is a curse in the sense that it keeps you away from information, because people don't bring you information that you don't want. And it's hard to convince people that you really do want that information. It's really hard. REIN: Actually, a whole bunch of effort has been put into trying to make that happen. So, if you look at high reliability organization research, if you look at just culture, a lot of this stuff, one of the main focuses is how do you actually get people to tell their bosses stuff? There's a huge amount of emphasis placed on whistleblowers, for example, and their importance because of how toxic most cultures are to that sort of thing. There needs to be a sort of commensurate push to try to make it possible. AMY: Whistleblowers should be the default, if we really want safe systems. We want people to say, "Hey, this isn't right," as soon as possible. Instead of what we get in -- well, let's look at what's going on in America. Letting it go for hundreds of years until finally things boil over. JESSICA: Yeah. You want your whistleblowers to sound like a bunch of birds in the backyard, not like a train. AMY: That would be nice. Yeah. REIN: One of the challenges with the viable system model is that it's based on this assumption that the people at different levels of the system all want the same thing. And that is often not true. In the VSM, there's this idea of what he calls algedonic signals, which is just a fancy way to say pain and pleasure signals. Like in your body, you have nerves that transmit pain signals up to your brain. And so, if you touch your hand on a burning stove, the very first thing that happens is that the local reflex pulls your hand away. But then at the same time, it transmits all the way up to your brain. 
And so, one of the questions in an organization is how do you get signals that matter, these salient signals, to the high level apparatus that needs to look at them. And the problem is that this is assuming that the high level apparatus wants to know about those signals or has the same idea about what is to be done about them as the people who are experiencing them. AMY: And isn't dissociated. [Chuckles] JESSICA: Yeah, isn't like, "Eh, I didn't like that hand anyway. I'll just get a new one." REIN: Yeah. It's a sort of utopian thing. I have some issues with it. But in terms of aspiration, it's a pretty good aspirational system. AMY: Have you heard of people trying to build organizations around it? REIN: Not recently. I mean, if you put viable system model into Google Scholar, you can find a bunch of papers about this. Like I said, Project Cybersyn was explicitly his attempt to build this thing. And it didn't go great, although a lot of that was not because of problems with the system per se. Although arguably, its inability to respond to the environment was a problem with the system per se. But yes, cybernetics as a whole is a thing that kind of fizzled out in the 80s. AMY: You're kind of singlehandedly trying to bring it back, though, I've noticed. JESSICA: [Laughs] REIN: There are dozens of us. Dozens. AMY: Dozens. This is going to be a revolution. REIN: The thing about cybernetics is that it's sort of wedded to the information processing systems model of cognition, which is that brains are like little computers and so on. You know, the Shannon model of communication. But I think that there is a way to get some of the good stuff out of it and to synthesize it with sort of more modern research, with joint cognitive systems and more modern ways of thinking about cognition. The models that I've been able to pull out of it have been very useful to me. AMY: I've got to try some of that someday. It would be fun, just to see something different. 
I think that's more what I'm -- that sounds cool. But mostly, I just want to see some models that are a little different from the same stuff we've been trying over and over and over and over and over...oh, please stop. [Laughs] REIN: The interesting thing about his model is that for him, this is not the way things ought to be framed. This is the way they must be to be viable. So his idea is that you should be able to map any viable system onto this model or vice versa. And so, it's interesting for me to look at real existing systems and to compare how they seem to function. And when I do that, yeah, I sometimes find things that I think are incongruent with that model, and that helps me to think about areas where I think something could profitably be changed. JESSICA: In a company, you could have R&D separate from the C-suite. REIN: One example that shows up very frequently is that directors are sort of the bridge between -- the VPs and up are in the sort of strategic mode in an organization. And below the director is mostly day to day operations. The director is often sort of the bridge between those two worlds, but they very often don't see themselves as having that role. AMY: Oh, interesting. REIN: Because they learned how to manage in an industry that doesn't teach people how to manage people. And so, they often think of their job as being managing people rather than managing systems. AMY: Yeah, I've actually heard that before from folks transitioning through those layers of organizations. They talk about, when you're a manager, you're managing a team. When you're a director, you're managing managers of teams. And then the VP, you're managing teams of teams. REIN: The directors are the lowest level in the organization that sit in on those leadership meetings very often. AMY: Right. REIN: But they're generally not doing a great job of even disseminating that information down. 
They expect it to come through other channels, like the product management structure of the organization. Because they don't think it's their job. AMY: That's another thing that Netflix gets really right, I think, is its directors especially and managers do view their jobs entirely as being more about context and alignment than about control. Because they take it dead serious. And so, these folks, that's what they do is they go around talking to each other, which to a lot of, especially young engineers, looks like, "What are they doing all day? They're just sitting in rooms, yapping at each other." What they're doing is creating alignment. And then they're supposed to bring that back to their teams, and then discuss it again. And that's how they create the system of constant realignment that keeps up. JESSICA: And that's why the software teams are able to say, "No, we're not implementing that because we know what we're trying to do, and that's not it." AMY: Yes. REIN: One of the things that I learned from this model that really helps me to attack this problem is we generally think of communication in hierarchical organizations as either going sideways or vertically. So, either directors talk to other directors or they talk to their reports. But actually, diagonal connections between people, and especially organic ones, are really important. So, me talking to some other director's team whose work I'm interested in from time to time, or directors who get their hands in, not to try to tell people what to do, but to understand the work that's being done in various places of the organization. It also helps bridge the gap between work as imagined and work as done for those other teams because they know what other directors think that those teams are doing. JESSICA: Oh, yeah. And those people who aren't directly on their team are more likely to give them actual information. AMY: Ooh, yeah. Because it's outside of the normal authority chain. JESSICA: Yeah. 
I mean, you still get some deference but not the same kind of deference as if it were your director. Amy, I want to hear more about you. What are you doing now? AMY: What am I doing now? Oh, I'm doing a lot of different stuff. As you mentioned earlier, I am doing some DevRel stuff. My role is officially hybrid. I'm Staff SRE by title. But the part that I think I get the most fun out of right now is the DevRel stuff, which is doing talks, working with people. I do a lot of drop-in, like professional services stuff now that's been added on. We produce a bunch of content. It's hard to even put my mind around how much we're doing. In my SRE role, I own our availability program at Blameless. So, I'm training my fellow SREs how to run that process: basically setting it up, making sure all our incident reviews happen, making sure that SLOs happen, making sure that things get reviewed and touched on, on a regular basis. So, we have a process that we're running that basically encodes all of that into a durable process that's repeatable. So, we put that together and then I do incident command. I'm the only incident commander right now, which is kind of cool. But it's a startup. And then I do regular SRE stuff that most people would think of, like breaking Kubernetes and stuff like that. REIN: So, you moved from some large companies to a startup. What is the difference? SRE, I've historically thought of it as being how Google does things. JESSICA: [Laughs] REIN: And whenever I've tried to do it, I've always had to adapt it pretty heavily for a different context. So, what do you do to make SRE work at a startup? AMY: The biggest thing, and I tell people this all the time, is that Google didn't tell the whole story in the books, and they couldn't. That's just being fair. And so, I take the ideas out and look at the goals. So, like SLOs, the goal is to create a feedback loop that is kind of chill. 
It's on a day or week or month kind of cadence that you review. But what it's doing is creating a feedback signal from production to the product, is the goal. And so, instead of looking at all the ins and outs of how Google does it, I focus on what is the business problem that we have? What are our customer journeys? And we're building a product around this, so I'm talking about this in fancy ways all the time. But the idea being that we can measure things happening in production and feed that back to the product folks so that they have the information to make better decisions. Now, it takes months and years sometimes for this to all settle in and start to work the way that it's sold as in the [inaudible]. And that's part of what they leave out is that they worked on that for 10, 15 years to figure out that model. And you've got to get people attuned to the signals and to start understanding what the signals mean and what kind of decisions to make based on them. So, that's part of the work is one, I focus on that durable process, which sets us up for this change over a long period of time and then really just establishing the feedback loop. So, mainly it's incidents and SLOs from production back into the product. And so, my team, the SRE team is the conduit for that. And it can work in a small org or a large org. Just in small orgs, you deal with things like really low statistical significance in your signals, stuff like that. REIN: One of the really interesting things about SRE for me is that in terms of how it was originally designed and framed, at least the way it's presented in the books. It's an axiomatic system, by which I mean they started from a premise, an axiom which is operations ought to be done by engineers. We ought to take an engineering approach to operations. And then they sort of constructed a system from first principles. AMY: I don't think that story is true. [Laughs] REIN: That's how it comes across in the book. 
So, like the very first paragraph of the book is, we had to take an engineering approach to operations because we didn't have enough people and so on. And it also does a good job of, like you were saying, presenting the sort of hierarchy of needs. We need to take an engineering approach to operations. And so, in order to accomplish that, we need X. In order to accomplish that, we need Y. And so, in order to accomplish that, we need SLOs. And if you can reconstruct this sort of chain of reasoning and apply it to your organization and understand the goals at the various levels, I think that's what made it easy to, or possible to, adapt it for me. AMY: Yeah. Another one of those things that I thought about while you were going through that is they tell that story, and I actually feel like it's a little misleading. REIN: I mean, it's a just-so story, right? AMY: Right. But what really happened is they decided to invest in operations. And the way that Google did that was by throwing engineers at it, because that's what Google's really great at, is hiring tons and tons of engineers without really caring what kind of people they are. And so, they had engineers to throw at the problem. And when you have tens of thousands of engineers, everything is an engineering problem. But what really needed to happen at the end of the day in every organization is invest in the operations, code, infrastructure, people, instead of just treating it like this garbage dump where your code goes and magic happens and customers get to use it. That's the old model. And really, that's I think the big sea change of SRE is investing in operations, having people who are professionals, who develop expertise in how production systems behave. And that leads to behaviors like, "Oh, we should automate this," because automating, say, distributed systems requires an absolute ton of context. So, you need the expertise and stuff like that. So, I really think it's just about investment. 
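As a side note for readers, the SLO feedback loop Amy describes, measuring what happens in production and feeding a chill, periodic signal back to the product, is commonly implemented as an error-budget check. Here is a minimal sketch; the function name, traffic numbers, and 99.9% target are illustrative assumptions, not any particular company's implementation:

```python
# Hypothetical sketch of an SLO error-budget check: compare failures
# observed in production against the failures the SLO target allows,
# and report how much of the budget is left for the review window.

def error_budget_remaining(total_requests, failed_requests, slo_target=0.999):
    """Return the fraction of the error budget left for this window."""
    # The budget is the number of failures the target tolerates.
    allowed_failures = total_requests * (1 - slo_target)
    if allowed_failures == 0:
        return 0.0  # no traffic (or a 100% target) means no budget to spend
    used = failed_requests / allowed_failures
    return max(0.0, 1.0 - used)

# A week of traffic: 1,000,000 requests with 400 failures against a
# 99.9% target. The budget allows ~1,000 failures, so ~60% remains.
remaining = error_budget_remaining(1_000_000, 400)
print(f"{remaining:.0%} of error budget remaining")
```

When the remaining budget trends toward zero faster than the review cadence expects, that is the production signal telling the product folks to shift attention from features back to reliability. And as Amy notes, in a small org a window like this may hold too few requests for the signal to be statistically meaningful.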
REIN: It gets back to where do you spend your attention. I'm interested in how you adapt the practices or the mode of the SRE team to a smaller company. So, for example, when I've done this, I took a much more sort of consulting approach. I didn't have 100 engineers lying around who could just become SREs. I had like three people. We literally couldn't own everything. And so, the way we adapted to this was by taking on a consulting role within the organization. AMY: That's a super common approach. I'm actually tossing this around. So, the Netflix folks, they call their core SRE team The Core SRE. And so people have been bothering them for a while to say, "Oh, you need to write down this core SRE model." But I really think there is a centralized SRE model that isn't very well explained out in the world. That is what you're describing, right? There's that consulting arm of it. There is what processes do they own? What are they responsible for? And how do you tie it back to the business goals? Most people think, "Oh, well, infrastructure." But if you pull back a little bit, it's stuff like what I describe. These processes for product feedback from production, these processes for resolving things going on in production and feeding it back into the larger system. I guess I said the same thing twice. But the consultancy, because we can't keep hiring SREs as these teams grow, and you have 15, 20 service teams and you can't put SREs on all of them essentially. So, you really want to look at a centralized SRE team, because we're some of the hardest people out there to hire right now, and then figure out how to get the leverage out of it. JESSICA: Oh, yeah. And in Team Topologies, they call that the enabling team. The team that doesn't take responsibility for all of, in this case, your SRE, but works with other teams to enable them to take responsibility for their own ops and SRE. AMY: Yeah. And I add on top the enablement, but also the pulling of accountability. 
That's what most people think of: SLOs or incident management. That's where you have that ability to see, "Hey, this team we've enabled is veering off course and we're here." Then we can recognize that, even if they don't know it, and go and engage with that team. So, it's back to enablement, but you need that layer of detection of when things are going sideways so that you know when to reengage. JESSICA: So, not only do we give you help when you need it, but we tell you when you need it. AMY: Yeah, it's kind of how it goes a lot. REIN: When I've been trying to sort of explain at a sort of executive level what the goal of SRE is, I came up with something that I like and I'm wondering how you feel about it, which is that the goal of SRE is to change the way your organization relates to the systems it runs, to production, and so on. AMY: I like that. I like that a lot because that is the goal. REIN: So, for example, SLOs are changing the way you relate to the health or the success of the systems. AMY: Right. REIN: Incident response is changing the way you relate to failure. AMY: Right. And both are creating that opportunity for people to experience what their product does in the real world. So, as software engineers, a lot of times, you're so disconnected from -- you write the code, you put it in a deploy system, the tests run, and eventually a feature flag gets flipped and your code is in production. But largely, you've lost direct touch with it by the time you've committed the code and it goes off into abstract land and eventually ends up in customers' hands. But that part where we bring you into, say, incident management and incident analysis is connecting you back to what your code is doing in the real world. So, I like that. REIN: Yay! JESSICA: Oh, it's about time for reflections. Reflections is the part of the show where we usually each say something that we're going to follow up on, something that particularly struck us. 
Amy, in lieu of our reflection, can I have one more question? AMY: Sure. JESSICA: Why do you hate the word "matured"? AMY: Oh! [Laughs] As a neurodivergent person, a lot of my outward behaviors took longer than my peers' to develop. Even well into my career, I would get described as immature very frequently because I would see things that I thought were wrong or something like that and say, "Hey, that's wrong." JESSICA: [Laughs] AMY: Or just say things more bluntly than a lot of people normally would. But after a couple years of acclimation into the industry -- JESSICA: They would squash you. AMY: Right. I don't think I have been squashed yet, although it feels like that sometimes. And so, that word I hear all over the place: immature, immature, immature. And my experience with it is that it just doesn't really mean anything. JESSICA: [Crosstalk] behave the way I want you to? AMY: Right. And so, I just don't like it. And I try not to use it too much because most of my experience with it is seeing it used to dismiss people. JESSICA: Super judgy. Thank you. REIN: I'm thinking back to how Amy started this conversation by talking about messes and the way she thinks of messes as both being beautiful and emergent. Beautiful things can come out of messes. This reminds me of Russell Ackoff, who uses mess as a technical term. A mess for Russell Ackoff is sort of like what the folks call a tangled network. It's an overlapping and interpenetrating system of problems. It's the way things relate to each other. It's all of the hidden dependencies and hidden effects in these systems. Russell Ackoff spent a lot of his career trying to figure out how to deal with messes. And he has a quote that is sort of one of those very dense quotes, like "do the simplest thing that could possibly work," where all of the words are very meaningful, at least to me. And the quote is that a problem is an abstraction extracted from a mess via analysis. 
AMY: Can you say it again, please? REIN: A problem is an abstraction extracted from a mess via analysis. The things that we think of as problems are actually not real things. They are things that we have invented through an analysis of some total system. AMY: Right. REIN: And they are abstractions. They're not concrete things. They are ways of understanding some facets or aspects of the larger mess, the larger system. And in some sense, our understanding of a complex system is largely about how we have analyzed it into distinct problems. One of the things that this allows us to do is to treat problems as being independent of each other. This is useful to us because we couldn't solve them otherwise. But it's also not true, and that sometimes bites us. But whenever I think about trying to solve problems, I always try to do it with some understanding of the context: not only what I am valuing by reifying it as a part of the problem, but also what I'm dismissing and why I'm choosing to do that, and some understanding of the relationship between the problems I perceive as a part of the system. To get super political for a second, race and class in America are two problems that we like to analyze separately, but they're a part of one mess. AMY: Right. Wow! I mean, we're all staring at that mess a lot right now and trying to extract problems from it. So, I imagine that's something a lot of people are going to relate to. REIN: Yeah. It is really interesting to me not only to see how I understand the world in terms of problems, but also how other people understand the world in terms of problems. So, the way they describe problems will tell you a lot about their understanding of the mess, what they value, what they think is important, what their goals are, what they're striving for. 
AMY: But I like the flip side that says that I can look at a mess and choose not to extract problems from it and just see it for what it is, which is how I hope people see me sometimes when I'm a mess, instead of just the problems. JESSICA: Yeah, and there's value in seeing myself sometimes, and not expecting anyone else to see everything, because sometimes I'm the only one who can just fully appreciate and sit with this particular mess. AMY: I picked up on a couple of things, especially later in the conversation. One is myself. I really want to read up more on this cybernetics thing that my friend, Rein, has been bringing up pretty frequently, at least when I'm paying attention. And really, I'm interested in these new models. That part of the discussion really resonated with me because we keep trying the same patterns and keep complaining about the same outcomes. As engineers, we're like, "Okay, so we've got to change the approach." We've got to change what we're doing. And I would love to learn more about these. I'm going to reflect more on that and look at more of these models and learn more about them. REIN: There are a couple things I can recommend there. One is an old book called Thinking by Machine: A Study of Cybernetics. And this was published originally in 1957. So, it's really early in the journey of cybernetics, but it has a foreword by Isaac Asimov, if that's the sort of thing you're into. Another one is Cybernetic Revolutionaries by Adina --. JESSICA: Eden Gallanter. REIN: No. JESSICA: No? REIN: Eden Medina. JESSICA: Eden Medina. Oh, yeah. Gallanter is the Tarot cards. REIN: We actually interviewed her on the podcast, which was one of the highlights for me. This one and that one are my two favorite episodes. I did really like this episode. But that book situates cybernetics in terms of the historical struggle of Salvador Allende's Chile in the early 1970s. AMY: Oh, gosh. JESSICA: It's a very pleasing hardcover book. I have a signed copy. AMY: Nice. 
JESSICA: [Laughs] REIN: It also gets into the history of cybernetics. There are sort of two largely separate streams of thought, one in America and one in England. And it talks about the differences there. I really enjoyed that book. And also, the stuff that happened in Chile is incredible. JESSICA: Mostly not in a good way. REIN: Indeed. There were some things that you can take away from it. But it was not, yeah, not a fun time. The last one I would recommend is if you want more about the viable system model in particular, Stafford Beer wrote about that in his book called the Brain of the Firm, which sort of gives the game away in the title. JESSICA: [Laughs] AMY: I appreciate that. REIN: Yeah. JESSICA: Amy, thank you so much for joining us today. AMY: Thank you for having me. I had a great time. JESSICA: And thank you to all of our listeners. Please support us on Patreon at Patreon.com/GreaterThanCode. And then you can join us on Slack and chit chat with all of us. And yeah, keep hanging in there. We can still work.