JOHN: This episode is sponsored by Pantheon. Pantheon is the platform of choice for more enterprise Drupal and WordPress sites than any other platform. A platform with superpowers needs to be run behind the scenes by superhumans of all diversities and backgrounds. Pantheon actively supports non-profit initiatives throughout the Bay Area and beyond, such as Techtonica, the Tech Equity Collaborative, and Lesbians Who Tech. Learn more about career opportunities at pantheon.io/GreaterThanCode.

ASTRID: Welcome everybody to Episode 141 of Greater Than Code. My name is Astrid Countee. I am a doggy mama, soon to be a little human's mama. And today I'm here with my great friend, John Sawers.

JOHN: Thank you, Astrid. I am at a loss for words to concisely describe myself.

ASTRID: [Laughs] You have amazing hair, John.

JOHN: Thank you. I have amazing hair, okay, we'll start there. And my work is focused on helping people be better at who they want to be. That's it. And I'm here with Jessica Kerr.

JESSICA: Good morning. Today I am at UberConf in Denver and tonight I am speaking about Principles of Collaborative Automation. So I am really interested in talking with a group about oopses. And I am here with my friend, Rein Henrichs.

REIN: Hello everyone. I am Rein Henrichs and I am a person who cares about things that are viable, like people and teams and software and infrastructure, and about making things more viable by making them more adaptive. And I am here with my friend, Chante Thurmond.

CHANTE: Hello everyone. Chante here. I am going to deviate a little bit and say that I am present. Today, I'm feeling very much so in my body, and maybe it was because we had a full moon yesterday or something, or overnight, I don't know. But I am feeling present and in my body, and today I'm showing up as a human and thinking about humanity and really getting to the sentiment of the show, Greater Than Code, and what we can contribute to the conversation today. For me, it will likely be very [inaudible] morning right now.

JESSICA: Today we have a show with just the panelists. That's why we all gave a tiny intro. And as we were chatting about what we were going to chat about, we came to the interest in thinking as a group. John, you just posted something interesting about interpersonal neurobiology into the chat. What is that about?

JOHN: Well, I'm hardly an expert. I've just heard one of the people who's part of this institute on a couple of different podcasts, talking about his research on different types of meditation that you can use to expand your awareness and understanding of others and yourself.

CHANTE: See, that was so fitting for how my vibe is today. Wow. Love it.

JESSICA: Interpersonal neurobiology emphasizes how both the brain and mind are shaped by relational experiences, especially by emotionally meaningful ones.

CHANTE: So how does that relate to what we talk about on the show?

JESSICA: Well, do you recall any emotionally meaningful experiences when you're working on software?

CHANTE: I don't work on software, but that's a really interesting question.

JESSICA: Most of [inaudible] college is frustration.

CHANTE: Or has an emotion maybe evoked some action from you that jump-started a project or changed the code you're using? Having frustration or anger or whatever: "You know what? Forget this. We're going to try it this way."

JESSICA: Oh, yeah. We talk about code smells triggering us to refactor, but isn't it more like a feeling?

JOHN: It's definitely a feeling.
CHANTE: So, good that we know that folks who code have feelings. So now, let's dispel that myth, because people think -- like seriously, this is a thing people think -- "Hey, I'm going to hire somebody to go in this dark room over here and code [inaudible] work on this so we can crank this out and get to market."

JESSICA: Oh yeah. But if we didn't have feelings, we couldn't think. Humans can't make decisions without emotions.

CHANTE: So they say.

ASTRID: I think some of the best programmers that I've met are actually really good at working with other people. They're really good at teaching and also learn a lot at the same time. And they can do a lot and they can communicate really well. It's really rare to find people who are naturally gifted like that. A lot of times you have to learn it. And I know that some of those emotional team experiences that I've had have been where people are not open to learning from certain people. There's a hierarchy that's not talked about, and they don't want to learn from someone who they think is below them on the hierarchy. So it can be kind of hard to be in that group setting with people who are not open to that.

CHANTE: It's a really interesting thing. I'm curious, coming at this from a non-technical, non-coding background. So all four of you have the programming element to your day job.

ASTRID: I don't in mine anymore, but I've done it before.

CHANTE: Do you feel like now, in 2019, it's okay that we're talking about something esoteric like this, even having this conversation about code? I don't know that this would have been welcomed 10 or 15 or 20 years ago. Am I wrong about that assumption?

JESSICA: I wouldn't have understood it 10 or 15 years ago.

CHANTE: Why is that?

JESSICA: Well, because I didn't have the context of all the other discussions that are happening these days. And we are talking about being fully alive in your team, and we talk about relationships and communication between people, and between people and computers. UX, for instance, is a big conversation about how do we make computers and people able to communicate with each other. And what Rein was talking about earlier, he was talking about blamelessness, and the whole post-mortem discussion is another way that our industry has started talking about the way we talk to each other.

CHANTE: Did you say post-mortem discussion?

JESSICA: Yeah, like the post-incident discussion. Sort of what happened in this incident, because incidents are super fascinating. You know how psychiatrists like to study people who've had a rod shot through their head and are missing part of their brain? There's so much we can learn about how things actually work when they don't.

CHANTE: Yeah. That's interesting. Is it in your vernacular then [inaudible], when you say post-mortem, you're talking about like a product or a service that you're turning off?

REIN: I've been using "incident review" to emphasize the fact that no one died.

CHANTE: [Laughs]

JESSICA: Yeah, that's a better term. Post-mortem is a colloquial [inaudible] term for incident review.

CHANTE: Okay. It's interesting you're saying that, because I'm coming from the HR world right now, but also, I have a nursing background, so post-mortem to me is truly [inaudible].

JESSICA: So you've been in real post-mortems.

CHANTE: Yeah. An incident at the hospital would be different than for the person who is building the architecture of the software that we use.

JESSICA: What is that like, a real post-mortem in a room, in a hospital?
CHANTE: It's cold, it's cold. It's non-personal. It's probably very similar to what you may experience in your role: it stays very objective and nothing's personal necessarily, even though you're preparing a body for the morgue and their family. But you have to compose yourself in a way that you take your emotions out of it and get the job done. But you still make sure that you give people their dignity. So you have a body, you're covering it and respecting the body and everything, but you're not really allowed to have too many emotions. If I cried, I was like, "Stop crying, you've got this job to do." There's nothing like being in a room when you hear somebody's last breaths, which really can be traumatizing for people.

JESSICA: Can we complain that they want us to not cry as programmers?

REIN: Anyway, Amazon was down. Isn't that terrible?

JESSICA: [Laughter]

CHANTE: Hey, what was that you just did there?

JESSICA: Rein, you were talking about the things that let us not blame people. How does that relate to being non-personal?

REIN: I think one of the reasons that we blame people is that our culture -- most cultures, I think -- has this sort of hero myth that says that you can assign responsibility for things to one person, to the hero. It shows up in so many stories. It's a fundamental part of most myths. And I think this idea that things happen because individuals do them, that progress happens because of some person -- the great person myth and so on -- makes it easy for us to want to blame someone when something goes wrong. It's also difficult to understand complexity and systems. The systems [inaudible] are often so complex that there isn't one simple obvious cause for events that happen. But we want to make meaning out of these situations. And one way that we can do that is by blaming someone, by assigning responsibility to some agent.

JOHN: Then it's all sort of squared away. It's handled. We know what happened. It was just that this person was terrible at their job.

REIN: Yeah.

JOHN: Therefore, nobody has to think about it any further.

REIN: And if you look at human error, the study of human error, what you find is that when you ascribe something to human error, you basically just stop asking questions. That's the catchall explanation. Someone did something bad.

JESSICA: Because there's always something a human physically could have done differently. But why would they have done that, that day?

ASTRID: When you talk about blame, it makes me wonder if one of the underlying issues is respect. Because I was reflecting on the very first time we had a post-mortem at my job. It wasn't a programming job I was at at the time, it was an operational job. And we had this rush order. It didn't make it out on time. That was a big issue. And our upper-level bosses called the team into a meeting, and we tried to just do an outline of what happened: this happened at this time, and this went to this stage. And what happened? We never made it through the whole meeting, because people started yelling and screaming. The idea of their name being attached to a task was so overwhelming for them that they felt the need to completely discharge the blame from themselves and say, "Well, it's not my fault. That's because your department doesn't do this." And I was like 23, so I was sitting in the room like, "Oh my God, this is what work is like?" I had no idea.
And I think part of the problem in that meeting, and in some other experiences I've had, is that people on the team don't know how to respect each other. And so the minute that they get a chance to call somebody out and be like, "It's your fault," then there's a backlash. And I don't know that I've had very many experiences where you can actually do that sort of incident review with people who don't already have that underlying respect, and have it go well.

REIN: I think there's a litmus test for whether your team can conduct incident reviews in a blameless way. The litmus test for being blameless is: can you talk about who did what without blaming people? That's why I think this idea that we need to be blameless, if it's not applied with some understanding of how blame works, will lead to maladaptive behaviors where you don't do things like ask people questions about what they did, because you know it'll lead to blame in your organization. Another thing is, even if your team can actually be blameless, what do you put in the incident report? Who's going to read it and make judgments and blame people, even if your team doesn't?

CHANTE: This could be like a step backwards a little bit, but calling it an incident report to me means somebody maybe chopped off their finger. Why are we calling it an incident report or [inaudible] something that went wrong or right with tech [inaudible]?

REIN: The really interesting thing for me is that even calling it an incident means that we miss out on other opportunities for learning.

CHANTE: Right, yeah.

REIN: There are significant events that happen that aren't necessarily a case of something failing in an obvious way. But something was very surprising: we were confused by what happened when we turned this thing on. Customers didn't seem to be impacted, but we didn't understand it. And that's a signal that there's something you can learn.

CHANTE: Because I feel like calling it an incident -- the word incident, and post-mortem -- outside of a software team, even for people who work at a software company who are maybe on the sales team or marketing, they might not even know that that's the vernacular you're using amongst your team.

REIN: Yeah. When you call it an incident, there's already the presumption that something bad happened.

CHANTE: Yeah. And then does that prohibit people from actually... because we're still conditioned socially, outside of this world, to know that an incident is bad. "But, hell no. I'm not taking blame for that."

ASTRID: I feel like incident kind of makes it sound like it's a person's fault, because it seems like there's this kind of expectation that systems are benign and they're supposed to just work. And if they're not just working, somebody somewhere did something wrong.

REIN: Yeah. The reality is that complex systems are failing in all sorts of ways all the time. And most of them don't matter. Complex systems are undergoing a huge range of different behaviors at any one time. And so, what we do is we make a classification where we say, "This behavior, we consider it to be a failure." And that's a human construction.

JOHN: Yeah. I think I've seen some people talking around the idea of also doing some of these incident reports for when things go well. Like if 19 steps need to fail before the actual customer impact happens, and 18 of them fail, but then the last one doesn't, and nobody notices something went wrong. Analyzing that situation is also really useful. It's like, what did we do right here?
What parts of our systems are actually doing what they're supposed to be doing? And also having that as part of the context of the discussion, I think, is helpful.

JESSICA: Yeah. That's called a near miss.

REIN: Even when things fail. For any one failure, there are often many things that didn't fail that could have failed.

JESSICA: Wait. So that's the Safety-I, Safety-II distinction. Safety-I is about "don't fail." Safety-II is about "how does this ever work?" It's about studying why things do work.

REIN: Yeah, exactly.

JESSICA: Astrid, I loved what you said about there being this expectation that systems are benign, because, yeah, we want to boil it down to a person. Actions may be carried out by a person, but those are in response to a decision that was made in a system.

ASTRID: Yeah. I think that once a system is in place, we forget that humans created the system and decided what goes first, second, third, and why. Because once it's in place, we act as though it doesn't matter anymore. Now it's just: follow it. And when you don't, or if you can't -- which I think often doesn't get really delved into -- this thing was created this way, but this is actually not the way we work, or not the best way to work, or it doesn't meet the expectations for what you ask of me. So if you can't, then it's your fault because you're not meeting the benign system's overarching rules. And it's really hard, especially when you're joining this bigger organization that has all these systems in place, to get that changed. It directly affects the morale and the ability of the team to work together, because there are too many things boxing you in and you don't feel like you have the room to make the adjustments that are necessary to just be creative in your job or see new opportunities. It's really hard.

JESSICA: That is emotional trauma there.

ASTRID: Yeah. And I think that's also a lot of what people complain about in their work in general. Like, "I love what I do except..." And then it's usually something like that. And there's even all kinds of workshops that managers can get sent to to try to break these systems or make these systems more open. But I think it would be nice if we can remember that humans made them, and that if humans made them, then humans can unmake them or they can be changed. Or maybe the problem isn't the actual things you want to get done, but who decided that that's how you do it. Maybe the decision's in the wrong place. That seems to be a much harder thing to try to get enacted as an individual contributor, or even as a team lead sometimes.

JOHN: Yeah. Things sort of harden into stone once they're written out. They become part of the laws of physics and this is just how things work.

ASTRID: [Laughs] It's like, "What do you mean you want to change this system? That's just impossible." But it is possible, you just don't want to. And I think it does circle back to what Rein was talking about with blame, because once this thing is in place and we've seen it work, then you don't want to be the one who changed it and now something doesn't work. Even if it's a different something, even if it is a progression towards something better. People are very afraid of being the one who -- because you did this thing, you broke this chain -- now it's your fault that stuff is broken and that stuff is not working.
CHANTE: This makes me think a lot about being very intentional about your organizational culture and how culture should influence the systems that you build. But if you're not careful, you can build a system without that, and this could be a result of it. It's really hard to go back and insert compassion, and treating people like humans, into the work you do and the systems you're operating from. So it's very interesting to hear this conversation.

ASTRID: I think some of it is we have to get better about equalizing what humans can do to what our systems and our machines can do. Because a lot of times, that's the reason why these systems are made. It's like, we want to take out the ability for humans to make errors, so we're going to make this system, which should be error-proof, even though it was made by humans in a way that humans think. But because we codified it in a machine language, we think that it's no longer going to be affected by humans. And I think that kind of thinking is already the root of the problem, because then it diminishes what a group of people may see and say: "Well, this doesn't work. And we think there's a better way." And if you can't come up with enough evidence to overrule this previously voted-upon perfect system, then it's just not going to get in.

JOHN: Yeah. I'm always struck by the ability of a group of humans to come together and create a suprahuman organization that's basically just made out of thoughts. It's legal requirements, it's code, it's all these things floating around, and this is now an entity that can perpetuate itself. You can swap out all the humans eventually, and the new people are in there, but it's still behaving in exactly the same manner. I think that's very powerful because it allows you to create an institution that will last hundreds of years, but it also has this downside that it's so hard to change those things once they've gotten into place. And what you were saying, Chante, about how the culture is important: if you can build in a culture of regularly reevaluating these things, of doing experiments to try out new operations, new procedures, new ways of doing things, so that that can feed back into the system and keep it evolving, I think that's incredibly important. Because otherwise you end up with, "Well, in 1986, this one person made this mistake and now we can't ever change this one variable."

CHANTE: Right. And it also makes me think of the fact that we have this terrible habit. It seems that we have built these machines, and we in some way have expectations of humans. We had expectations of humans to behave like machines with the industrialization of the world, because we couldn't [inaudible] computers. And now we're getting into this new era where, if we're not careful to remember that computers are simply a resource and a tool, versus a true extension of you or something better than you, you run the risk of doing it again: of basically saying that we're failures if we don't work to the same capabilities as our supercomputers. At the end of the day, the supercomputers cannot program us, but we can program them.

JOHN: Yeah. I've always found it interesting that there are two different popular views of technology and computers in general that I run into, and sometimes they're in the same person, for people that aren't developers, people who aren't deeply embedded in the tech industry and don't really understand how software is made.
They tend to either think that computers are infallible and that whatever the computer tells you is automatically correct. By the same token, they're also very fragile. And if you click in the wrong place, you can break it. And then you have to be very careful about what you do, because your email will stop working at some point if you click the wrong button. I think those are both very harmful attitudes, because if you think it's perfect without realizing there were 53 people involved in this project and a couple of them made some poor decisions about how to calculate this number that you're supposed to base your big tasks on, then you're just going to be like, "Well, the number says 53, so that means I have to do 53 today." And then by the same token, you could be completely afraid of the technology, saying, "I can't do anything because it's just going to break and it'll be my fault that it broke," rather than, "No, it's Windows, like 53 million people wrote it and there's a lot of bugs."

JESSICA: Ooh, ooh, ooh. I heard a great phrase the other day: precarity training. It's when you teach people to dance around and be super extra careful not to make a mistake, or to go to great efforts to work around the system instead of fixing the system.

JOHN: Yeah, that's a great phrase.

REIN: One of the things that happens with these institutional things, cultures, teams: we learn these rules that we think we need to be viable. Virginia Satir actually calls them survival rules. Well, you can check that box off on your bingo cards, everyone.

JESSICA: [Giggles]

REIN: But that might be a rule like, "I can't ever be wrong." And we learn those rules because they help us survive. They're adaptive. We were in a situation where that rule made life easier for us. That rule was necessary for our survival. But then we move into a new situation, and some of those old rules no longer apply. So you'll find that sometimes, even in teams that have high psychological safety, that are capable of being blameless, people will still not talk about things that could be helpful, because they have these survival rules that they haven't been able to reevaluate in their new context.

JESSICA: Yeah. Because growing up, you might get an example of that. A lot of parents feel like, as a parent, you should never be wrong. I did not understand that. [Crosstalk] opportunity to say, "Oh, I was wrong. You were right," to my kids.

REIN: So you might learn as a kid that if you come home with an A on your report card, you get treated well. And if you come home with an F, you get treated poorly. And so what you learn is: don't come home with an F on your report card. And if you do get an F, make sure they don't see it. So that survival rule sticks with you.

JESSICA: And then when you look at, like, if you go to grad school for instance, you need to publish a paper in such a way that no one can say you're wrong. You have to justify everything, and therefore your academic papers are not very readable. But then, yeah, in software, it's the opposite.

CHANTE: Is it? It's the opposite in software, though?

REIN: It can be.

JESSICA: If it needs to be.

CHANTE: Yeah.

JESSICA: Like, being wrong is exciting. It means I just learned something. Then it's painful because you have to deal with the regret. Because if I learned that what I did was wrong, I have to deal with regretting that. I was dealing with a child the other day: I corrected her on something and she was like, "You're saying I'm a bad person!" And I'm like, "No." But there is that.
If you learned something, then is your past self a bad person?

CHANTE: Absolutely not. I think we have to be careful not to judge the whole way around. That's the thing we should be teaching people, and definitely trying to make sure that we welcome it into the workplace. I think we will see that. But it kind of gets back down to this blame or blameless game that Rein was talking about.

JESSICA: Yeah. That attribution of who did what: that's the past you, who was in that situation. And clearly, the now you, who is in this meeting, is different and has different information.

CHANTE: Yeah. A day in the life: literally in one day at work, you can learn a lot, and hopefully you come back to work the next day a little bit better and apply whatever you learned yesterday to the work you're doing today and make it better. And if it's a product, we see that quickly. I mean, that's one thing I do appreciate about the tech world: we see change happen much quicker, it seems, than in non-profits, for instance.

JESSICA: How so? What kind of change?

CHANTE: Oh, for one thing, folks who are operating in agile environments and doing two-week sprints: that basically sets you up to have a conversation about what's broken and what's working, and gives people the expectation that we're going to be talking about this regularly. Versus, "Oh my God, we had this huge campaign for a nonprofit, a yearly gala or something. We fell flat on our face. We didn't meet our goal. And now we have to re-strategize for the next year and spend the next several weeks learning from that." When you're building a technology, you're doing it in two weeks. But in the non-profit world, where we have no tech or low tech, it might be that you're talking about your mistakes or your shortcomings every six to 12 months.

JESSICA: Yeah. That's way too long to remember what you did and learn from it.

CHANTE: Yeah. Or to get fired for something you did six months ago or a year ago. [Chuckles]

JESSICA: Wow.

JOHN: Yeah. That's also one of the downsides of management work: the feedback cycles are so long that it's really hard to get that great, "I'll do a thing. Two weeks later, I will know whether my team is doing better." No.

CHANTE: Yeah. Management sucks. [Chuckles]

JESSICA: Yeah. That's really hard work.

CHANTE: Rein, I'm curious, in terms of the talk you did, or that you're building, on blame: are you thinking about this from a managerial leadership perspective or from an individual contributor perspective?

REIN: Yes.

CHANTE: Which one? Or both? [Laughs]

REIN: Both. Leadership has an opportunity to model behaviors in a way that I think can be really effective here, but blame happens in everyone's brains. There's a path that you go down, from detecting an event, that leads to blame. And it happens in all of our heads, and we all have the opportunity to do something different.

JOHN: I liked that way of framing it, rather than -- like again, you're not blaming people for blaming people. You're saying this is a thing that's going to happen, and so you just find the ways to short circuit that process.

REIN: If any of you are familiar with the Agile Prime Directive, which is: always assume that everyone is doing the best they can with the information they have. The reason that works in this context is because it short circuits blame.
It short circuits blame at the decision point where you decide whether someone did something intentionally, because if they were trying to do their best and the bad thing happened, then they didn't intentionally cause the bad thing to happen. The thing about blame that makes it different from these other things we're talking about is that blame is a moral judgment. Blame isn't just that you did something wrong. Blame is that you did something morally wrong, that you did something that's incompatible with our cultural values. And that's actually the adaptive value of blame: to police norm violations. If someone punches me in the face, my blaming them is good and appropriate. In these situations, we're not talking about anyone doing anything morally wrong. So, blame is a category error. It's a type error. And if you are talking about someone doing something morally wrong in an incident review, you're going to need to do a very different incident review.

[Laughter]

REIN: The incident reviews we have are all predicated on the idea that no one did anything morally wrong, which is why blame is inappropriate.

JESSICA: They did what made sense at the time, given the information that the system provided them.

REIN: And if you find out that someone is malicious, or you think you have reason to believe that they're malicious, then the incident response strategies that we have aren't effective, because their preconditions are violated.

JESSICA: Right. But that's such a rare case.

REIN: Right. So we assume that people aren't morally wrong. And if that assumption holds, then blaming is just a category error. It is the wrong type of judgment to make.

ASTRID: So I have a question. What happens if the leadership is blaming in these different types of post-mortems or reviews? How are you as an individual contributor supposed to do your part to turn that around?

REIN: Then you're screwed.

[Laughter]

REIN: The thing you need to do is whatever you need to do to survive.

CHANTE: That just makes for such a bad work culture.

REIN: The team isn't viable, but you can continue to be viable within this circumstance, and then you can go find somewhere else, and so on.

ASTRID: I asked because I know a lot of times, managers are managing people on teams. But then inside of that whole managerial structure, they are the individual contributors. And so they feel like they don't have power either, so they kind of like stick it in and throw it down.

REIN: I don't know of a way within hierarchical management structures to deal with this other than finding good managers. There's managing up, but ultimately we have not a lot of opportunity to effect change upwards in hierarchical structures. And if you can, it's because you have a manager who probably isn't doing the bad thing in the first place, because they can learn and because they have empathy.

JESSICA: Hierarchy is not the problem. Hierarchy is part of every system from biology on up, and the problem there is the command and control hierarchy: the idea that directives and instructions go only downward. In any healthy hierarchical system, every component in the system also has an influence on the next level up.

CHANTE: And down.

JESSICA: And down, absolutely. Down, we can see in command and control, but we think it goes only one way, and that's completely unhealthy. Every organism in an ecosystem has an effect on the ecosystem. Every ecosystem has an effect on the neighboring ecosystems and the environment.
REIN: Russell Ackoff actually has a really cool management structure where every manager has a board of directors, and that board of directors includes their boss and all of their direct reports. And their direct reports on the board get firing privileges over that manager.

CHANTE: Ooh, that's interesting.

REIN: In that circumstance, you can have power and effect change upwards. But in command and control hierarchical structures, where managers blame, and blame goes downwards and decisions go downward, there's just not a lot you can do.

JESSICA: And the coder goes to jail because the company violated emission standards.

REIN: Yes. Those structures are designed to combat attempts to change their structure.

JESSICA: In any system, including code, the first question I want to ask in system design is: how do we change things? The US Constitution is designed to slow change. It's designed to be preserved as a default, but also to adapt gradually, and to have systems underneath it that can change at a faster pace. So, a good manager will let the team change at a faster pace than the wider system can. And that's where you can get your agile teams and your retrospectives and your teams working this way.

CHANTE: And I think this is particularly true: we definitely see this on the tech teams. But on the non-tech teams at a software company, for instance, I don't know. I wish we could hear from somebody who's worked for an agile organization but clearly is not on the tech team, and how they see a lot of progress happening in terms of the software product teams, but that's not necessarily translating to the marketing and the sales teams.

JESSICA: To the listeners, if that person is you, please send us an email at panel@greaterthancode.com.

ASTRID: I've been on some of those teams where I'm not on the tech team, but we're in a very tech-focused kind of company. And I think one of the problems is it's not so easy to speak up when you don't have something tangible. Because if you're working on software, there's an actual tangible issue that you can bring up and other people can see, and there's some objectivity to that. But when you're not, when you can see something -- maybe you have experience that somebody else on your team doesn't have and you're trying to convey that -- it's a lot harder to say maybe this is something we should be discussing, because then it becomes your opinion.

JESSICA: [Inaudible] is especially valuable because we can see when it breaks. Whereas in a more socially composed, less technical system, it's more subjective and easier to blame a human.

ASTRID: Yes. And social systems are inherently more complex. And so maybe some of it is, you have to have some experience to see this. You've seen something like this happen before. You can see what's going to happen next. But other people may not have that. And all you can do is tell them your experience and hope that they listen. But it's not so easy to isolate a thing and point it out to them and then turn it around and look at it from different angles, because you have to have the ability to see it. So it's not a tangible thing, and that makes it harder.

JESSICA: [Inaudible] names for these patterns.

JOHN: Jumping back to something you were saying earlier, Rein, about having those survival rules that you build up at some point in your life and then continue applying when they're no longer relevant.
Aside from just spending a lot of time in therapy, are there any other techniques for teasing those out in less than hours and hours of deep personal work?

[Laughter]

REIN: I mean, this comes from family [inaudible]. So I think that if you can do this yourself, it's because you can be mindful enough to understand something about yourself that has been hidden from you for maybe most of your life. That is one of the reasons I think that therapists can be so useful: they can often see things about you that are hidden from yourself. But it is possible to do a sort of assessment of yourself and ask, "Is the way I'm being right now really the best way for me to adapt to this situation I'm in?"

JOHN: Yeah, and I think a lot of that gets into being aware of the feelings you're having in response to a situation. And there may be multiple deep layers that you have to dig through to get to what that core feeling is that's actually driving your behavior. So you can say, "Oh, I'm just frustrated [inaudible] in this situation." But spend the time to dig down. "No, it's not frustration. Oh wait, no, I'm actually afraid of losing my status," for example. That's the actual core.

REIN: It's interesting that you said digging down, because Virginia Satir has a thing called the Personal Iceberg Metaphor, where behavior is at the top, and the water line is how we cope with our environment. And below that, we have our feelings, and our feelings about our feelings. And then our perceptions, then our expectations, then our desires and yearnings, and then our core self.

ASTRID: I feel like what you're saying, John, is kind of what we started out talking about, which is how do you gain more EQ.

JOHN: Yeah, and I think, for me at least, my understanding of that is it starts with you. You have to understand yourself before you start understanding other people, and dig into your own reactions to things and those core emotions. At least that's been my path, certainly: going inward and learning those things about myself, and then realizing that this is actually a pretty universal experience, having this feeling in response to this situation. And suddenly now, I understand this part of what's motivating other people in similar situations.

CHANTE: See, I told you it's an esoteric kind of day.

[Laughter]

CHANTE: The moon does something to us. The inward exploration is highly undervalued. That's something that I've learned personally and professionally. I get so excited when I hear conversations circle back to this. In some of the work that we do, for instance at my organization, The Darkest Horse, we focus on health, wellbeing, and human potential, with the expectation and intention of getting back to this very thing about going inward and being mindful. The more we can have that in our interpersonal everyday lives, the more we show up doing that at work, because work is a part of our ecosystem and who we are every day. So, I just love that we're landing here.

REIN: Okay. Good talk, everyone.

JESSICA: Maybe we should go to reflections.

JOHN: Rein, the way you were talking about blame was really interesting to me. I've been super lucky in the organization I've been in for quite a while. It's been very psychologically safe, and we've handled blame in a very healthy way like you were talking about, where you can talk about something an individual did without saying it's all their fault.
And it's also interesting to hear you talk about other ways that this could be treated within an organization, because that's not something I've had experience with. And so in some sense, I'm kind of naive, because I ended up in a somewhat ideal situation as far as how this is handled. But it's also good to know about what the other dysfunctions can be, so I can watch out for them if they do start appearing. And I also liked the way that you specifically are approaching this, as I was saying earlier, about not blaming people for blaming people, which is a great sort of feeding back of the system into itself, in a way that I really enjoy.

REIN: Yeah. The interesting thing about that is that under this model, blaming someone inappropriately is a norm violation. And so it could be appropriate to blame them. And so you might ask, "Well, why don't you then?" I think you have to get back to what their reasons are for doing what they're doing. And if you believe that their reasons for doing what they're doing are to help others, then you can short circuit even this potentially justified instance of blame to say, "Well, you're engaging in blaming behaviors, but I think it's because you're trying to help. So what if you did this thing instead, which I think would help even more?"

CHANTE: All this makes me think about one of my favorite books, and I'm going to take it back to being a yogi. I'm going to make a book recommendation here. Are you all familiar with The Four Agreements? No? The Four Agreements are: be impeccable with your word, don't take anything personally, don't make assumptions, and lastly, always do your best. And so when you're thinking about blame, or this particular conversation we're having as it pertains to being on a technical or a software product team, I would highly recommend reading that book. It's a very easy read. It's really cheap. There are lots of activities you can find, and journaling companions and things like that, and I actually incorporate that into my work. It started with my yoga practice. My teacher very wisely suggested, "If you know nothing else, if you walk away with nothing else, I want you to always remember this," and gave me a card for my wallet, and I carry it around. It has led to lots of things, but I just think it's appropriate, and it made me think about that throughout this conversation.

JESSICA: Astrid, there's a point where you said something about how software teams are sometimes better able to push back against the system, because they can point to something tangibly not working. That feeds into my hope that as we learn more about complex systems, and as our software gets more and more interconnected and connected to us, maybe we can notice these patterns that happen in systems, these unhealthy patterns, in software in a way that's more tangible, and at the same time see those in humans and give them names. For instance, here's my bingo card of the day: Gregory Bateson named the double bind in family therapy. He kind of helped initiate family therapy as a thing by pointing out that you really can't psychoanalyze a human in isolation; that's not a thing. And then when you can recognize the double bind, you can be like, "Oh, you're expected to do this and this, but you can't possibly complete both, so you're never going to be happy." And that's something we can push back against. So I hope the software industry can actually help with that in social systems, indirectly.

ASTRID: Yeah, I agree, Jessica.
I think there are opportunities there to operationalize a lot of things that are kind of hidden in a social system, because we don't really think of them as systems. We just think of them as how we live. But this whole conversation also had me thinking about how important it is to have this type of conversation, in some form or fashion, with your team. Because to your point, Rein, about people having responses based off of their past and how they've seen things: we didn't talk about this, but there are also the different cultural views that people may hold from wherever they come from, or the cultures that they were raised in, that may be very different from other team members'. And you don't know that until there's some misunderstanding, or you realize there's a misunderstanding. And I think we don't do enough to just get everybody on the same page of understanding how we expect to communicate and what that means before we start doing it. We tend to jump in and start working, assuming that everybody gets it. And then it makes it much harder to recognize that you may not get it for a while. So I'm hoping that there's a little more of that happening. I've noticed that some smaller companies are trying to do that. They start off with, "This is what our team values, and this is how we express those values, and this is the process for you to be able to speak up and say something," which I think is really positive.

JESSICA: Can I have a bonus? There's one thing that I feel like I want to say to our listeners, because on Twitter the other day, I tweeted something about [inaudible] a team that's based on mutual learning, and that's how we grow together. And someone responded with, "I have never been part of a team like this or seen one." And the way we talk today, among the people that we happen to be around, this is completely normal. Like, aiming for blameless incident reviews, and beyond blameless incident reviews, is ordinary for a software team. But we are kind of at an edge, and this is not the common case. And if your team is not like this, and none of the teams at your company are like this, and you don't know anywhere that you could work that is like this, this is normal. It's not you. If you really want it, then ironically it [inaudible] tech skills. It's something [inaudible] so that you can get a job on a team that is like this, because no, as an individual, it's not your job to change your whole company.

REIN: Yeah. Actually, that makes me reflect on what I said to John about how you cope with a manager who blames. And I basically said, "Well, you don't. Good luck." But I think that there's more to be said. I think the way that you cope with that is through solidarity with your coworkers. You can, to some extent, build the sort of team you want by treating your peers the way they deserve to be treated and the way you want to be treated.

JESSICA: That's true. Your manager is a common enemy. That totally works.

REIN: It's not that your manager is a common enemy, but you can show your manager a different way to be. Because of the power dynamic, this is going to be more difficult, and it's going to be much more of a struggle for you, because systemic forces will make the change you're trying to achieve much more difficult. So you're going to have to work harder, and it's not going to be as likely to succeed, but it's a thing you can try to do. And at the very least, you can build a better way to work with your peers.
JESSICA: Oh dude, I was totally part of a team like that once, where the rest of the system totally didn't get it. But we had a good solid team, and we had some leadership within the team that was teaching us XP and pairing, and we had a team room, and so we built a little solid knot. And that was awesome, because we all learned stuff and we all got better jobs after that.

REIN: Your manager might be your boss, but you're all still humans and you are all still worthy of respect. And you can talk to your manager on a human level, even if they're your boss.

JESSICA: Thank you for the bonus reflection. We are really lucky, and we're really lucky to get to have this conversation today. I'm happy about this. Thank you, Astrid and Chante, and John and Rein.

CHANTE: Thank you.

ASTRID: Thank you all.