JANELLE: Hi, I’m Janelle Klein. Welcome to Episode 31 of Greater Than Code and I’m here with my co-host, Sam Livingston-Gray. SAM: Hello, everybody. Our guest today is here, among other things, to school us in the art of doing Agile Retrospectives. We apparently got a few things wrong on a previous episode and she is here to make it right. Diana literally co-wrote the book on Agile Retrospectives. It’s called Agile Retrospectives: Making Good Teams Great and it’s published by the Pragmatic Programmers. In lieu of the usual bio that we do, I want to start with a personal introduction. I first met Diana in the summer of 2006 when I took a class in extreme programming at Portland State University. That was the first and almost certainly the last time that PSU ever offered a class on XP and I was so lucky because they brought in people like Jim Shore, Arlo Belshee, Ward Cunningham and Diana Larsen. Diana came in on the last day to facilitate a retro for the class, and when I got home that evening and my partner asked me, “How was your day?” I just told her for a good long while all the great stuff that we had done that day. At the time, my partner did a lot of presentations and some group facilitation for her work and she said, “She must’ve been good because you hate group exercises.” I stopped and I said, “Yeah. She’s good. She’s that good,” so there you go. Welcome to the show, Diana. DIANA: Thank you. I hope I haven’t degraded over the last 10 years but you’ll never know. SAM: Well, I was working on the Gilded Rose Kata recently so I think that it’s more like fine wine or brie: your quality improves over time. DIANA: All right. Thank you so much. That was great. Andrew Black was the professor at PSU that got that class going and he invited all of us. That was a really cool thing. A great way to end the summer. SAM: Diana, why are we here?
DIANA: Well, it’s difficult for me because I am so devoted to the idea that software work, in particular, is learning work. Years ago, Peter Drucker introduced the idea of knowledge work and so on. What I find that companies and people often do with that is they think of knowledge as something that you archive. It’s not active. It’s something you get and then you hold onto. I don’t think that fits my experience with software work and working with people who are doing software development, whether they’re programming or testing or being the product people or whatever they’re doing. It seems like it’s a very intense learning situation. When I hear people talk about retrospectives as we made this list and then we made that list and then nothing happened, it makes me a little crazy. [Laughter] SAM: Understandably. DIANA: Because there’s no learning happening there. It’s just listing that is happening there. Many people, not just in software, have this idea that if I make a list about things, then things will change and learning has happened by virtue of making the list. That’s just not how it works. Having worked with humans a lot over the last several decades, I’m very clear that’s not what I have in mind when I talk about a retrospective, which is really getting clear, getting a common sense of what happened in the past, what impact that’s having on us now, and how we want to either carry something forward or shift something in the future: what experiment does that suggest to us, what hypothesis do we have about how we might get better, how are we going to do something to test that hypothesis, and then how do we learn from the outcomes. You need all those pieces to have an effective retrospective, and if we’re just listing or we’re just having fun games, like the kind of activities and exercises that used to drive you crazy, that doesn’t do the trick.
You really need to stay focused on the work and you need to stay focused on learning about the work and how we’re doing the work together and all those kinds of things. JANELLE: You haven’t said anything that I’ve disagreed with at all. I’m listening and going, “Yeah,” so I’m in general agreement, and frustrated by a lot of the similar things that I see going on in the industry with respect to retrospectives and making lists and things being ineffective. You clearly have a very structured, deliberate approach to how to make retrospectives significantly more effective. Yet, by your comments, I got the impression that you had a lot of dissonant ideas, but I don’t feel like I’m hearing a lot of dissonance right now, and I’m finding myself very much in agreement with you. I’d like to hear a little more about, specifically, structure-wise, what kinds of things were different, or if there’s anything specific. I feel like one of the things is the Agile Fluency model — I guess you didn’t bring it up on the show but I will start reading — DIANA: I will. JANELLE: — And one of the first steps in there was focusing on value, and your specific comment on Twitter was with respect to focusing on waste, so I’m guessing, with respect to dissonance, that might be something that we see a little bit differently. That might be fun to talk about. DIANA: Okay. All right. Also, I’d like to hear again about the kind of retrospective that both of you are doing with your work. Maybe there’s an opportunity to do a little helpful critique in there. Always focusing on the positive: where I see you being effective and where I see you maybe going off the rails a little bit. Yes, we can do both. Where it starts from is that teams of people trying to accomplish something together are, in general, complex adaptive systems.
One of the things that we know about what works in complex adaptive human systems is an ongoing repetitive cycle of understanding the current state of things, understanding what the implications of that are for us, and then creating some hypothesis and plan for action and evaluation out of that. Now, in the retrospective framework that Esther and I put together — Esther Derby, that’s the co-author of Agile Retrospectives — in the framework that we propose, we start with some facilitation, because people in groups don’t naturally know how to learn about something, think about something and make a decision about something without some coaching and help. Our framework starts with set the stage. We get people in the room, get their heads in the right place to begin doing this work together. That may be as simple as a very quick check in. It may also include going over what the agenda is for the time we’re going to spend together. It could include a lot of things but it’s usually pretty fast. I almost always — I never say always — but almost always, I actually include a focus for that. I think one of the places where people get tangled up is if every retrospective is about continuous improvement and that’s all, then every retrospective starts looking the same and they become boring. I look for some thinner slice: what is it that we’ve just experienced that we might want to focus on? It might be increasing our adherence to our agreements about engineering practices. It might be improving our connection with our customer. It could be any number of things, but it’s some thinner slice rather than just “we’re going to continuously improve,” because that opens things up way too much for a meeting that is generally an hour or an hour and a half. SAM: How do you choose what to focus on? DIANA: Generally, I talk to people on the team to see what’s been bugging them.
Maybe there is an incident that happened just in the last iteration or, if they’re doing Kanban, in the last chunk of time that we are looking at in this retrospective, there may be something that has come up. What I have discovered is that if that turns out not to be the most important thing to talk about, the team will shift very quickly. But we still end up with a thinner slice so either way, it works. We just get people’s heads in the room, ready to do the work. Then we do what we called ‘gather data, generate insights, decide what to do.’ Other folks have called that ‘what, so what, now what?’ There are various ways of talking about that, but that’s really the meat of the retrospective and we can dive deeper into what each of those parts are if you like. Then we do another sort of facilitation wrapper at the end where we say, “Okay, we’re going to close the retrospective now. What is it we want to remember? What are our agreements about action?” That’s also a good time to do just a general round of appreciation and to take a short piece of the time to continuously improve our retrospectives: what in this retrospective do we want to keep carrying forward? What happened here that we want to maybe adjust for next time, and is there feedback for the person who facilitated it? That is just the wrap, but ‘arrange-act-assert’ is also an interesting way of thinking about that. Lots of people have come up with a way of talking about that. The OODA Loop is another one. Build-measure-learn is another one. Plan-do-check-act is another one. Lots of folks have noticed that really assessing what just happened before you jump into analyzing what happened is a really good thing, particularly in groups, because otherwise we end up in this place where I’m talking in terms of my own experience and you’re talking in terms of your own experience. In spite of what we may think, those may not be matching experiences.
Until I understand what happened for you in this last iteration and you understand what happened for me and all of our other teammates, we don’t really have the full picture, so you really want to spend time getting that full picture. That’s the part that’s most often left out. SAM: Yeah. One thing that really jumps out at me still about that retrospective that you did for our class in 2006 was that, maybe not immediately, but one of the first things that we did was we went and built a timeline of the thing that we’d been through as a class and we put up post-its about our feelings: “I was feeling anxious. I was feeling satisfied.” That really still jumps out at me as something that was memorable and effective. We don’t do a lot of that in the other retrospectives that I’ve been in, even the good ones. DIANA: There are two parts to any event. There’s the sort of factual nature of the event: here is our effort data, here’s how much we got done. Then there’s how we, as humans, respond to that data, that fact. Both of those things are useful information, because if our effort data makes us embarrassed or guilty, that has one set of impacts on us. If our effort data makes us feel proud, that has another impact, so it’s useful to call that out. Jim and Michelle McCarthy, in their Core Protocols, start every meeting with a check in of MAD, SAD, GLAD, AFRAID. We’re humans and humans have an emotional component, and if we ignore that or try to sweep it under the rug, we’re ignoring a big part of our motivational drivers, so we really do want to get those out on the table as well. JANELLE: With respect to emotional drivers, you’ve also got the factor that the things we measure and the things we define as good or best practice drive a lot of those emotions.
For example, if disciplined TDD is an expectation socially on the team, failure to comply with that expectation, independent of whether the things that you’re doing are helpful or not, makes people feel good or bad or whatever. Same thing with code coverage metrics, where people feel bad about the little progress bar not being in the right place, independent of whether the activities that they’re doing are actually helpful or not. DIANA: Right. I hear a lot of stories of things that go on in teams. Recently, I was hearing a story about a team that got a new team member who came from a culture that the original team members weren’t as familiar with, and people persisted in mispronouncing the new person’s name, and for one of the existing team members, that was very troublesome. He had worked very hard to figure out how to pronounce the new person’s name and all the names of the people who were already there, which was a little easier, and he was troubled by the fact that his teammates weren’t doing this. That got in the way of working together. It got in the way of pairing. It got in the way of all kinds of practices that they were trying to do together and making agreements. It can be something that some folks think is small that can really trigger that emotional response. I don’t know if the emotional response was triggered in the person whose name was being mispronounced, but it sure as heck was in the person who was noticing the mispronunciation. Teams can get tripped up by really interesting things that may seemingly be small but, if there is an emotional valence to it, can have an outsized impact. JANELLE: Yeah, I can think of a number of uncomfortable emotional situations on teams that I’ve been on. I mean, I can totally respect that situation. It’s disrespectful, I think, in the context of a team, to not take the time to learn how to pronounce some of these names. It’s just kind of an overt statement of ‘I don’t care’.
“I don’t care enough about you to learn how to say your name.” DIANA: Yeah, you would think. But you know, how many people do you know who didn’t come from mainstream American culture into a team, whose names get shortened or, for whatever reason, whose full presence, their full name, isn’t honored? I mean, that just happens. JANELLE: You know, I’ve also seen the other side of that: embarrassment, with people not knowing how to say a name and not wanting to try and feel embarrassed. Then that becomes its own thing. DIANA: Right. Well, it happens. Of course, we see a lot of things online about what happens with women when people are throwing around the term ‘guys’, like ‘all you guys on the team. You guys.’ SAM: For those who are listening at home, I have a little printed card. I have a couple of them. One says, “Glitchy audio.” One says, “There’s a mic rubbing on fabric,” and the one that I hold up probably the most often is one that says, “You guys,” that I hold up whenever somebody says, “You guys,” to remind them that we’re not all guys here. DIANA: Right, yeah. I mean, there are just lots of things like that, and there’s our response to the work, too. We have an emotional response to the work as well, so feelings are facts and we need to include them when we are looking at the full body of factual data about what went on during the period of time that our retrospective is looking at. What we noticed is that people with an engineer’s mindset or an artist’s mindset are problem solvers. They tend to want to jump right into the middle of the retrospective and just start with analyzing and problem solving before they’ve really built that clear picture of what actually happened. SAM: Totally guilty. DIANA: Well, this will make you feel a little better. Very often, business people want to jump into the decide-what-to-do portion — [Laughter] DIANA: — Before they’ve done either of the other two, before they’ve done the collection of data or the analysis.
Let’s just move into action. That’s what I’m always alert for. When I hear people describing their retrospectives: are they really giving attention to all three of those parts? It just makes it a better retrospective. You end up with better hypotheses, you end up with better actions, you end up with greater likelihood of follow-through and so on, which, by the way, is the biggest complaint I hear about retrospectives no matter where I go: we hold this retrospective meeting and then nothing happens. We don’t follow through on the action plans. When I hear that, I say, “Well then, stop holding retrospectives, because that sounds like a lot of waste. Or figure out how to do your retrospectives in such a way that you will follow through on your action plan.” Those really are the two choices there. I’m not for people just sitting in a meeting because they were scheduled to have a meeting. That seems kind of silly to me. JANELLE: The other thing I see a lot of is retrospectives where people have an action plan but the action plan doesn’t actually solve the problem, largely because of dysfunction in the focusing step that you mentioned. DIANA: Right. We may have found a problem to solve but it’s not the right problem or it’s not really going to get us the most benefit. On the other hand, if people have not been doing retrospectives and are just beginning to do them, I don’t care how small a problem they pick to work on. If it’s something they think they can accomplish and get some traction on and get some feedback about how it went, that is like building a team muscle. You may start with smaller weights and get successful with the smaller weights before the team starts taking on the big weights, like influencing other people in the organization.
I always counsel to start with things that are well within the team’s control, something they can actually make happen and that they can actually analyze without having to have reference to a lot of other outside input. As they build that learning and improvement muscle, they will be able to take on bigger and bigger things that are more complex or more difficult, that require more organizational influence or all those other kinds of things. You really want to stay away from starting out with actions where the sentence begins with ‘they should.’ SAM: I hate the word ‘should.’ DIANA: But we could try this, right? I want to keep teams on ‘we could,’ because I think control is really important, particularly in the beginning. JANELLE: Specifically, the thing I’m hung up on is kind of the focusing step aspects. I’ve just seen so many patterns over and over where a team will pick some problem that is largely like, “I think it should be implemented differently. It’s not scalable,” or some arbitrary kind of thing, or “We should fix this because of X. It feels wrong to me,” kinds of things. Unless the team has the discipline to back up into the questioning with respect to value and usefulness (will it make us faster or accomplish anything?), a lot of these ideas that engineers come up with to improve things don’t actually make things better at all. They’re just best practice-y kinds of things, or an ideal vision and what is in front of us being different from that ideal vision. Then these things pop out as problems and often get a lot of attention, and you end up with actionable improvements coming out of these retrospectives. But because we have no good anchors for defining whether something is valuable, whether something is better (and ‘better’ is sort of this fuzzy notion), it’s really easy, regardless of what experiments you run, to have a lot of confirmation bias effect. What kinds of things do you do to mitigate that?
DIANA: Well, I suspect that possibly what I reacted to in the other episode was the idea of a list of actions. Teams have a lot of work to do, so if you’re using more of a throughput Kanban flow and you’re doing your retrospectives, or if you’re an XP team and you’re doing one-week or two-week iterations, taking on more than one or two (if they are small) improvement actions sets you up for failure. Even if you do generate that list, there needs to be some conversation about winnowing that list down. Sometimes a very small action can have an outsized effect. We call that the ‘Butterfly Effect Pattern’: looking for what’s the smallest thing that we could shift in this next, say, iteration that we believe would give us the biggest beneficial impact. If you want to create an improvement list backlog or something, that’s fine, but you’re only going to pull one thing, or possibly two things that people are really enthusiastic about, to do the learning and experimentation on in the next iteration. Otherwise, with a big list, [inaudible] any of it done because you’ve got work as well and that’s important. There’s that. The other thing that we were talking about, the focusing, the other thing that helps is: our improvement action this next time is going to be linked to our relationship with our customer or [inaudible] rotation. We’re going to experiment with a new pairing rotation or whatever it might be. I’m kind of allergic to the term best practices because I think there are good practices that work for some people in some instances and very often, those don’t translate into whatever your instance is. They’re worthwhile things to try, but to try with a curious and open mind: is this going to work for us in the way that it worked for somebody else? Then I think about companies who are just trying to adopt, say, the Spotify model, [inaudible] without really noticing that Spotify is a very different kind of business than they have, etcetera, etcetera, anyway.
JANELLE: I made a comment about making big lists of to-do items ending up with these new management problems of managing things and [inaudible]. In terms of the typical practice of filling up a big backlog of technical debt items, I would totally agree on that aspect of it. At the same time, with respect to being able to make intelligent improvement decisions, what I found from practice is that we need considerably more data to improve the quality of decisions, just because of the sheer amount of complexity and variation and interaction that goes on in the project itself. As opposed to creating a big list and then trying to make intelligent decisions just based on instinct, reading through these things and winnowing down what’s important, what we’re basically doing is creating a graph-structured database of past historical experiences and then taking all of our knowledge and continually grooming it in a software developer insight system. Then, when we go into the retrospective meeting, we’re not just talking about the last week. We’re looking at the patterns across the last six months or year, the types of trends that have happened on our [inaudible] for improvements, or choosing a focus topic based on complex historical patterns that we’ve seen emerge. I feel like that level of rigor is needed to be able to intelligently choose a focus. DIANA: Well, that’s certainly laudable. I don’t see a lot of teams putting that much into it. I still would say, “Take them on one at a time.” As you address those issues, you want to make sure that you are taking on something manageable. In the Five Rules of Accelerated Learning book that I wrote with my son, Willem, we talk about focusing on the flow of learning and we talk about bite-sized pieces. What is it that your team can easily take in, consume, digest before they take on the next thing?
With what you’re talking about, really looking at that history, for one thing, I think you’re looking at a longer retrospective than something that you would use to just look at a couple of weeks’ worth of data. Retrospective lengths and sizes and numbers of people involved: all of those are kind of complexity factors. But I think the kind of analysis that you’re talking about doing is super. That would really help to focus you in the right direction for where we can get the most beneficial impact. People talk about what’s most important for us to do all the time, and I try to [inaudible] beneficial impact, because importance is kind of vague and fuzzy, but if we really want to get the goodness out of this, let’s stay focused there. I like what our colleague, Woody Zuill, talked about when they were creating mob programming. He talked about every day just figuring out how can we turn up the good. That’s what we’re looking to do: make incremental improvements toward more benefit to the team, to the product that we’re creating, and to our organization as we go along. I think that sounds great and I don’t see a lot of teams doing that. JANELLE: Yeah, I don’t either. [Laughter] JANELLE: At the same time, for context, we started inventing tooling when our project was going off the rails despite doing all the best practice things, and it was a ten-year-old project.
The things that we were doing weren’t working to solve our problems, so we resorted back to gathering data as a method to figure out where our problems were and, pretty much as you were saying with respect to best practices, we basically abandoned all best practice wisdom at that point and said, “Okay, let’s see what the data says and start [inaudible] good software.” One of the other things is that, as opposed to doing kind of bi-weekly retrospectives, we do per-developer, per-task reflection, before and after each individual task, as a pair. Because we have a lot of the thinking and reflection at such a fine [inaudible], in the two-week reflection, where we’re sort of looking at culminating things as a group, the dynamics of that discussion change significantly, because we’ve already got a good bit of context for reflecting on the things that happened over the last two weeks. I think that’s another significant reason for — DIANA: Oh, that’s awesome. That is just awesome, Janelle. The idea of a maintained culture of reflection and inquiry as just a part of your ongoing work has to stand you in really good stead. That has to benefit you because it makes such a difference to be in that space. That really is continuous learning and I congratulate you for it very much. That’s awesome. JANELLE: Thank you. One other thing, specifically with respect to value versus waste: I wanted to get your thoughts on what value is in the context of software development. How do we define that as a team? DIANA: That’s interesting. That’s a conversation I was just having about an hour and a half ago with somebody else. I think that’s one of the toughest nuts, actually, in software development. I think that’s a thing we really have to work on, where product owners in particular, or the product management folks, are tasked with prioritizing according to value but nobody has figured out what’s valuable in their organization. There are some rough ideas about that.
Do we prioritize things that are going to increase our revenue over the short haul? Are we looking at things that are going to protect our current revenues? Are we looking at things that are going to reduce our current costs? Are we looking at things that are going to avoid costs in the future? Are we completely focused on what our customer tells us is most valuable to them? [inaudible] or is that some particular feature? That’s a very complex set of stuff to look at and I don’t see enough teams and organizations doing the work to sort through that whole pile of potential value to say, “For us right now, which of these things is going to give us the most beneficial impact, either for the team or for the organization?” Whether we’re IT or whether we are a company that is producing a product for sale in the marketplace, all of those things go together and we don’t spend enough time looking at that, for sure. It could be any of that. JANELLE: That’s interesting. We spend a lot of time focusing on measuring waste, as anti-value, just because it is something that is easier to define. I think value has a lot of challenges when it comes to defining things. We’ve got a feature that we want to add to the product but, generally speaking, with our product sales model, it’s hard to isolate the revenue from a feature that you’re putting in the product. There are “how is this feature going to ultimately affect customer behavior” kinds of things when it comes to trying to associate these things with money. One of the things that we started focusing on was optimizing for joy: our customers having a joyful experience with our products matters and is [inaudible] to us beyond a shift in revenue. It’s part of our purpose as an organization. DIANA: Well, it’s a much surer path than many other [inaudible]. Yeah, I like that a lot.
SAM: I want to go back to something you said a little while ago about taking things in small bite-sized pieces and picking the things that maybe will give us the biggest bang for our buck. I guess the entire idea of doing Agile development is that you focus on the thing that’s going to deliver the biggest value to your customer first for the smallest amount of effort. But when you said that, I saw it in terms of the way I do refactoring, and the way that I approach refactoring is different from the way a lot of people talk about what they think of as that word. What I like to do is start with the tiny things, like renaming variables until they make a little bit more sense to me, or inserting white space between sections: little tiny things where I’m cleaning up the tiny mess, removing a little bit of noise, so I can free up the mental capacity to see the next smallest mess, and so on. That sounds to me a lot like what you’re saying about taking things in small pieces and focusing on just one thing at a time. Does that match? DIANA: Absolutely. I haven’t programmed for many, many, many, many years, but I recently went to a mob programming workshop. I was part of a mob and we were programming together. I had the experience that I have had other times when I observed our colleague, Jim Shore, doing some kind of public [inaudible] or those kinds of things, where you can actually see the code get more beautiful: adding in those white spaces, making the names more consistent, all of those kinds of things. All those small things that you’re talking about really do make a difference. I have just developed the belief in taking the time with that detail: I can make this small improvement and that small improvement, and that ends up having a big impact. I totally agree. That’s it right there. SAM: All right. I’d like to take this time to give a quick shout out to one of the new supporters of our show.
We would like to mention as noteworthy Tim Gaudette, who describes himself on his Twitter bio as a web developer and cynical idealist. I’m not quite sure how that works but I like it. Anyway, you can check him out on Twitter at @IAmTJG. If you would like to support the show, you can do so at Patreon.com/GreaterThanCode and a donation at any level will get you into our Slack community, where we have a bunch of wonderful people talking about interesting things. We also wanted to talk on this episode about Agile Fluency. I think on the previous show, Episode 28, Rein had been talking about a Capability Maturity Model of some sort and then I jumped in with Agile Fluency because I love the model, and you were clear in mentioning that it’s not a maturity model, so I don’t know what that means but I’m curious to find out. But I’d really like to jump in first with just this idea of what fluency is and what the sort of multilevel model of fluency is. I’ve played with that a bit myself. I ran across it with your son, Willem, and I absolutely love it. Can you start by introducing that to our listeners? DIANA: Jim Shore and I had been working together on a number of different projects. We were looking for the next thing and taking what we had been working on to the next level. One of the things was a workshop that we were doing and we were having some difficulties. We were getting really good feedback on the workshop, but we were having difficulty sort of sequencing the content. We just couldn’t find the best way of sequencing the content. As you do when you’re dealing with a hard problem, we started talking about something else entirely — SAM: You think it’s about something else but really it’s not. [Laughter] DIANA: Yeah, but really, it turned out it wasn’t, because both of us had just been engaging with Willem around the idea of language fluency and what was going on there with the nonprofit that they had, called Language Hunters.
The epiphany was: what if we applied that idea of fluency to the things we were trying to teach about software development? That was kind of the genesis. It definitely was a feeder idea, and the piece of that is that fluency is what you can do routinely. It’s the thing you automatically go to. It’s like if you are a pianist: it’s the simple piece you would always go back to, that you can count on, that you can play all the way through. Generally, if we use the language example, we try to match fluency to need. If I’m taking a vacation in a foreign land where they speak a language that is not my native language, but I’m going to be there for a week, and I need to be able to get on the bus and I need to maybe understand how much things are going to cost, I need one kind of fluency. I need to be able to ask how much this costs with routine [inaudible]. I don’t want to fumble around with that, but I probably don’t need to be able to open a store or any of those kinds of things. There’s a certain level of fluency that is my sweet spot for that use. On the other hand, if I’m going to stay there for a few months and I plan to attend social events and things, I’ll need to be able to make small talk and chit chat and maybe tell a story about what I did yesterday; that’s another fluency level that I would need. My friend, Steve, tells me a story about a time when he thought he was fluent in a foreign language, but he was riding a bike and got stopped by the police because, as it turned out in the end, he was riding in a place he wasn’t supposed to be riding. He wasn’t going too fast or anything else, but he very quickly found the limits of his fluency when the police were in his face shaking their fingers at him. Jim and I formulated this idea of software development fluency being what can you do, and what will you do, as a team, even when you are under pressure, even when someone comes in and tells you that you need to be able to deliver this tomorrow. What are the things that you fall back on?
What is kind of your native tongue of software development in that situation? As we began working on it and talking to different people about it and about what their experiences were, each time we got to a new idea about what this model should look like, we would take it out and test it with folks who were really doing software development with real teams and being successful at it. We came up with this concept that we could identify, at the time, four sorts of proficiency: sets of fluent proficiencies that seemed to go together, and that's how we made up the model. While each set of proficiencies builds on the previous one, you don't always need to build on the previous one. It just depends on your need, and that's why we don't call it a maturity model. A maturity model tends to imply that you want to get as mature as you can possibly get, and what we're saying with the Agile Fluency model is that you only need to get as fluent as matches the need that you have for fluency, so that's why we're sensitive about the term maturity model. SAM: Fair enough. Thank you. That's a useful clarification. DIANA: Yeah, it has been a very useful model to us over time. First, we just wrote an article, and Martin Fowler very kindly offered to publish it, and then we thought, "Well, that's interesting and that's done," but then people kept asking us questions, particularly about how to operationalize it in their organizations. So a couple of years ago, we formed something called the Agile Fluency Project and brought in a few more people to help us with that, and we're looking at all the ways in which the model can be used inside organizations to help them find their appropriate level of fluency, make sure they're not over-investing in some things and under-investing in others, and so on, getting the appropriate level of investment and training and attention and all those kinds of things that a team might need.
SAM: But I'd like to take another moment to mention that, as we announced on a previous show, we are now accepting submissions for blog posts. If you would like to write something and have us put it up on our blog, where you can maybe reach a little bit wider audience than you would on your own blog, feel free to email Mandy@GreaterThanCode.com and pitch your ideas and submissions. I did want to mention one thing that I really got out of the fluency model that came out of Language Hunters. Prior to encountering this model, I would see people talking about how, "Oh, I'm fluent in Spanish," and I would always see it as this unattainable goal of, "Wow, you've been living in this culture for a couple of years and you can carry on a conversation with Charlie Rose in that language." What was really transformative to me was the idea that fluency at a low level of proficiency is accessible. It gives me somewhere to start, which matters to somebody with ADD like me, who often has trouble digesting these things all at once. I just wanted to mention that I found that really appealing about the fluency model in general, and I was thrilled to see the way that you adapted it to Agile adoption. DIANA: Oh, thank you. I think it does link to the things we were talking about earlier about the sense of inquiry and learning and the things that we were saying about value, because as we put the model together, one of the things that we discovered is that the great thing about creating models is that at a certain point they start teaching you back. SAM: Mostly in the ways that they break down, right? DIANA: Yeah, and it's been a really fascinating thing for me over time as I've contributed different models to the Agile world, but in this case in particular, what we started noticing was that the sets of fluent proficiencies we were identifying as part of this model tended to be very much in relation to how the team was connecting with value.
The first fluency that we identified was this idea of focusing on value, which means the team can be counted on. Assuming they're getting good information from the product people and the business side, the team can be counted on to always work on the next most valuable thing: whatever they are producing is going to be the next most valuable thing from the backlog or wherever they're getting their work assignments. The next part of what we call the path through fluency, or the bus ride — sometimes we call them zones — is delivering value, which means not only is the team producing the next most valuable thing, but they are delivering it on a certain cadence and they can be counted on to deliver on that cadence. Then the third one — I won't keep going through the whole thing — is what we call optimizing value, which means not only can the team respond to the customer and the business and deliver the next most valuable thing on a certain cadence, but they can also begin to contribute ideas of value that the customer may have or need, and they can anticipate what will be valuable to their customer because they understand them so well. That has led to my inquiry into the ideas around what is valuable that we were talking about earlier: how do we talk about that, and how do we feed that information into the team so that they can continue to learn about value in their organization and what they need to be responding to? I just think that's an interesting link back to our earlier discussion about what really is value. We didn't necessarily start out wanting to build the fluency model in that way, but it ended up telling us it needed to go in that way. We are now actually working on some ideas around whether there is a fluency progression for people in product management or product ownership, because that role seems to be really important. SAM: Yeah, interesting. We're hoping to see that.
DIANA: Yeah, fluency models are really fun to create, by the way. [Laughter] SAM: I'll keep that in mind. Thank you. JANELLE: We usually finish the show with a few comments and reflections on things that stood out during the show that you'll take with you, maybe something somebody said. Probably the main thing that stuck with me was thinking about that focusing step, and how much effort goes into those things versus the benefits of focusing on things from the recent past. It really got me thinking, especially about taking human emotions into consideration in what you focus on. I'm thinking about that a lot now, so thank you, Diana. DIANA: Oh, you're welcome. I'll just throw mine in here. I was really intrigued by the things you were talking about in terms of the way your team is managing and learning and collecting data and so on. That's an impressive story. I'd like to learn more about how and where you do that at some point in the future. And your questions about value: I don't always get to hear from other people who are as passionate about a particular little arcane bit of whatever is involved in Agile like that, so getting that question and being able to really talk about those issues around value, that's clarifying to me, actually. SAM: I mentioned earlier the idea of small, bite-sized pieces, and I talked about how I saw that in terms of refactoring, but really what stood out to me throughout this entire podcast is that we basically just keep talking about different forms of feedback and different cycles of feedback, which, as I see it, is the fundamental idea in all of Agile development.
It's interesting to see how that can be applied in various different contexts, and again, it was a really useful reminder to me that even though sometimes I really want to start with the big boom, maybe sometimes it's better to take tiny steps, because if you don't know how to take the bigger step, at least you can take a smaller one and get yourself that much closer. Thank you for that reminder. DIANA: You're welcome. It is my pleasure. SAM: Before we go, I want to mention that we are, as mentioned, listener-supported, but we would totally love to have a company sponsoring the show as well, if there's a good fit for us. If you are somebody in a company that might be up for that, please talk to folks there and get in touch with Mandy. JANELLE: Thank you, Diana, for joining us. I had a lot of fun. DIANA: I had fun too. This episode was brought to you by the panelists and Patrons of >Code. To pledge your support and to join our awesome Slack community, visit patreon.com/greaterthancode. Managed and produced by @therubyrep of DevReps, LLC.