NOEL: Hello and welcome to Episode 63 of the Tech Done Right podcast, Table XI's podcast about building better software, careers, companies, and communities. I'm Noel Rappin. Our guest this week is Chad Pytel. Chad is the CEO of thoughtbot, which is a design and development firm known in the Ruby world for its support of open source projects like paperclip and shoulda. Chad and I talk about how to make short consulting projects work, the importance of hiring, why thoughtbot makes their internal guides public, and how they continue to be able to support open source. It's a great conversation about how thoughtbot approaches the world. Before we start the show, one quick message. Table XI offers training for developer and product teams. If you want me to come to your place of business and run an interactive hands-on workshop, I would very much like to do that. We can help your developer team learn topics like testing, or Rails and JavaScript, or managing legacy code. Or we can help your entire product team improve their agile process. If you're in the Chicago area, be on the lookout for our public workshops including our How To Buy Custom Software workshop. For more information, email us at workshops@TableXI.com or find us on the web at TableXI.com/workshops. Now, here's the show. Chad, would you like to introduce yourself to the audience?
CHAD: Hi, I'm Chad. I'm the CEO and co-founder and developer at thoughtbot. We are a web and mobile design and development company. We're the creators of a lot of open source for Rails - paperclip, shoulda, factory_bot. Usually, most of the products that we work on are going from concept. People come to us when they have their concept. We help them refine that concept and design and build it and launch it, usually in under 12 weeks. And we also work with much larger companies doing the same thing or improving their existing products and making them better.
NOEL: There are a lot of similarities between what thoughtbot does and what Table XI does, although thoughtbot I think is somewhat larger. And certainly, I've spent a lot of time working in consulting companies that did that kind of work. And so, that's what I'd like to talk to you about - what it is to develop a client project and to go from concept to execution in 12 weeks or a very short timeframe. So let's start there. What is different about software development in that kind of environment versus, like, a product company's kind of environment?
CHAD: One of the biggest things that allows us to work the way that we do and as quickly as we do is that we're an integrated design and development team. Designers at thoughtbot don't just do visual design. They do research and user experience. They do great visual design and then they're also frontend developers. And developers at thoughtbot care about usability and user experience. They participate actively in the design process and then we meet the designers in the frontend and take everything back over. So, our typical team size is three people: a designer and two developers, or two designers and a developer. And they're working directly with a founder or a stakeholder to refine the concept that they have. So, that integrated small team is able to work really, really quickly, and that's the thing that's at the core of why we're able to do that. A product company that's starting out just with that same sort of team structure is going to be able to also work quickly.
And if they can't, what's typically missing is that focus on figuring out what customers will actually use and love to use and what can be the basis of your business going forward. So many people get excited about the idea and then pile on feature after feature after feature and delay launching until they think they've got it absolutely right. And we fundamentally believe in finding the core of that idea and getting that in front of real users as quickly as possible and then iterating from there. That doesn't mean you can't go to a beta test instead of production, but it does mean getting in front of real customers.
NOEL: Yeah. That doesn't describe every Table XI project but it is very similar to the project that I just finished, my last project. And I think that one of the things that helps a consulting company in that respect versus a product company is you often have people who have done this over and over again.
CHAD: Yeah.
NOEL: And so therefore have some sense of like common pitfalls and things like that, whereas a product company might not have the people who have just been doing this for the last two or three years.
CHAD: And not only that, but having worked together before, too. So, a regular team that's forming around a new product, they're going to have the normal team sort of storming and norming that you have to go through. And because we're a cohesive team already, people have typically already done this before and they've already worked together before. And so, that's a big thing. The other is, we see it on our own products. If we have an idea - and we've built about eight products of our own over the course of the 15 years we've been in business - we do the same thing: we delay launch. We think it needs one more feature in order to -- so, having an external perspective where we are being hired to bring that sort of structure and expertise and be the ones holding everybody accountable for shipping quickly and iterating from there, is a benefit. And the fact that they're paying for our time is the forcing function.
NOEL: Yeah. Shipping is definitely a habit. And it's a habit you can fall out of surprisingly quickly. You think you're shipping all the time and then suddenly it's, "Oh, we need one more thing, and one more thing." And then you haven't moved anything into production for two weeks and it really can get surprisingly bad quickly, even for people who know better.
CHAD: Yes, even for people who know better. For some of these things, as a developer, I see everything we do as a liability and it's almost like, I'll be working and I'm like, "Oh my God, I've done this. I've got to commit it. I've got to make a small branch. I've got to do a pull request. I've got to merge it in as quickly as I can." Not compromising on quality, but I have this sense of urgency that everything I'm doing until it goes to production is a liability that I'm creating. And so, that feeling, that tension, I've always had it and it just causes me to like, "OK," and then I deploy it to staging and I'm asking someone, "Can you look this over?" And then as soon as they look it over, I'm pushing to production. I just have this sense of urgency around everything that I do because I have a sense of the liability that I'm creating when code isn't in production.
NOEL: That makes a lot of sense to me. That's a really good way of thinking about it.
NOEL: One other thing I wanted to back up on was the idea that the initial phase of a project, the discovery phase, is a skill that, I think, a consultant team can bring to bear that I think a lot of -- it's a step that's really easy to kind of skip because people think they know what they want. How do you handle that initial exploration at thoughtbot?
CHAD: A common pitfall is thinking that you know what you want, which is different than knowing that that's what your customers want. The other common pitfall is people hear research and that kind of thing and they think, "Oh, here's the design company saying we need to do three months of research before we do anything." And we found that there is a beautiful compromise. It's a process that we've run for a while, and then a few years ago, independently, different entities all came up with a name and a little bit more formal structure, and it's called a product design sprint.
NOEL: Yeah, we do those.
CHAD: It's typically a one week process but the important part is it's five phases. It's typically done in five days and you go one day of research, one day of brainstorming, one day of converging, one day of prototyping, and one day of testing. And testing means putting that prototype in front of real customers and watching them use it and getting their feedback. That process is a really nice balance between spending way too much time researching and just talking to real customers, showing them what you're going to do, and being able to adjust based on what you learn.
NOEL: What I find is that people are skeptical of research, but if they see quickly, in like a design sprint kind of structure, that one of their assumptions is wrong or they get a correction early on, then they see the value in it. And putting a context in place where you can rapidly find something that can show that some course correction is worth doing early can really get people on board.
CHAD: Yeah. And having done design sprints, you probably know there's this really awkward feeling in the first two, three, four days where it can be a little harrowing because there's a lot of uncertainty about where you're going to land. And then almost all the time, you do the prototype and you can see, like, the breath being let out, and then you do the testing and then it's like, "Oh, I can't believe what we've found here," and they're just fully bought in.
NOEL: Yeah. That's very much one of the roles of the facilitator in something like that, is to keep everybody from looking down, so to speak. I have to say here that the previous episode of this that's going to come out on the feed is about design sprint thinking being applied not to software but to the Tyson Foods Innovation Lab...
CHAD: Yeah, cool.
NOEL: Where we help them design products for food waste. So, the process like sort of goes beyond software.
CHAD: It totally does. And we've been hired to do that for other things, as well. It's just the structured process of getting everyone in a room together who has something to offer and who are the decision makers, along with the product people or the designers, and going through that phase of: we're going to take some time, we're going to understand, and we're going to brainstorm or diverge, and then we're going to converge on what we think is the right thing to do, and we're going to prototype and then test it. So, we've worked with some retail stores as well where we do that same thing.
They have a new idea for either a retail concept or an improvement that they want to make in their store, and we come together on the idea. We prototype out how it's going to work and then we test it on that final day. It's a really powerful structure for accomplishing a lot of different things.
NOEL: Once upon a time, I worked at Motorola, which was a very strange company, process-wise, in many, many different ways. But one thing that I was told there by a manager has always stuck with me no matter what the process is: the idea that you have to do all the steps and it's better if you do them in order. If you try to short circuit the first steps, you're just going to have to come back and do them later, and it's just going to be more painful.
CHAD: And we know when we're working on an app, the sooner we can identify that we've done something wrong or something should be improved, the better that is. And so, we're not talking waterfall here. There is a difference between this and designing everything upfront so that nothing ever changes at the end; it's about validation. And so, the prototype that we're doing isn't necessarily even a prototype of the full app. It might be that we've identified through the first few phases that something is a risk or we don't know how people will respond to this one piece, and then we're going to build a prototype around that so that we reduce the risk. When we talk about what a sprint is for, we say, "It's for reducing the risk inherent in bringing new products to market."
NOEL: What's something that can happen early in a project or early in the process of dealing with a client that starts to raise a yellow flag? Like, what's something that other people might not notice but that can cause you to realize that the project needs a little bit of help or that something's not going right?
CHAD: You mean in terms of whether we might be building the wrong thing?
NOEL: That you might be building the wrong thing, that the client might not be a fit.
CHAD: Okay. So, the client not being a fit. My approach there isn't super special, I don't think. I'm a developer; I'm a designer. When I walk into a room with someone, I could just as well be working on the project as anybody on my team. And we have really strong values and principles at thoughtbot. And so, my approach has always been to walk into the room like we're already working together, and if it doesn't feel right, then it's probably not right. It's sort of a gut level thing, but because everyone at thoughtbot has similar values in what makes us fulfilled in our work, it's a pretty good guide. If I had to offer something a little bit more concrete than that gut check level thing, it's this: when we start working together, or when I'm acting like we're working together, I have ideas, good or bad, that are different than their ideas. And I might even -- don't tell clients this -- say something different just for the sake of saying something different, to see how they respond to having their ideas pushed back on. And if they get defensive, if they don't 'Yes, and...' it, if they don't build upon ideas, then I know that we might not have a collaborative relationship with them, that they're just looking for someone who's going to do whatever they're told. And that's not going to be a good working relationship for thoughtbot. There are plenty of development companies that you can work with that are a lot cheaper who will just do what they're told.
So, I know that if we work with a company that is just looking for someone to do what they're told, not only is it not going to be a good working relationship, but they're really not going to get their money's worth.
NOEL: I think that... we often tell clients that a lot of what they get out of the relationship depends on what they put into the relationship, and people who are willing to have their ideas questioned, to really participate in that process - and even in things like testing, checking things out on staging, and being really active in that process - that really is a multiplier. One thing that you said that I wanted to follow up on is that you talked about people at thoughtbot having a shared set of values, and it occurs to me that that makes hiring really important. How do you use the thoughtbot hiring process to help, to encourage people who have those values to sort of make it through the process? That's kind of the backwards way of putting it.
CHAD: We have a pretty rigorous hiring process. We get a lot of applications to thoughtbot, and we hire less than 1% of the people who apply. So, being able to screen people out is important and to find the right people is important, but we've always believed that what's fundamentally broken in hiring is that people don't really know how to hire or interview people. And so, they do things like whiteboard interviews. They don't replicate what it's like to actually work together. And so, that's the guiding principle for our whole hiring process: what we're trying to do is replicate what it's like to actually work together, and not ask anybody to do anything that we wouldn't be asked to do ourselves or be OK being asked to do ourselves. So, the hiring process at thoughtbot starts with a non-technical conversation with someone, and what you're talking about is how you've worked at previous companies. We have a set of questions that we ask that are primarily centered around those values questions. For example, one of our values is continuous improvement. And so, we ask a series of questions asking for examples of where they improved themselves or improved their team. Another one of our values is self management, meaning you don't need someone to tell you what to do and you take initiative when left to your own devices - whether that be on your product work, like proactively taking the next thing off the backlog and doing it and pushing it forward and pushing it to production, or on our non-product work, like you see something that could be better and you take initiative. It doesn't mean that you always know what to do or never need to ask for help. It means when you don't know, you then take initiative to ask. And so, we ask questions around that. One of the example questions is: can you tell me about a time where you saw something that could be improved and, even though it wasn't really your job to do it, you worked to improve it anyway? And then from there we do a technical interview. So much of working together is communication, and it's not necessarily writing code. And so, for our technical interview format - actually, a pair of thoughtbot people just did a mock technical interview on stage at RailsConf for their talk. It was a real live technical interview done on stage and it was filmed, so you'll be able to watch that. It's in two parts. First, we're going to talk through how we would architect something together. And we do it in the technology that you're comfortable with.
And then we say, "OK, then it needs to change in this way because we're going to add this other feature. How might we change it given what we just talked about how we would architect it?" And then we say, "It's having this performance issue. How would you investigate the performance issue?" And the second phase of the interview is we give you a real pull request on a real internal application that we have. And as if you your co-worker had sent that to you for a review, we ask you to leave some comments on it. NOEL: I increasingly see "leave comments on this pull request" as something that team is doing a technical review. And it's got a lot going for it. CHAD: Yeah. It's a real pull request from one of our things and it had some good stuff in it but then we also put additional things in it. So, it's a really good opportunity without asking someone to write code on the fly to get their perspective on code. NOEL: Yeah, because it's a little bit less nerve wracking than being asked to write code on demand in an interview situation, although being asked to -- I don't know. Interviews are stressful, there's sort of... CHAD: Yeah, they're stressful. And making them less stressful is important to us. So then, the final stage is you come in and you pair for the day usually with one person in the morning, have lunch with the team, meet the team, another person in the afternoon. We used to do a week of pairing and we paid people for the week, and that's a big commitment to ask. So we decreased it to three days. And what we found after doing it for three days still paying them was that really after the first day, we mostly knew, almost all the time. And so, we decreased it to the one day and that was more sustainable for everybody. NOEL: I used to do a lot of the initial phone screen... usually, it's a lunch screen interview and you actually do get a -- at that level, with what that bar is, it's terrifying how fast you actually do have that go-no-go instinct. CHAD: Yeah. What I'm focused on now is making sure that, we know that bias is in the hiring process. Everyone has unconscious bias. It's something that exists in the world. And so, you have to do things to combat that and then make sure it doesn't creep in your hiring process. Because when you hire for sort of "culture fit", you really run the risk of building a monoculture. So, we focus on values. But the other thing that we do is, studies have shown that if you just allow people to write open-ended feedback or make an open-ended judgment call about a candidate, that's where bias can really creep in because you're using factors to judge that person that aren't actually important to the job. And so, it's helpful to just have a guide, a scorecard, or a list of: here are other things for the non-technical interview, here are the things that we are looking for. Just rate the person on those, don't worry about all the other factors. Don't worry about what you think they might be technically. The technical interview is going to have its own card of 'here's what we're looking for at this stage'. And giving people the opportunity to objectively have a list in front of them about the things that they are looking for, really goes a long way to making sure that bias doesn't creep into the hiring process. And then the other thing that we do in that regard is when you're doing a technical interview, you don't see the feedback that the person who did the non-technical interview before you, left. 
You only know the person deserves to be at this stage because they got through. And so, you're not being influenced negatively by some random comment that someone might have made earlier in the process. You're giving each candidate a blank slate for the things that we're looking for at that stage. I think that helps quite a bit.
NOEL: The phrasing we use in our interview scorecard is "did you observe", which really I think has the same effect of "we're trying to base it on objective factors" as much as possible. One of the things that thoughtbot is most known for -- thoughtbot is known, first of all, for their open source contributions and also for the thoughtbot Playbook. Do you want to explain what the thoughtbot Playbook is and why it's out there as a thing in the world?
CHAD: The Playbook is a book that we've written that just describes how we work, what we believe about how products are successfully designed and built, and also how we work as a company. It even includes lists of the vendors that we use for payroll, and everything. And it exists because we write everything down at thoughtbot. We're big believers in documentation. Not so that we never change, but so that we can be very intentional. Because we're changing so much, by writing it down and being able to make pull requests against it, we can be very intentional about the change and transparent about the change that's happening on a team. So, we've always been that way. The other thing is, at thoughtbot, we believe that there's a better way to work. And what we're trying to do is find it and share it with as many people as possible. So, sharing is always part of what we do. Same way that we do open source, you can't stop us from doing it. The Playbook - it was natural to make it public. It's just the way that we work.
NOEL: So, do you find that it increases your level of project discipline to have the Playbook, to have the sense that we have a set of processes and tools that we use, and if we're going to make a change, then you need to either adjust the Playbook or deal with it in some way?
CHAD: Yeah. Because it allows you to be intentional. You can say, "We're doing this differently on this project and here's why." You're not justifying it to anybody. It's not like we request that you justify all of the things you're doing differently. But to yourself and to your team, you can say, "Here's an experiment we're trying. Let's do this differently here." Or, "We learned about this for this project and that has influenced us to do this different thing." And so, what we do is we have a Trello board where people are tracking the things that they're doing differently from the Playbook. The reality is almost no project at thoughtbot adheres exactly to the Playbook, but that's because we are continually improving the process. So, the Playbook is sort of the documentation of the best things we know now. And then every project that's ongoing is trying new things.
NOEL: You were kind of anticipating my next question there, which was: what's the process for bringing learning back into future projects? You said there's a Trello board. Do you have, like, a retrospective process at the end of projects?
CHAD: So every project does weekly retrospectives, but then we wrap up with a final retrospective.
And then the Trello cards about the things that we're doing differently on that project get tracked, and they go through sort of a similar thing to the research, where we're talking about the idea and then discussing it, and then we try to declare it a success or a failure at the end. And whether it's a success or a failure, that might lead to a Playbook PR. It might lead to a blog post. It might lead to a video or something like that, so we share that knowledge not only within thoughtbot but with everybody else.
NOEL: What are some of the things that have been changed in the Playbook in the last -- what are some of the most recent improvements that you've made to the Playbook?
CHAD: We had been doing like a sprint zero, that kind of thing. When we formally changed and said, "OK, we're doing product design sprints now. We've tried them on many projects and the structure is even better than what we're doing," that was a major change to the Playbook. That was pages and pages of refinement to the design process and a whole new section about product design sprints. That is the biggest change. That happened about six years ago now. So, it was a while ago. But that was a massive change. Other changes are usually not so massive - smaller things. One example of a recent smaller change: we never did scrum, but we used to do story points. Part of our default tool set for how we approached a project was building a backlog and estimating the story points and doing those kinds of things. And we went through a period of time where we were using Pivotal Tracker - back then we even built a tool of our own, which is no longer alive, that we believed worked even better with the design process. And we started a series of experiments where we said, "You know what? Don't worry about any of this stuff. Just work together, communicate well. You can take all the other principles that we have about how software is built, but don't worry about story points and that kind of thing." And it was a small change, but what we found is that we could dramatically reduce the amount of ceremony needed for the majority of projects that we were doing, and move all of those things that we were doing formally before into, like, a tool chest. So, our default process is very little process. It's daily stand-ups and a weekly retrospective and working with a Trello board, typically, of the broken-down cards of what we're doing. And then we start with that. When we have a retrospective and maybe there's a problem that the team identifies, then we pull something from that tool set of things. And so, adjusting the Playbook to remove a lot of the stuff around points and that kind of thing, and sort of moving that into the tool chest that we have, was not a huge change but it's an important one. So, that's an example of something that's changed.
NOEL: How do you manage, in that context, clients that want to have estimates of how much they're going to pay or when things are going to be ready?
CHAD: The reality is that because we have set the expectation that the majority of what we're going to do is launched within 12 weeks, that's a really nice upper bound that everyone is working towards. And so, we're in constant communication about what each thing that we're working on is, and clients often sit in the office with us. We give them a desk in the office. And so, we know what we're working on. We're having a stand-up every day.
We're having a retrospective and planning meeting where we plan out the upcoming week, and we know that we're working toward launching in eight weeks or 12 weeks or whatever it is, and where we are in the project. And so, on a weekly basis, we're doing another gut check about how we're tracking and where we are. On some projects, there's one tool that we use when there starts to be uncertainty -- by default, we just plan out the upcoming week. We have a backlog of everything that we believe we know about, and a column for the current week. At the start of the week, together as a team, we move items off the backlog into the current week. That's the default process. When there starts to be uncertainty there, we might create a column for each of the next four weeks, or we might create a column for every week remaining on the project. And then, we take everything in the backlog and we move it into those individual columns. But we do that as a team. And that's a way to give a little bit more certainty to the whole team that we've planned out all the weeks and everything still looks good in terms of how it's fitting. We're pulling from that week's column while we're working on it, so that at the end of the week, if we're left with a card in there, we know we didn't accomplish what we had originally anticipated for the week and there are going to be ramifications for that. And we do what the best solution to any software problem like that is: we talk about it. So, that's how we manage that. In a rare case, we will then get to that point on a long-term client project that we're working on. In particular, a lot of what I'm talking about is for when we're building the first version of a product. But that's not the only thing that we do. We often work with companies scaling or improving their existing product. We'll often be working alongside an existing team. And so, if that existing team does story points and is tracking velocity, there's nothing fundamentally wrong with that as long as they have arrived at that through need. And even if they haven't, we don't join the team and start tearing everything apart. That's not the way to build a working relationship. The way to build a working relationship is to slot into that team and help them be even better at what everyone's trying to accomplish. So, there are projects that we do story points on, either because it solves a real pain that we have or because it was already being done.
NOEL: When I was at Obtiva, we used to sell -- not all of our projects -- but we used to sell some projects two weeks at a time on a flat fee basis, and so you'd buy two weeks of two developers' time or something for a flat fee. It had some interesting effects on projects, because the planning meetings at the top of each two-week period, where you would sort of decide what they were going to get, wound up being sometimes kind of fraught. But on the whole, it was not a bad model. And I think that given some time to refine it, it could have gotten a lot better.
CHAD: We do weekly iterations by default because there's a natural rhythm to the week. And we also only work on the client's product four days a week. The fifth day is what we call investment day. There's a natural cadence and pause to the way that we're working. So, we bill by the week. And we don't start a project before we know enough about it to make an estimate and say, "We believe we can get to launch in eight weeks. Does that work for your budget?" And if it does, then we move forward.
And we're very good at working and launching within that, because at that point, it becomes a budget. Every conversation beyond that point is saying, "That's a great idea, but can we fit it within that scope?" Or, "Let's tweak it this way in order to be sure that we hit our budget."
NOEL: One thing you said there is a nice segue to one of the other things I want to talk about. You talked about an investment day, and that leads to thoughtbot's support of a number of open source projects that you listed at the beginning.
CHAD: Yeah.
NOEL: I've been at consulting companies that have never quite managed to sustain significant open source tooling on top of their client work. Why is that important to thoughtbot and how do you think you manage it?
CHAD: It's important to us as developers because we believe in open source. We use open source, so giving back and creating tools is fulfilling for us. The other is it goes back to that value of continuous improvement. We work on a lot of products and we see a lot of patterns and things that could be better. And so, having time set aside every week for fixing those and making that better - and sometimes that means creating open source - and then having time each week to be able to work on that, is driven by our value of continuous improvement. So, when we created shoulda, for example, which is a testing library, it introduced contexts and helper methods for your tests. So you could say something like "it should have many posts" and it would run a series of five specs for that.
NOEL: As documented in the fine Rails Test Prescriptions series of books.
CHAD: Yeah. We created that because we believe in test-driven development. And back when we were getting started in this community -- so, we started in 2003. We switched to Rails in 2005; when it hit 1.0, we formally switched to Rails. And we believed in test-driven development and we were saying no to clients that didn't want us to work that way. Back then, test-driven development was not an established practice. People actively said they didn't want us to do it. People would write blog posts to be like, "It's slow. We tried it." But we believed in it and we knew that working that way was better. What inspired us to create open source around testing was we wanted to prove people wrong. So, things that we could do to make it better and faster allowed us to work the way that we wanted to work. And so, that's why we call it investment time: we're investing in the things that we do, we're investing in ourselves by learning new things and creating new tools which make us better and faster at our work, and we're investing in thoughtbot, the company. That's why we call it investment time. But that driver of wanting to be better and faster at what we do is where our open source comes from.
NOEL: How hard is it to maintain that investment time in the face of what I would imagine would be some pressure not to?
CHAD: Surprisingly enough, we get almost no pressure from our clients. And when we first switched, we were nervous. But we do a really good job, through the Playbook and that kind of thing, of setting expectations. But the reality too is that we're just as productive in those four focused days as another team in five unfocused days. Because Monday through Thursday is focused on client time. We're not doing a lot of meetings and that kind of thing.
Then there's a certain amount of wiggle room - like, we couldn't push it to three days and be just as productive - but really, how much stuff was getting done at 3:00 p.m. on a Friday? Not a lot. And so, by eliminating that, by pushing it and saying, "Monday through Thursday is super focused time," we're just as productive. And so, we don't get a lot of pushback on it from clients. I'll be honest, it hasn't been difficult for us to maintain it financially over the years, but as we've gotten bigger and also have lots of different sorts of pressures from the market, only recently it's gotten to the point where we see it in the financials. And we can say, if we were to bill five days a week on this, could we get a little bit more? Thoughtbot has been around for 15 years and a lot of people have been here a while, and we do annual salary reviews and raises and all that stuff, and we need to make sure our rates keep pace with those salaries that are being increased. And so, that creates financial pressure on investment time. Just being honest, that's where the pressure for investment time comes from, not from clients. And if a client ever objects, the argument that I have is that the proof is in the pudding - like, we're going to work and you're going to see how quickly and how productive we are. And usually, people do see it.
NOEL: I can see that. If a client comes in knowing what they're getting and they've already gone through the process and they already trust you, then they're likely to keep going along with it unless they have some sort of maintenance concern or something like that. But I could see that pressure coming internally just as easily as I could see it coming externally.
CHAD: Yeah, the results speak for themselves. One is, we've got to be good. We've got to be focused on that Monday through Thursday. If we start to erode that or to not be as productive, that'll obviously be a problem. But also, people don't choose thoughtbot because we're good developers or good designers. That's table stakes. And there are lots of good design and development companies in the world. People choose thoughtbot because they want to be part of what we've done and what we do as a company, and how we work and our culture and our values. They want that for their team, for their product. So, that's what causes people to choose thoughtbot. When clients come to us, they see that, and they know that if we're working on open source during that time, or we're having a company meeting about how to make the company better, or the design team is doing a design critique during that time, they're getting the benefit from that. Like, there is benefit to that. It's why we're able to do the things that we do, and the client before them successfully did the four-day week, which allowed us to then become better and faster at what we do. And so, it's a little bit of a 'they've bought into it and they're paying it forward' kind of thing, as well. Although investment time is free. We don't charge clients for it.
NOEL: Sure. We're sort of coming up on time. Is there something I should have asked you but I didn't ask you?
CHAD: Well, I thought you were going to ask, like, "How does a Rails consulting..." We're not a Rails consulting company. We do lots of different things. But I think there's a lot of stuff in the market and in the news about how Rails isn't as popular as it was. I thought you were going to ask me about that.
NOEL: It's funny, because as much as I hear that, at Table XI we don't have any trouble selling Rails to people. Especially, I think, because a lot of our clients are not technical and therefore, on some level, don't care. And so, we say this is the best way to have a small team act like a big team, which we believe. And they say, "Great, that sounds good. Where do we sign up?" And it's the people that come in that have a little bit more tech savvy that start... And we don't just do Rails. We also do React and mobile and a bunch of other stuff too. But I guess the reason I didn't think to ask about whether you're having trouble selling Rails is because we're not.
CHAD: Yeah. We're not either, and we do lots of different things too. People know us especially because we were -- strictly speaking, I believe we were the first consulting company in the world to announce that we were switching to Rails. And because we were so early on - we were using it before it hit 1.0 - we saw, "Hey, we can create this open source here."
NOEL: I'm reasonably positive that when you did that, I was working at Motorola. I had just moved to Chicago. I grew up in Chicago but I lived in Boston for a while, and had just moved back to Chicago at that point and was working at Motorola on not-web stuff, and I had just started using Rails. I'm pretty sure I remember seeing that announcement and thinking, "Oh, maybe I shouldn't have moved from Boston because this would have been very interesting," because I was using it for internal tools pre-1.0.
CHAD: But we didn't choose Rails because we thought it was going to be popular. We chose it because, as designers and developers, we really liked using it and we felt it helped us build better products. And so, that's the same approach we take to our tools every day. We want to choose the best tool for the job and one that makes us fulfilled as designers and developers. Sometimes, Rails isn't the right choice for a particular project. But a lot of times, it still is. As designers and developers, we're continually trying every new thing that comes out. That's what our clients expect from us. But as people who want to be fulfilled in our work, we're also looking for the next thing that's going to make us fulfilled, because we don't want to miss out on it.
NOEL: Sure. It seems to me that, in terms of these projects, Rails has developed a huge ecosystem advantage, even over the tools that are nominally better at certain individual things. That's, to me, one of the biggest selling points that we use.
CHAD: Especially when it comes down to the time frames of the projects that we work on. Like, when we're working on the first version of a product, we're targeting that launch between 8 and 12 weeks. If we need to integrate with an external service, and there's not a library to do it in another framework that we think might be a better technical choice, and we're going to have to spend two or three weeks creating the library to talk to that external service, it completely blows the timeline out of the window and that's going to increase their budget by a quarter or something like that. It just almost completely removes that option from the table.
NOEL: Yeah, I agree. All right, Chad. This has been a lot of fun. Can you tell people where they can find you or thoughtbot, if they want to continue talking to you?
CHAD: Yeah, you can find me on Twitter @cpytel, and thoughtbot is @thoughtbot on Twitter. And you can also go to thoughtbot.com. We work locally with clients.
We have local studios in London, Boston, New York City, Raleigh, Austin, and San Francisco. And in those cities, we work locally with people. We do remote work as well. But if you're in one of those cities and you have an idea, or you have an existing company that you want to make even better, definitely get in touch.
NOEL: Great. Thanks for being on the show, Chad. It was really good to talk to you.
CHAD: Nice talking to you, as well. Thanks.
NOEL: Tech Done Right is available on the web at TechDoneRight.io, where you can learn more about our guests and comment on our episodes. Find us wherever you listen to podcasts. And if you like the show, tell a friend, your social media network, tell me, tell your boss, your pets, tell anybody really. And if you can, leaving a review on Apple Podcasts helps people find the show. Tech Done Right is hosted by me, Noel Rappin. Our editor is Mandy Moore. You can find us on Twitter @NoelRap and @TheRubyRep. Tech Done Right is produced by Table XI. Table XI is a custom design and software company in Chicago. We've been named one of Inc. Magazine's Best Workplaces and we are a top-rated custom software development company on Clutch.co. You can learn more about working with us or working for us at TableXI.com or follow us on Twitter @TableXI. We'll be back in a couple of weeks with the next episode of Tech Done Right.