Markus Oberlehner: Nobody starts out being a perfect developer. There are things we have to learn and we have to experience ourselves. And it's the same with testing. So we can't expect to start writing tests and be perfect at writing tests, but we need to do it and do it more, and at some point, hopefully, we get better at it.

Sean: Hi, and welcome to PodRocket, a web development podcast brought to you by LogRocket, which combines session replay, error tracking, and product analytics to help software teams solve user-reported issues, find issues faster, and improve conversion and adoption. You can get a free trial today at logrocket.com. I'm Sean, and today we have Markus Oberlehner joining us to talk about his recent talk, "TDD for Vue.js developers: how to write good tests," from this year's Vue Amsterdam conference. Markus is a web developer, blogger, and open source contributor. Welcome to the show, Markus.

Markus Oberlehner: Hello. Nice to be here.

Sean: Thanks for joining us. And before we get into the topic, could you tell us a little bit about your background and your experience in web development?

Markus Oberlehner: Sure. I've been a web developer for 15 years. I've been doing it professionally for 12 years, but coding for about 15 years. Currently, I'm working as a software architect for [foreign language 00:01:18]. And yeah, it all started as a self-learner, basically. I didn't study anything related to software development; I learned it all by myself. Starting 15 years ago, I was learning electronics in school, and at that point I wasn't really interested in it, so I skipped school and went to a bookstore and learned all about PHP at the time. That's how it started, and now I'm here.

Sean: Wow. And I imagine in those 15 years you've seen the good, the bad, and everything in between when it comes to testing. So I'm curious, why do you think there's such a gap in knowledge when developers go to write tests? Why do you think it can be so difficult?

Markus Oberlehner: Basically there are two things, I think. The first thing is that everything is moving so fast in web development, and sometimes the tools don't keep up. For example, in recent years we had things like SPAs coming, and we had things like microservices and micro frontends, and all those decisions we make when building our stacks influence how we can test them. And I think one of the reasons why it's so difficult, or why it feels so difficult, is that the tools or the techniques we use didn't keep up with the rate of change. I think another thing is that, like everything, testing is a skill we need to learn. Nobody starts out being a perfect developer. There are things we have to learn and we have to experience ourselves. And it's the same with testing. We can't expect to start writing tests and be perfect at writing tests, but we need to do it and do it more. And at some point, hopefully, we get better at it.

Sean: That kind of sounds like part of the philosophy of test-driven development. The test shouldn't be an afterthought, but part of the development process in the first place. Have you had experience with TDD? Have you found that to work out?

Markus Oberlehner: Yeah, I agree with what you said. With TDD, it's about the tests guiding the design of our applications. What I experience when I practice it is that usually you end up with better-designed applications.
If you don't write tests at all, or if you write tests after the fact, then oftentimes the tests don't really help you, and you oftentimes end up with the big ball of mud. We are building the next legacy system, basically. Having tests in place and practicing TDD enables us to refactor our code and avoid building the next legacy system; we can refactor the code to prevent ourselves from ending up with a code base nobody wants to touch anymore, because it's so complicated to work on, and it doesn't match the requirements of today but the requirements of last year, for example, and we are not able to refactor it. But with TDD, and with practicing it and having good tests in place, we are able to refactor with confidence and avoid this problem.

Sean: I find it so interesting that a code base could be only five years old and it could be considered legacy, just because it evolved more organically rather than based around the inputs and outputs and the testing. What kind of problems might a developer run into when they're trying to do TDD, or to test in a more mature, process-driven way?

Markus Oberlehner: I think the most common problem is that we are not really aware, when starting out, how important it is to write the test first. So the most common problem is that people think, "Yeah, we should test." And a lot of people know testing is important, but they start out doing everything the same as before, writing the tests after they have written the code. And I think this is the most common problem, and this is one thing that we also experienced at work: we started a new project, and we knew testing is an important thing, and we wanted to do all the right things, obviously. So we wrote a lot of tests, but we wrote basically all of them after implementing the features. So we ended up with a lot of tests that didn't really help us. I think this is a big problem when people start out with this topic.

Sean: And I think that writing the tests after the feature's already implemented, it feels like it's almost done already, so why not take shortcuts, or maybe leave out some of the test cases that you would've written if you had written them first. I'm curious, coming at this from the perspective of someone who maybe needs to convince the business that tests are important, what kind of strategy would you recommend for how tests actually increase and drive business value?

Markus Oberlehner: In a perfect world this wouldn't be necessary; we wouldn't have to convince anybody that testing is important, because we could say, if we practice this, then we will be more effective, we will have fewer bugs, we will be able to build features faster because we don't build the next legacy system. But we all know we don't live in a perfect world, unfortunately, so at the beginning it's definitely an investment. We have to keep this in mind. It doesn't come for free to start with testing, especially if we have to gain experience with it, especially if we don't have the experience of how to do it yet. We have to learn it, and this takes time, and time equals money. So in the beginning it definitely is an investment, and if something is an investment, we most likely have to convince our bosses. So there are two strategies. The first one is to gather data: find out about and read about other successful teams that have gone this route and succeeded with a test-driven development approach.
The second way depends on how the company is structured and how it all is set up. You don't tell anybody, you just do it, because you could consider testing an integral part of development. We don't ask somebody at the management level if we should switch from Webpack to Vite, for example. We just do it, because we know that in the end we will be more efficient, and we need this, or we want this, to be fast at our job. I think it can be the same with testing as well. It's simply something that we need to do to do our work better, and we could consider it our responsibility to just do it, because it's what's needed to do our job in a good way, basically. And our managers, most likely (as I said, it depends on the structure of the company), are not the ones who know about all the best practices when it comes to coding, so it's our responsibility to tell them what is state of the art.

Sean: That's interesting. I really like that perspective, that it's just part of the development process and there's no real reason to think of it separately from implementation, or communicate it separately from implementation to the broader organization that you're in. And so I want to drill down a bit more into the writing-the-test-first thing and test-driven development, because how do you test a thing that doesn't exist? Do you mind talking a bit more about how this actually happens?

Markus Oberlehner: Yeah, exactly. When starting out, it seems kind of impossible, right? How can we write a test for something that isn't there yet? It was the same for me. When we started our first project where we wanted to do everything the "right way," we faced the same problem. We kind of knew we should write the test first, but we weren't able to really do it. I think the problem is twofold. For one, if we are used to the old way of doing things, if we are used to writing the code first and only then maybe doing no testing at all, or only manual testing and some automated testing, it's hard to imagine how we can do it. But if you think about it in a different way, if you think about the outputs of whatever you build first, if you think more like, "I need to build this little part, and I put myself in the shoes of whoever is consuming this part of our application..." So if you think about a React or a Vue application, then it might be a component; if you think about some function, then you can put yourself in the shoes of another developer who must use this function to do something with it. Then we can start thinking, "If I use this thing I built, how would I want to use it?" And then I think it gets easier, if we change how we think about it. So the first one is: think about the problem like the consumer of whatever you build, not like the one who builds it. The second aspect is that we can go one level up. If we think about unit testing, it is kind of tricky there, but I think following the first step helps. But if we think more about the features of our application, then I think it gets easier to write a test first, because we know what we want from our application, we know how a certain feature should work. We know this at the beginning; otherwise we wouldn't be able to start implementing it at all.
So instead of focusing so much on unit tests when thinking about React and Vue applications, it gets a lot easier to write tests first when we take a look at the features we build and write tests that make sure the feature does what we specified it to do. So change the perspective, basically.
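To make the "write the test first, from the consumer's perspective" idea concrete, here is a minimal sketch in TypeScript, assuming Vitest as the runner and a Vue composable as the unit under test. The `useShoppingList` composable and its API are hypothetical, invented for illustration, not taken from the talk; the point is that the test is written first and describes how a consuming developer would want to use the thing.

```ts
// useShoppingList.spec.ts: written BEFORE useShoppingList exists.
// We describe the composable the way a consumer would want to use it;
// only then do we implement just enough code to make it pass.
import { describe, it, expect } from "vitest";
import { useShoppingList } from "./useShoppingList";

describe("useShoppingList", () => {
  it("adds an item to the list", () => {
    const list = useShoppingList();
    list.add("butter");
    expect(list.items.value).toContain("butter");
  });

  it("ignores duplicate items", () => {
    const list = useShoppingList();
    list.add("butter");
    list.add("butter");
    expect(list.items.value).toHaveLength(1);
  });
});
```

```ts
// useShoppingList.ts: the minimal implementation written afterwards,
// just enough to make the specification above pass.
import { ref } from "vue";

export function useShoppingList() {
  const items = ref<string[]>([]);

  function add(item: string) {
    // Only add items that are not already on the list.
    if (!items.value.includes(item)) {
      items.value.push(item);
    }
  }

  return { items, add };
}
```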
Sean: Just a quick pause here to remind you that PodRocket is brought to you by LogRocket. LogRocket can help you understand exactly how users are experiencing your digital product with session replay, error tracking, product analytics, frustration indicators, performance monitoring, UX analytics, and more. Solve user-reported issues, find issues faster, and improve conversion and adoption with LogRocket. That kind of reminds me of the methodology of working backwards from the customer, because when we implement something, we understand what the user requirements are, and so what we ultimately want to test is that it behaves how we expected, not necessarily what the implementation details look like. So I've got a question specifically about the front end testing side of things. I think something that you emphasize in the talk is decoupling. What does that mean, and how can we achieve that in practice? Because from a personal side of things, I'm certainly guilty of having written a component that maybe is a little bit too big or is doing too many things at once. So how can we start to break things up and, I guess, decompose them in a way that makes them testable?

Markus Oberlehner: I think about decoupling, again, in two ways. First of all, when we think about our code, we all know that decoupling is a good thing. We should decouple certain parts of our application from other parts that are not really related, to make the components we build more reusable. This is the first part. We all know decoupling is good when writing code and when building applications, but what I am talking about is also recognizing that decoupling is important for writing our tests. We also want to decouple our tests from certain aspects of our implementation, and from certain aspects of our features, even. So I'd say there are three aspects in which we want to decouple our tests. The first one is that we want to decouple tests from the test framework we are using. Think about using a test framework like Jest. And nowadays there is a new test framework, a new rising star, which is Vitest. Luckily, in this case the API of those two is kind of similar, so it's not that big of a deal. But if you want to switch from Jest to Vitest, for example, because now Vitest is the new cool kid, basically, you need to change all your tests wherever you use a different API, or where the API is not compatible between those two. But if you had decoupled your tests right at the beginning when writing them, then you can swap test frameworks as you want. You can even just try another test framework and see if it's faster or more stable, or anything like that. And if you don't like it, you can switch back. Even more important: Vitest and Jest are more for unit testing or testing components, but not really for end-to-end testing. If we think about interesting tools like Cypress or Playwright and all of those, there is also the case that for the longest time Cypress was considered one of the best testing frameworks, but nowadays Playwright is catching up, and this means that some of us might want to switch from Cypress to Playwright, because Playwright, for example, is a lot faster. But if we didn't decouple our tests from Cypress, then it is a lot of work, and if we have hundreds of tests, we might decide it's not worth it. But we have opportunity cost with that, because now we are using Cypress although we know it's not the best tool, and it slows us down. That's just an example; Cypress has other benefits, so it's a trade-off, basically. But if we consider that the benefits of Playwright are more important for us, yet we can't switch because our hundreds of tests are coupled to Cypress, then it's a problem. But if we use decoupling techniques and are able to switch between test frameworks with relatively low effort, then it's a good thing. It's optionality, and optionality is always a good thing when moving forward and for business decisions.

Sean: Yeah, and also if something comes along like Playwright, or something that's faster, we might want to switch. And what does that decoupling actually look like in the code? Are we wrapping the API of the testing framework with our own testing functions, so that we can then swap out what's under that wrapper?

Markus Oberlehner: Yes, exactly like that. In my example, we implement a driver; we use the driver pattern. This means we have a generic driver interface, and this is a wrapper around the test framework API. At first this might sound like a lot of overhead, but I think it not only serves the purpose of decoupling our tests from the framework, it also enables us to create our own API that is tailored to the needs of our application. And we can have a smaller API. If you think about Cypress and Playwright, those are awesome tools because they can do basically everything. But for our applications, we only need a few of the methods those frameworks have to offer. This can be a benefit if we have our own wrapper, which is limited to only a couple of functions that we can use to write our tests, because that way we can enforce certain best practices. We can say we don't want to use CSS selectors for testing, for example. And if we come up with our own wrapper which makes it impossible to use CSS selectors, then that way we can enforce best practices. So we have two benefits, for the cost of the little overhead of writing our own driver, which is not that much work, actually. I have an example GitHub repository, and the driver implementation we use for writing hundreds of tests at our company is like 115 lines of code, for example, so it's limited. It's a very small overhead, I think, for the benefits it offers.
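As a rough sketch of what such a driver could look like, assuming Playwright underneath; the `TestDriver` interface and its method names are illustrative and not taken from Markus's actual repository:

```ts
// driver.ts: a small, application-specific wrapper around the test framework.
// Tests talk only to this interface, never to Playwright directly, so swapping
// frameworks means rewriting this one file instead of hundreds of tests.
import { expect, type Page } from "@playwright/test";

export interface TestDriver {
  goTo(path: string): Promise<void>;
  // Only semantic lookups are exposed (no method accepts a CSS selector),
  // so the "no brittle selectors" best practice is enforced by construction.
  clickButton(label: string): Promise<void>;
  fillField(label: string, value: string): Promise<void>;
  seeText(text: string): Promise<void>;
}

export function makePlaywrightDriver(page: Page): TestDriver {
  return {
    async goTo(path) {
      await page.goto(path);
    },
    async clickButton(label) {
      await page.getByRole("button", { name: label }).click();
    },
    async fillField(label, value) {
      await page.getByLabel(label).fill(value);
    },
    async seeText(text) {
      await expect(page.getByText(text)).toBeVisible();
    },
  };
}
```

Switching to, say, Cypress would then mean writing a `makeCypressDriver` that implements the same interface; the tests themselves stay untouched.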
Sean: The side of that where you intentionally prevent usage of certain features of the testing framework, that's really interesting, because at our company, and I think at many others, we have docs around the best practices when testing in Cypress in particular, because I think Cypress is prone to flakiness when testing the wrong way. And the CSS selector, sure, because that might change, and then all of a sudden when you change the CSS, the tests break. So that sounds very convenient, to explicitly disallow access to certain parts of the API. But you mentioned another part of decoupling, and I think you were getting to it, which is the user interface. Do you want to dive into that as well?

Markus Oberlehner: Yeah, decoupling from the user interface might sound a little abstract to most people, because what does it mean? What I mean by decoupling from the user interface is this: your application most likely solves some real-world business need. Your application may be for a shipping company which needs to track its containers around the world, for example. So you have a real-world use case you are trying to solve with your application. And for this you create the application with a certain user interface, but this user interface might change. What might also change, but probably at a much lower frequency, is the real-world business use case behind it. How the containers move around the world and how they are tracked probably doesn't change that often, but it's more likely that you want to change the user interface because you want to make some improvement. Or it might even go so far that you completely change how people interact with the application. If we stay with the example of a shipping company, you might start out with somebody using a tablet and a pen to do certain interactions, but then you might realize they all wear gloves and they can only swipe, or something like that. The user interface changes completely, and that's one reason why you want to decouple your tests from the user interface; it shows that the user interface is not static. The more important reason why you want to decouple your tests from the user interface is that you have the opportunity to let your tests serve as documentation of how you solve the real-world business need with your code, instead of documentation of how it's actually implemented in the user interface, that there is this button and there's this input field, which at the end of the day isn't that important. So by decoupling from the user interface, instead of saying in your test, "Select this input field with a certain label," for example, or with a certain CSS selector, which you shouldn't do but many do, we say we want to find the container with a certain ID, or something like that. So our test basically says how a real-world person, who is not a programmer, not primarily concerned with the software aspect but with solving the real business need, would think about it. Maybe an easy example is something like a shopping list. Instead of saying, "Select this input field, enter 'butter'," because I want to buy butter or because I'm in need of butter, you could say, "Add butter to the shopping list," like a sentence. And this is implemented as a function, and this function serves as an abstraction for actually entering the text in this input field and pressing a button to submit it. By having this decoupling, we can read our tests like a description of how a shopping list should work, instead of a description of the concrete user interface elements.
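Building on the hypothetical driver sketched above, the sentence-style helper Markus describes might look something like this; `addToShoppingList`, the field labels, and the route are invented for the example:

```ts
// shoppingList.test.ts: a business-language action built on top of the driver.
import { test } from "@playwright/test";
import { makePlaywrightDriver, type TestDriver } from "./driver";

// The only place that knows the UI is "an input field plus a submit button".
// If the UI later becomes a swipe gesture, only this function changes,
// not the hundred tests that call it.
async function addToShoppingList(driver: TestDriver, item: string) {
  await driver.fillField("Item", item);
  await driver.clickButton("Add");
}

// The test reads like a description of how a shopping list should work,
// not a description of the current user interface.
test("an added item appears on the shopping list", async ({ page }) => {
  const driver = makePlaywrightDriver(page);
  await driver.goTo("/shopping-list");
  await addToShoppingList(driver, "butter");
  await driver.seeText("butter");
});
```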
Sean: That sounds so useful, to be able to just go into the test and understand exactly what the business context is, or what the need for a particular feature is. I think at a lot of companies you don't have testing that is that mature, and so you find some old code and it requires you to go back to old PRs, go back to Jira tickets or wherever the product specification was laid out, and there might be a Slack history or whatever conversation tool where that stuff is lost. But if it's right there in the code, then it's kind of preserved by version control. But at some point, when you write the function that is the "add to shopping list" function, it does need to interact with the page. Is there an expectation that, let's say, when you change it to the touchscreen version or you change the user interface, you will then have to change that wrapper where we interact with the page?

Markus Oberlehner: We can't prevent having to make changes when big changes to the user interface actually happen. But what we gain is that we can use this "add to shopping list" function like 100 times, and we only have to make the change that's related to the user interface change in one place, one time. This is the benefit we get from it.

Sean: That makes sense. Another topic you dived into was this idea of specifications, and I don't think that's super common practice, so do you mind explaining more about what you mean by that?

Markus Oberlehner: Usually we think about tests and testing, but what I recommend is, again, a shift in perspective: to not think about writing tests, but to think about writing specifications, because the semantics are different. If you think about a test, usually you mean you have something like a physical product or a software product, and you test whether it does what it should do, after the fact. But with a specification it's a little different. A specification you usually write before writing your implementation, before building the real product, or before constructing a real building, even. You do this beforehand. So I think it's healthy to think not about writing tests but about writing specifications, and that way you can integrate the whole testing process, or start the whole testing process, not only when you hand over a ticket to a developer, but even during the ticket-writing phase, or even during the phase where you come up with the idea of how the feature should work. You can start thinking in terms of: how do I specify this? How do I write the specifications in a way that we can check them automatically with tests at the end of the day?

Sean: Yeah, and does that look like specifying what the inputs and outputs will be? Or is it more high-level, describing from the user perspective, or the user story, what the requirements are?

Markus Oberlehner: Exactly. It's more high-level. It's more like, as you said, how does the user interact with the application, or what does the user expect from our application? What do they want to achieve by using our application, and what do they expect from a particular feature? And as you said, you also start thinking about it when you come up with the user story. As I said before, when writing the user story and when thinking about how to build a feature, think about how the user will interact with the application or a particular feature, and what they would expect when they complete a particular task.
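One lightweight way to capture such specifications in code before any implementation exists is Vitest's (and Jest's) `it.todo`; this is a hedged illustration of the idea, not something prescribed in the talk, and the wording of each entry, not the API, is the point:

```ts
// shoppingList.spec.ts: the specification, sketched while the user story
// is being written. Each entry is a sentence a non-programmer could sign
// off on; it becomes a real test as the feature is implemented.
import { describe, it } from "vitest";

describe("the shopping list", () => {
  it.todo("lets a shopper add an item they need to buy");
  it.todo("shows every item the shopper has added so far");
  it.todo("lets the shopper tick off an item once it is bought");
});
```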
Sean: Awesome. And then for our listeners, if they were to take a couple of things away about how to go about starting to implement some of these best practices, or introduce them to their company, how would you recommend getting started?

Markus Oberlehner: That's a great question, because I'm exactly at that point with a couple of teams in our company, and I'm thinking a lot about it. I think the first step is to recognize that this is a skill you have to learn. You can't expect, as I already said at the beginning, to be good at it right away, because that's just not how things work. If we do something new, we always have to do it for a while to actually get better at it. So don't be frustrated if at the beginning it's probably not that easy to get started, because you'll most likely write a couple of bad tests and only realize it afterwards. But I think there are a lot of great resources out there that can help you get started with it. And the second thing is to invest some time to think about how you want to test your application, because there are many ways to do it. What the best way of testing is depends very much on your stack. So I really can't say do it this way and do it exactly that way, because it really depends on your infrastructure and stuff like that. So really look at the application you want to build, or look at your existing application, and think about: "How can I ensure that whenever I run my automated tests and they succeed, I'm confident enough to hit the deploy button and deploy it live? How can I achieve this goal?" This, I think, is the most important question you have to ask yourself when getting started, because how you get there is not that important; the goal is to get there somehow. And there are a couple of ways I described today, and in my talk, that can help you get to this goal. But like I said, it depends on your application and your skills and your needs.

Sean: Yeah, that confidence in deploys when the tests pass is, I think, a really great North Star to have. And we really appreciate the insights you've brought today. I think testing, especially on the front end, is tricky. So thanks for joining us. And before we wrap, is there anything you want to plug? Maybe anything you've written online about testing, or just other general resources you think our listeners would find helpful?

Markus Oberlehner: Yeah, sure. Currently I'm in the process of writing a book, and I release every chapter as I write it, in the form of a newsletter. You can find it by going to goodvuetests.com, all in one word. For now, it's mostly targeted at Vue developers, so all the code examples are for Vue components and stuff like that. But I think it's also useful for all kinds of front-end developers, because the basic principles when it comes to testing are not that different, actually. So I'm also thinking about maybe renaming it and making it generic for every framework, or releasing multiple versions for React and Vue. But especially if you're a Vue developer, I think you will be very happy with it. And if you are working with React or Angular, I think that testing knowledge also works for you, if you are able to translate the code examples to whatever framework you use.

Sean: Yeah, that's awesome. I'm looking forward to that. Thanks for joining us today.

Markus Oberlehner: Thanks for having me. It was a pleasure.