Emily: Welcome back to PodRocket. I am Emily, producer for PodRocket, a web development podcast from LogRocket, and today we're answering your questions about testing. A lot of you, over the past couple of months, have written in to us with frustrations and questions about testing specifically. So, we're taking the time today to talk to our panel of experts to answer your questions. But before we get into it, let's welcome the panel. First, we have Debbie O'Brien, senior program manager at Microsoft for Playwright. Welcome back, Debbie. Can you give us just a brief overview about what you do and what your work is like? Debbie: Hi. Yeah, thanks for having me. So, basically, my job is to make the Playwright community even bigger than it is and make Playwright known. So, I'm doing everything that you can absolutely imagine that involves a community, from conference talks to creating content to our Discord channel, and just getting as many people out there in the world involved in this great community. Emily: Next, we have Kent C. Dodds, full-time software educator through his sites Epic Web, Epic React, and Testing JavaScript. Welcome back, as well, Kent. Can you also just give us a little brief overview about what you do? Kent C. Dodds: So, I teach people how to build excellent software, and hopefully, we all work together to make the world better by building excellent software. And testing is definitely one of the things that I have opinions on and teach people those opinions on testing JavaScript. So, happy to be here. Emily: Absolutely. Happy to have you. And then, finally, rounding out our experts, we have Gleb Bahmutov. Gleb is a senior director of engineering at Mercari. Welcome back, Gleb. And can you also tell us what you do and what your work is like? Gleb Bahmutov: Hi everyone. Thank you for inviting me. For the last four years I was at Cypress building the Test Runner.
And I left Cypress, and two years ago I joined a company, Mercari US, which is an online marketplace. I'm responsible for building automation testing, and daily we run more than 700 full end-to-end tests. In my free time, I try not to read YouTube comments because, every time I read something about testing, I'm like, "Ah, I have to write another blog post to explain it. Or record a YouTube video and post it, or something, because I cannot let a testing question be left unanswered." Emily: Well, very glad you're here because we have so many questions to answer. So, thank you all for joining us. And finally, we have Sean Rayment, one of our PodRocket hosts and software engineer at LogRocket. Welcome back, as always, Sean. Sean Rayment: Hello, everyone. Yeah, thanks for letting me crash the party today. Emily: Welcome, and now we can get into our questions. Everyone has been sending in questions over the past couple of months specifically about testing. And I'll start off with one of our listeners, Poolkit. He wrote in to us saying, "I recently started testing using Jest and React Testing Library. The major issues I face are testing legacy code and the basic principle of what and how to test. Sometimes, it feels like we are just mocking everything and not really testing the actual code." So, with this, I would start very basic. What should devs be focusing on when testing? And what are some basic testing actions devs can take? Kent C. Dodds: I don't mind jumping in first here. I forgot to mention I did create React Testing Library, and so I do have opinions about this. One of the things about Testing Library, one of the original motivations for creating it, was that the de facto standard for testing, at least for React, was a tool called Enzyme, which supported a pattern called shallow rendering. It mocked every single component that the component under test rendered, which is such a bad idea in so many ways. And I've got a blog post about this on kentcdodds.com.
So, I, first, just want to clarify that there's nothing about React Testing Library that suggests that you should be mocking everything, and, in fact, it's very much the opposite. I am of the opinion that you want to mock as little as possible. And the primary things that I typically do mock with the tests that I'm using Testing Library for are HTTP requests and, sometimes, animations, timers, that sort of thing. Occasionally, if you have a React component that is just very slow, like a data grid or something like that, provided that itself is well tested, I might mock that, as well, but only if it is showing some performance issues with my tests. And I have a pretty high threshold on that, as well. So, I'd much rather have my test take a little bit more time and get a lot more confidence than start mocking everything and speed up my tests. I can empathize with the feeling of like, "Oh, I just feel like we're testing for test's sake." If that is what is happening, then things need to change. Emily: So, how can developers get away from testing for test's sake? Gleb Bahmutov: You have to find something that developers hate even more than testing. In my case, it's updating Jira stories, Google Docs about the technical scope of a project, the current status updates. So, to me, if you can show your tests with screenshots or videos of your test runs, running all the time, then this is the state of your application, the state of your feature. So, if you switch and say, okay, instead of updating all these Google Docs about where we are and constantly syncing, you write a test that goes and shows the feature in action, that's the state of the project. Then, you don't have to do all these manual updates and documentation. That's it. The test actually shows the state of the project and how the feature is supposed to work. So, testing is good because it eliminates all this extra work for you.
So, if you can convince them, I think they'll be more willing to say, "Yeah, yeah, I'll write a test because I don't want to do all this other stuff." Debbie: And you've also got to make sure that writing tests is easy. Because if it's too hard and going to take too long, then they tend to stay away from it. Emily: This kind of goes into my next question a little bit. Another listener, Amin, wrote in and said, "Mocking is the most boring, badly managed aspect in most of my teams, including mocking of the Fetch API, libraries, and imports." So, how can these devs make it less boring or less tedious, like you guys are saying? And how can they better manage this process within teams? We got a lot of questions about team management, so I think this is pretty pertinent. Gleb Bahmutov: Before Kent answers, I've written what I think Kent will say on a piece of paper. Let's see if I'm right. Debbie: Less mocking. I'm going to say less mocking you've written on that paper. Kent C. Dodds: No, no. Avoid testing implementation details. Is that what you wrote? No? I don't know. Now, I'm curious. Gleb Bahmutov: I think you'll suggest a tool or library, right? Kent C. Dodds: Oh, yeah. Yeah, MSW. I was going to say that, as well. Gleb Bahmutov: I know my Kent. I know. Kent C. Dodds: So, I have a blog post titled, Stop Mocking Fetch. If you are mocking Fetch directly, you've got to stop doing that because we've got a really great tool. It is MSW, and we also have Cypress and Playwright represented here, as well. And each of those tools has HTTP mocking capabilities. Mocking Fetch is something that I used to do a long time ago, and it was a huge pain then. It's a huge pain now. We've got way better tools now, so don't bother with that. Mocking modules, we talked about a little bit earlier. The reason that mocking is something that you want to avoid is because it is, basically, poking a hole in reality, and then, eventually, your boat's going to sink.
You've got to patch up all these holes with just a ton of code to make sure that your interfaces are being called correctly and everything, and you end up in a situation where your tests have to be changed whenever you refactor. One of the main benefits of tests is to confirm that you didn't break anything when you refactored. So, if you have to change it when you refactor, then what is the test there for? I have another blog post, called Avoid the Test User, where I talk about the two users that our code is supposed to cater to, and that is the end user, who's going to be clicking the buttons and whatever, and the developer user, who's going to be rendering the component and passing the props or calling the function or whatever. Those are the two users that your test should care about. And there's a third user that slips in sometimes, and that's the test user. And that is what happens when you start mocking out your code, where you have to say, "Well this is going to be called X number of times or this is going to be called with these props or whatever." The only thing in your code base that cares about those implementation details is the tests. And so, by definition, when you change the implementation, you'll have to change your tests, and that is just completely unfun. Your tests should look like, and only do, the sorts of things that your end user and your developer user are going to do. If it's doing anything beyond that, then it's not going to be fun at all. And it's not going to be very helpful. Debbie: Kent has a blog post for everything. Gleb Bahmutov: He's quickly searching right now for a blog post. Kent C. Dodds: I think, actually, I'm pretty sure that Gleb has more blog posts than I do. But I do have a lot of blog posts. Gleb Bahmutov: We're talking about mocking, right? It's like what Kent said before. The problem with Enzyme and testing components was it was doing shallow rendering.
And when you mock components somewhere in your code, you don't allow the actual code to propagate. So, you're doing not shallow, but something in between shallow and full rendering; you're mocking at different levels. And what can we call something between shallow and full-depth testing? It's like a bog, right? A swamp where sometimes you're doing fine, but you get stuck in all these implementation details if you're trying to change something. So, my advice is try to at least mock at the periphery, mocking network requests when they happen and go outside the browser. Then, you mock them and stop. Maybe, mock the file system, but don't mock the code that accesses the file system because those implementation details will change. Mock at the well-defined boundary outside of the code, and then, you can refactor safely and the test will be realistic. Which brings me to another point: stop using Jest to test the browser code. And I know people will go crazy, but Jest actually does a lot of mocking of the browser, of all the DOM APIs and events and everything. And at some point, you'll be testing something that might not necessarily work in a real browser. And right now, we have all the tools moving towards running your tests for browser code in the browser, where Jest creates an obstacle to that, which is almost like you mocking everything. So, maybe consider better tooling rather than Jest and its DOM emulation. Emily: What would be some good tooling? Debbie: Yeah, I mean you've got Playwright, obviously. You've got Cypress. And I would also say writing more end-to-end tests: fewer unit tests, more end-to-end tests, so that you're testing what the user is seeing, what the user is doing, and concentrating more on that. Gleb Bahmutov: Playwright, definitely. Cypress component testing, definitely. Even the Storybook ability to render a full component instead of shallow rendering is really powerful.
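To make the "mock at the periphery" idea concrete, here's a minimal plain-JavaScript sketch (the function and URL are hypothetical, and this hand-rolled stub is only illustrative; MSW, Cypress, and Playwright all intercept at the network boundary with much better ergonomics):

```javascript
// Code under test: talks to the outside world only through fetch.
async function getUserName(id) {
  const res = await fetch(`https://api.example.com/users/${id}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return (await res.json()).name;
}

// In the test, stub the boundary itself (the network), not the module
// that calls it. The code under test runs unmodified and unmocked.
function stubFetch(body) {
  globalThis.fetch = async () =>
    new Response(JSON.stringify(body), {
      status: 200,
      headers: { 'Content-Type': 'application/json' },
    });
}

stubFetch({ id: 42, name: 'Ada' });

// Resolves without touching any real network.
const demo = getUserName(42);
demo.then((name) => console.log(name)); // 'Ada'
```

Because the mock lives entirely outside the application code, `getUserName` can be refactored freely (renamed helpers, different internals) and the test keeps passing as long as the observable behavior is the same.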
Emily: Listener Medi wrote in to say, "I want to ask how to convince my team to implement testing. My company is completely dependent on manual QA," which sounds like a nightmare honestly. How can devs either convince their teams to implement testing requirements, or how can teams begin to slowly adopt a testing approach versus a manual QA approach? Debbie: It's really hard, and I had this problem as a team lead in my previous company, working for an agency, where they want to cut the budget, and the first thing they do is they delete all the testing from the budget and say, "Right now this is it." So, we have to try and get developers to sneak the testing in, so that testing is not seen as an afterthought: "I've built everything and now I'm going to test." Test as you're going along; it's part of the development process. So, make it part of the process: you're building that component and you're testing it. You're not just building the component and testing later. And once it becomes a habit, something that you're normally doing, then you're not asking, "Can we do testing?" Developers really need to push for that because managers are just going to say no to testing, no to accessibility, no to anything that they don't see as important because they're not seeing the value. Kent C. Dodds: I have a blog post about this. It's called, Business and Engineering Alignment. The subtitle is, How to Get Whatever You Want. And the trick is that you might need to change what you want. But in this case, I think there's a pretty good case for testing. The idea is you need to establish what the goal of the organization is, and how the thing that you want is going to help you get to that goal better. In the case of testing, relying so much on manual QA is probably slowing you down a fair bit.
You're shipping more bugs; there are some pretty clear signs. But your job as an engineer trying to convince management to invest in this is to identify very specific things that are happening as a result of your lack of focus on testing. And then, associate that to the goals of the organization and say, "Hey listen, we're trying to ship these widgets, but we keep on shipping bugs instead. And so, we're slowing down on our ability to accomplish our mission of shipping widgets, and we could speed that up a lot by having these tests in place." And so, I suggest you give that blog post a read because it goes a little bit more in detail on how you can have those conversations. Once you've gotten the management buy-in, then getting developer buy-in, like Debbie suggested, is really important. One other thing that Debbie suggested earlier is just making it easy. It is just so much harder for developers to be interested in testing if it's a real big pain. And so, if you're that one engineer who's like, "we've got to get good at testing," then be the one engineer who builds all the utilities that make it really easy to write a test that is authenticated. That's probably one of the hardest things when you're getting started with testing: how do I get my test into an authenticated state, and then make sure I clean up afterward and have all that happen automatically. Just call this little function and poof. Now, I can start doing things like an authenticated user with data seeded into the database, or whatever level you're at. But just make it so that there's a clear path because developers who are not really interested in doing it aren't even going to begrudgingly do it if it's hard. Gleb Bahmutov: Can I add that I don't have a blog post? I don't remember if I have a blog post, but I do have a bunch of presentations about this because this is something I went through when I joined Mercari.
That's specifically why they brought me in: to create automated end-to-end testing. Not unit testing; the web team was really good about unit tests. Kent is absolutely right. You have to align the business goals with your technical goals, but the problem is there are so many subdivisions in the company that the goals might be very different between people. So, you have to identify all the key players, the management, and the management only cares how much it will cost and what it will give us. So, you have to give them a specific case for testing, right? Fewer bugs, less revenue lost, and so on. But for web developers, you have to identify arguments for them. Good tooling will make you more productive. Everyone wants to be more productive. But there is one other area where you have to be very careful, I believe, and it's the existing manual quality assurance people. Because every time you say automation, they hear, "We're going to all be fired." Debbie: That's our job. Gleb Bahmutov: Right? And you want to be very careful about approaching that and saying, "I'm going to have this big initiative here." So, we've done a lot of training. We allow people to dip their toes into automation after training, with the automation group's help. Like Kent said, be the champion of automation so that whenever anyone has a question, you're like, "I'll do it. I'll remove this obstacle because I have the knowledge." And then, the QA team will actually be interested in automating low-hanging fruit because you automate only the boring stuff; exploratory, performance, and accessibility testing are all manual. They require intelligence. And by giving the QA people these skills and removing automation from their plate, you actually let them do more interesting stuff, and then everything is good. Emily: Next question we have is from Glenn, and they write in saying, "What are some tips for staying disciplined on writing tests?
For teams that don't force unit testing on checked in code, it's hard to be a good example and write the extra code. This is coming from someone who refactors a lot and hates sloppy code otherwise." So, what is your advice to this person? Kent C. Dodds: First of all, making it easy to add a test. Because you just finished your feature, you're ready to go, you're going to close your laptop, you're ready to be done, and then you're like, "Oh, yeah. I forgot a test." If it's really easy to add a test, then you'll be like, "Yeah, I'll throw that together really quick and then move on." If it's hard, then you're like, "Nah, I'm going to go enjoy my weekend." And so, I think that's one critical piece. I wouldn't necessarily say that it's important to apply test-driven development. I do have a blog post about when I use test-driven development, and it is really not very often. In my work, it's a lot of exploratory stuff, and we're not sure what the feature is going to be and we want to just play around with different things. You don't want to test drive something like that. But if it's very clear what this bug is, try to reproduce that with a test first. And so, there are definitely situations where test-driven development can help you. I especially love reproducing a bug with a test first. And I do this in small projects, in open source, and in big products, as well. And that's another really good way to manage that. New features are a little bit harder to test drive, but bugs, try to test drive that. Debbie: I think it comes down to training, as well. When you have your team, they probably don't know how to test or don't know what to test or don't feel confident in testing. So, you really have to help them, give them an example, and just be there for them to say, "It doesn't matter if you write a bad test. I'll check it over. I'll look over it." And writing a bad test is a way of learning to then write a better test, right? So, it's not really bad, in a way.
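Kent's "reproduce the bug with a test first" workflow can be sketched in plain JavaScript. The function, the bug, and the expected slug here are all hypothetical, invented for illustration:

```javascript
// Hypothetical bug report: slugify('Café Menu') returns 'cafmenu'
// instead of 'cafe-menu'. Step 1: capture the bug in a failing test.
function slugifyBuggy(title) {
  // the reported behavior: accents and separators both get eaten
  return title.toLowerCase().replace(/[^a-z0-9]+/g, '');
}
// slugifyBuggy('Café Menu') === 'cafmenu', so an assertion expecting
// 'cafe-menu' fails against it -- exactly what we want at this stage.

// Step 2: fix the implementation until that same test goes green.
function slugify(title) {
  return title
    .normalize('NFD')                // split accented letters apart
    .replace(/[\u0300-\u036f]/g, '') // drop the combining accent marks
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')     // collapse separators into hyphens
    .replace(/^-+|-+$/g, '');        // trim leading/trailing hyphens
}

if (slugify('Café Menu') !== 'cafe-menu') throw new Error('still broken');
console.log(slugify('Café Menu')); // 'cafe-menu'
// The assertion stays in the suite, so this bug cannot quietly return.
```

The payoff is the last line: the reproduction becomes a permanent regression test, which is why test-driving bugs is lower effort and higher value than test-driving exploratory features.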
Also, we have great tools out there that generate the tests for you. So, now it's easier for you to actually just write the test and get started. But, yeah, we need more education on testing, in general, I think, in the whole world. Because standing over someone and saying, "I'm not going to merge a PR until that test is written," just doesn't work. Gleb Bahmutov: And to me it all comes down to feedback, right? What's the quality of feedback you get from your tests? You want to use watch mode to get instant feedback as you work, right? When I work on a feature, I want to see it work, and the test is just running next to my code and it tells me right away, "Hey, this is broken. Oh no, this is working." And then, immediately, as soon as I push, I want to get feedback that the things that I changed are actually green, the things that I touched. Maybe, based on the changed files, the related spec files run first, and then I immediately know I did not break something. For example, track code coverage, so I could say, "Yes, whatever I changed in my code, the corresponding things were tested." As soon as you get this high-quality feedback, you'll be like, "So, yeah, I have to update the tests because, pretty much, my happiness level depends on them, my productivity, my pull requests being always green and ready to go." So, the second thing, how not to be sloppy: you have to pull request review your testing code, as well. A lot of people let test code merge by default and only review the production code, and I was like, "No, no, you have to review both together. It's the same quality standard." Emily: We touched briefly on automated testing, and we did get a question specifically about automated testing. Dave asked that they "wanted to find the right balance between automated testing and manual testing, specifically ensuring automated tests are useful rather than just using them as cursory tests.
With that being said, are there ways that devs can make automated tests more effective rather than putting more time and effort into manual testing?" And again, back to the other point, how do we make sure that people don't feel they're losing their jobs because things are automated? Kent C. Dodds: Well, on the first point, if the people who are manual testers want to differentiate themselves, they can learn how to automate their job and be the one who's doing the automating. The fact is that the business will be more successful automating its tests. There's no argument against automation just being better. That said, as far as the balance between manual and automated, sometimes, the thing that you're working on is just not that important to need to have an automated test. We're doing an event this weekend, I need to test that the banner shows up properly, and that's the only time. It's just this weekend. I'm not writing an automated test for that, friends. I'll just check it manually. I get a lot of traffic from my blog and that indirectly goes to my courses and stuff. I'm not making money off of my website. So, yeah, I do have some tests, but actually they fail constantly on CI, and I do not block deploys because of my failing end-to-end tests because it's my blog. Who cares? It's fine. I'll go update them eventually. And they can still be helpful sometimes. So, there is a balance. You just have to decide: how critical is it that this thing not fail? And there are certainly some things that I just wouldn't bother testing. In fact, I see testing as requiring the same currency as all the other development we do, and that currency is our time. And so, if we're spending all of our time testing, we're not shipping features. And so, there is a priority. It's all just writing code. What are you spending time writing code doing? I would suggest if you don't want to lose your job and you're a manual tester, then start learning how to automate some of your work. You'll be in a much better position.
I think that applies to any job in the tech world. Automate what you're doing, make yourself more productive, but then also recognize when automated testing is worthwhile. Debbie: I think it's hard to get the QAs on board because they're the first to say no because they're used to doing it this way, and people don't like change. So, you tell them they've got to change, and everything they've learned now has to go in the bin and they've got to do this instead. And they don't want to, and that's normal. So, show them something and say, "Hey, just give it a try. Give it two weeks and come back. And in two weeks time, tell me what you think." Gleb Bahmutov: Doing a proof of concept, automating some of the boring parts, having QAs modify those tests when needed as maintenance, it eliminates the fear: "Oh, this is a completely different skill set. I don't understand." And then, you figure out it's the same skill set. You see if the application works with different inputs, happy paths, edge cases, handling of errors. Your automation is your servant. Ultimately, it's a servant, and it's a pretty dumb one, right? Because it only knows how to do the things you showed it. We always fight non-deterministic tests when something changes, even the smallest change. And I know Kent will say, "You're not selecting the elements right." But still, the label changes, and all of a sudden everything's broken. And it's like, "Yeah, because your automated tests, despite being fast and never sleeping, they are worse than a one-year-old. They cannot deal with anything new." So, maybe once we get ChatGPT or OpenAI into the testing, it'll be possible to maintain them. But for now, human intelligence is king. Everything else is just a tool. Emily: We have a few more questions. But before we get into those final questions, if you are enjoying this podcast, please follow us on Apple Podcasts. It really helps us tailor the content that you want to hear.
One of the reasons we're doing this mailbag episode right now is because you guys said, "I have no idea what we're doing about testing," and you told us. Please follow us, please let us know what you want to hear, and this is why we're doing our testing mailbag right now. We're going to go through a couple more. We had one from Gabriel, who started working at a new company this year that had no unit tests and decided to start testing using React Testing Library. While they were able to configure everything, they still encounter multiple issues, like cyclic imports that make Jest break but work perfectly fine in dev and prod. So, understanding this, how can devs make testing more streamlined or effective without the fear of breaking their testing tools? Sean Rayment: Perhaps we could rephrase the question into just general tips for incorporating React Testing Library. From our experience at LogRocket, we're starting to move to it, as well. And then, one roadblock that we've encountered is some of our components are really big. So, it's hard to effectively unit test something that is just doing so much. And so, we've found breaking components down to just more bite-sized pieces that are doing one thing has helped. But I'm curious, Kent, if you had any other tips, like how to dip our toes into React Testing Library? Kent C. Dodds: I definitely have tips. Unfortunately, I do have a blog post. I say unfortunately because this blog post shouldn't exist, but it does. It's called, Common Mistakes with Testing Library. And if there are common mistakes, then the library authors should make it so that those mistakes aren't possible. But, sometimes, you just can't do that. And so, there are just a bunch of suggestions of things that you can do to avoid common mistakes. As far as what to do with components that are doing a lot, I actually am okay with components that do a lot. I don't mind having really big components that are covering a lot of ground.
I prefer to have most of my tests render higher up in the tree than test a button component. I don't care about testing the button component. I'll cover that in my test of the components that use the button component. Sometimes, the challenge, though, can be that there's a lot of setup that needs to happen to be able to get to a particular code path. And in those cases, yes, I do see breaking things up. But you want to make sure that you break things up by concern rather than just saying, "Oh, this is a hook. I'm going to go put it in a custom hook." People make these rules about never using a built-in React hook inside of a React component. They always make a custom hook, and that's the worst rule I can imagine. That's such a bad idea. Don't do that. You're totally fine using raw React hooks in your React components. However, I can definitely see there can be multiple concerns in a single component. This is the code that's responsible for keeping the document title up to date. Here's the code that's responsible for keeping us subscribed to the Firebase endpoint, or whatever. There can be multiple concerns in a single component, and breaking those out, I think, makes complete sense. And then, you can test those individually. React Testing Library does have a built-in mechanism for testing hooks, as well, if you want to be that granular. I typically recommend against doing that, though. You want to test the component that's using the hook. But if it's a highly reusable hook, then that makes a lot of sense to test there, as well. So, yeah, other than that, the blog post has a lot of really good tips. One of the big ones is the ESLint plugin that the community has put together. That will help guide you in your use of Testing Library, as well. So, I recommend giving the ESLint plugin a look, as well. Gleb Bahmutov: Don't use just one tool, right? It seems like you concentrate on unit testing. But, like Kent said, bring in a linter, that type of tooling.
Maybe consider using TypeScript. That will catch errors that are super hard to test through just component testing. I'd bring in component testing, or even end-to-end testing, because, a lot of times, setting up, let's say, routing in your unit test is just so much code for zero payoff, or very little payoff. But an end-to-end test that actually visits the page and goes from page to page, exercising the full functionality, actually makes perfect sense, and it's super easy to write. And another thing, it sounds like, in your situation, you have some cryptic errors. So, if you have cryptic errors with any of the testing tools, check if you are on the latest versions. Sometimes, you know, the testing tools get very old and obsolete, and you struggle with problems that might be already solved. You just have to upgrade your Jest version, your Testing Library version, or any other versions, including maybe React itself, that you use. Debbie: I think we grew up with the testing triangle where end-to-end tests were the smallest part of that triangle. But I think that's changing now, and it's more end-to-end tests and less unit tests. Kent C. Dodds: I completely agree. And actually, I came up with a testing trophy as a mechanism for saying, "Hey, no, your integration tests should be the bigger chunk." And honestly, the testing tools have gotten so good that my trophy is a little more top-heavy than it used to be. I've always been of the opinion that, if you only write one test, it should be end-to-end 100%. That is a no-brainer. And when you start testing in a new project, the first test should be an end-to-end test a hundred percent. And then, the second test should be a low level unit test just so that you can get those tools set up, and then you fill in the middle with integration tests.
But in any case, I really prefer those higher level tests because maybe they do require a little bit more setup because you do have to think about, "Well, my database or my HTTP mocking," or something. That's the setup that you just do in one place, and then everything else just references that setup. And so, once you have that set up, then you're good. And maybe the tests don't run quite as fast, but they give you so much more confidence. And so, I'm not against unit testing, especially for complex logic. I prefer pure functions for that. Unit test the heck out of those, that's fine. But most of the tests that I write are integration or end-to-end, these higher level things that step back from the implementation as far as possible. Emily: Somewhat in the same vein, but going off of what you were just talking about, unit tests and integration tests, what are some good tips for actually writing the integration tests after you've already written the unit tests? Kent C. Dodds: One thing that people sometimes are worried about is over-testing, and this totally happens. You write a bunch of unit tests, and then you write an integration test, and it turns out that the integration test does everything that the unit tests do. It gives you all the same confidence that the unit tests did, which is one of the reasons why I typically start with the integration test, and then I go for unit tests when I can see in my coverage that, "Oh, there are a bunch of branches in this code that I'm not covering, and it doesn't really make sense to set up everything just so I can cover those branches in eight tests," or something. So, I'll just test that separately. That's fine. You do want to have the integration test because it checks your integration between the higher level stuff and the lower level stuff. The other thing, and this may or may not be totally related to the question, but it's something else that's on my mind, and a lot of people still don't know this. I don't know why.
But Testing Library is not just for React. There's Vue Testing Library, there's Svelte Testing Library, there's Angular Testing Library, but there's also a Cypress Testing Library. And there used to be a Playwright Testing Library, but Playwright thought it was such a good idea that they built in an even better implementation of these queries. And so, if you're really enjoying React Testing Library, you should give those queries a look in these other tools as well, because it's just as great over there, too. Debbie: And it's a nice, easy way of writing tests, because it's easy to read and it makes sense. Kent C. Dodds: Yeah. And it gives you a lot of confidence, especially if you're using the byRole query. You're getting accessibility confidence, because you're looking up the accessible name and stuff. So, yeah, it improves the accessibility of your app, too. Gleb Bahmutov: The only advice I'll give is make sure you know what the test should expect to see on a page. Very often, you have these if/else conditions in the test because you're not sure what the app will actually show you. For example, if you're testing a todo application, you say, "Visit the page, [inaudible 00:31:40] items, then delete the first one. But if there are no items, create an item, and then delete it." And this is wrong, because you might get into a situation where there is always an item. So, you never actually tested creating an item yourself. You missed the whole path. You have to control the data. So, set up some mocking, set up the data, visit the page, and now I know I will have three items. That's it. If there are no items, or 10 items, then something is wrong and the test fails right away. There should be no if/else conditions in your test that look at the state of the page or application and decide what to do next. "If there is a button, I'll click on it." No. Why don't you know if there is a button? You might be missing something. Okay? So, make sure the tests are completely deterministic.
Always do one thing, one path, and if you need something else, then write a second test that follows another path but knows exactly how to follow just that second path. Emily: Any parting words that you want to send to our listeners about testing? Or any words of wisdom that you want to give before we head out? Debbie: I would say don't be afraid to give testing a go. If you've never tested before, use a tool, such as Playwright, that is very easy to get started with. And just test one thing, and then come out and say, "Well, I've tested that," and then move on. Don't take on the biggest challenge. Just take a small thing, start with that, and just start getting testing happening. Gleb Bahmutov: I'll say investing just a little bit of time to train with a professional or a free course will pay off 10 times, maybe a hundred times. The best investment you can make is to train yourself on how to use modern testing tools, and then apply that knowledge. Because I see a lot of people just struggling with the basics, because we never took the time to really master them and then apply them. Instead, we're trying to improvise, and that's just a source of problems. Debbie: And read Kent's blog posts. Kent C. Dodds: I just shared a link that, hopefully, we can get in the notes, that searches all of my website's content on testing. That's not only blog posts, but also podcast episodes, and talks, and things. And one thing in there that, I think, is a really simple piece of advice, but we overlook it a lot, is make sure that your test can fail. So, you write the test, it's green. Go to the code that you're trying to test and break it intentionally, and make sure that your test fails, because you may not actually be testing what you think you are. So, that's a really simple piece of advice that I think would be good. And then, for anybody who's listening who is uncertain about testing and just barely getting into this, we welcome you.
And I have a blog post called Demystifying Testing that will give you a really nice introduction to what automated testing is. And then, that's a really nice lead-in to testingjavascript.com, where I can teach you everything that I know about testing. So, give that a look, too. And get your whole team involved. If you're a manual QA team that wants to start getting into automated testing, testingjavascript.com will get you going, and I think you'd really like that. Emily: In that same vein, Debbie or Gleb, is there anything you want to promote before we close out so people can find you? Gleb Bahmutov: Oh, you caught me on the spot, right? Really, no. You can find me at gleb.dev. That's my site. It links to everything I've done from there. Debbie: You can find me at debbie.codes, and my site is open source. It's all built in Nuxt 3, and I have tests on there, as well. So, clone it, play around with it, and do whatever you want with it. Emily: Awesome. Well, thank you all for joining us today. We'll also have all of your socials in the show notes so people can follow you on Twitter, GitHub, wherever. Thank you again for joining us. And if you're listening and you have a specific question about web development, it doesn't have to just be testing. We're going to be putting together a UI/UX mailbag episode. So, if you have any questions about that, you can message me on Twitter @emilykketner. We'll also put that in the show notes, and we might feature your question in a future episode. So, thank you all for joining us today, and have a great day. Gleb Bahmutov: Thank you. See ya. Debbie: Thank you. Gleb Bahmutov: Bye-bye. Debbie: Bye.