[music] Welcome to Test and Code, a podcast about software development and software testing.

So I got a question recently. I don't remember where I got the question from, but essentially it's: I've got manual tests and I want to convert them to automated tests, but when I look up all the information about testing and automated tests, it says stuff like you should have one assert per test, and how does that jibe with converting manual tests to automated tests? Well, of course, what follows will be my opinions based on my experience.

I want to describe the process I like to use for converting manual to automated tests, but before I start, I want to break the question up a bit. Manual tests are inherently system tests run through the user interface. Automated tests can be at many different levels. However, if you are converting a manual test, it should be converted to either an automated user interface test, like through Selenium or something, or converted to an API test, or a subcutaneous test.

The notion that tests should have one assert and one assert only per test is, I believe, misguided. It comes from a lot of mockist test-driven development tutorials. I do think it's a good goal to have each test check one thing, but that check might involve more than one assert. The best way to achieve checking one thing is to try to adhere to a given-when-then strategy, where the asserts go at the end of the test: I want to set something up, do some action, and check to see if it worked OK. (There's a small pytest sketch of that shape below.) However, don't be dogmatic about it. An automated procedure is frequently better than a manual procedure.

[1:50] What are manual tests? They're a mixture of things. There are often user interface components that need to be checked, and those are difficult to check in an automated way. There's also new functionality that we just haven't had a chance to automate yet. Manual tests are set up for people to run through, not computers. With manual tests, we don't want a tester to get bored, and time is valuable, so we often set up tests and checklists as a workflow, with items to check along the way. It's quicker and less tedious, though not part of continuous integration, of course.

[2:27] The problems with manual tests:

* They're hard to repeat: if you give the same checklist to two different people, you get two different results.
* They're slow: they don't provide quick feedback to the development process.
* Sometimes they're left to be done at the end of the development process.
* Since they don't line up directly with code changes, it's not easy to track problems back to code changes.

[2:51] Reasons to convert to automated tests include: to fix all of those problems, to have things take less time, and to run tests earlier and more frequently. I think it's always fine to have some manual tests, especially before fixed release checkpoints, but keeping those focused and short makes it so that you can run them more frequently.

[3:14] So I have this procedure that I wrote down for what I think is a good way to convert manual tests to automated tests. First, read through the manual procedure once completely. Then, read through it and actually do it. Without understanding what's going on in there, it's hard to automate it, of course. Now I want you to go through and read the whole thing again; this is the third time you've read it. Now you're going to mark up different sections as things that should be automated, and things that should be left manual, and why.
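Here's the small sketch I mentioned of the given-when-then shape, as a pytest test. To be clear, this is just an illustration; the `Cart` class is a made-up stand-in for whatever your system under test actually is:

```python
# test_cart.py : a given-when-then shaped test, with the asserts at the end.
# Cart is a hypothetical stand-in for the system under test.

class Cart:
    """Tiny fake so the sketch is runnable on its own."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_added_item_shows_up_in_total():
    # given: an empty cart
    cart = Cart()

    # when: we add one item
    cart.add("book", 25)

    # then: two asserts, but together they check one thing,
    # that the cart reflects the item we just added
    assert len(cart.items) == 1
    assert cart.total() == 25
```

Notice there are two asserts, but the test is still checking one thing; that's the difference between "one assert per test" and "one thing per test."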
Now, for the things that should be automated, it's really important to go through and try to understand what it is that you are trying to test. What is the underlying reason for this being in there? That's going to help you with writing your tests. Now, some are going to be easier to convert to given-when-then, or arrange-act-assert, the general flow of a good focused test, and others are going to be longish workflows that are kind of hard to break up. Just make a note of which ones you want to do.

This might be a good time to start writing up a small document for each of the things that should be automated, with approximate pseudocode for how it's going to be checked. If you're writing stuff down, this is a great time to rewrite the manual procedures, the ones that are going to be left manual, and write up little procedures for the things that should be automated and the general flow of each test. It's a good idea to write these down because if it's a project of any reasonable size with multiple people, this is a great time to code review that pseudocode. Check it in, in some form, maybe as the docstring at the top of a test file, or even just a text file; it doesn't matter, just get more people looking at it. Or email people and say, "Hey, does this look reasonable?"

Now we want to start writing those tests. Just go through, keep track of which ones you've converted and which ones you haven't, and convert them to tests and get them to run. Hopefully, your automated tests are catching the same kinds of things that your manual procedure would. I always find, when we're doing this, that computers are more picky than people are, and more tests will fail during an automated sweep than when the same thing is run as a manual procedure. Sometimes it's because the test is actually too picky, and we need to talk to the designer, the architect, or somebody to figure out whether that's a real failure or not. Maybe we have a requirement that we don't quite understand.

[5:49] The procedure I've described takes a bit of time, so here's an alternate one: do the first two readings just the same, and mark things up. Now, for all the things that should be automated, don't worry about trying to break them up into focused tests; just write them as big giant automated flow things. (There's a sketch of what that might look like below.) The downside is that it's going to be hard to maintain, so you don't want to leave it like that for a long time. It's not going to be granular, so if a test fails, it's going to be "hey, something failed, but I'm not quite sure what." It'll be harder to debug for that reason, things will fail more, and it'll be what people describe as brittle tests. But if it's faster to do this, I think go ahead, go for it. You'll quickly find out which areas are easy to automate and which are hard, and you might say, "Hey, this stuff that I thought I was going to automate is too hard; let's leave it as manual for right now." If you can get that done faster, then you've got the time you would have spent running those procedures, and you can use it to break the automation down into smaller, focused tests. The alternate procedure is basically: just write big giant tests first, and then break them down later. It'll give you feedback, but if it's hard to get running correctly, then maybe it's time to focus those tests a little bit more.

[7:08] There are some critical features of your system that may require both automated and manual tests. I think that's completely reasonable.
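To make that alternate procedure concrete, here's a rough sketch of a first-pass, big-flow test. Everything in it is hypothetical; the `App` and `Report` classes are throwaway fakes so the sketch runs on its own. The point is the shape: the checklist pseudocode lives in the docstring at the top of the test file, the steps run in order, and checkpoints are asserted along the way:

```python
"""Converted from a (hypothetical) manual report checklist:

    log in -> create a report -> add a section -> export to PDF
    check: report starts as a draft; exported PDF has pages

First pass: one big flow test. Break into focused tests later.
"""

class Report:
    """Throwaway fake standing in for the real report object."""
    def __init__(self):
        self.status = "draft"
        self.sections = []

    def add_section(self, name):
        self.sections.append(name)

    def export_pdf(self):
        return len(self.sections)  # fake: one page per section


class App:
    """Throwaway fake standing in for the real application."""
    def login(self, user, password):
        self.user = user

    def create_report(self, kind):
        return Report()


def test_report_workflow_first_pass():
    # One long flow straight from the manual checklist: quick to write,
    # but when it fails you only know "something in this flow broke."
    app = App()
    app.login("tester", "secret")           # checklist step 1
    report = app.create_report("monthly")   # step 2
    assert report.status == "draft"         # mid-flow checkpoint

    report.add_section("summary")           # step 3
    pages = report.export_pdf()             # step 4
    assert pages > 0                        # end-of-flow checkpoint
```

When you come back to break this down, each checkpoint is a natural seam: the draft-status check and the PDF-export check can each become their own focused given-when-then test.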
Manual tests should be short enough that they can be done regularly during development, not just once at the end, so keep that in mind for the ones that remain. Keep in mind the costs versus benefits of automated tests and manual tests. Writing tests is writing software. Software has to be maintained, and it has costs and benefits.

[7:38] If a test fails, what does that mean? Hopefully, it gives you an idea of what's wrong with your software. Software tests are supposed to answer the question: is my software running correctly? Now, just the answer "no, it's not running correctly" isn't very helpful. I'm going to use a car analogy. There are a bunch of little lights and things on my dashboard. A little light that shows me I'm almost out of gas tells me exactly what's wrong: I'm almost out of gas. The check engine light is not so good. It's got a lot of checks in it; it's doing too much, I think. I don't know how you would make that simpler, but it doesn't tell me what's wrong. All I know is that the gas is probably fine and there's something wrong with the engine, and I need to take it to a mechanic because I don't have one of those doohickeys that attaches to the onboard computer. Anyway, those are the kinds of things to be careful of. You want to write focused tests that tell you what's wrong with your software, not just that there's something wrong with your software.

[8:37] That's my two cents on converting manual to automated tests. Let me know what you think. Show notes can be found at pythontesting.net/22. I'm doing transcripts as I have the money, when I get enough from book sales and Patreon supporters to get them converted. So please consider becoming a Patreon supporter; even a buck can help to get more episodes out and more transcripts done. A funny story about the transcripts: I'd originally set up the Patreon campaign to give the $2-and-up supporters early access to transcripts. When I got that first transcript done (actually, I haven't posted it on the website yet), I did send it to all the Patreon supporters. I wanted to send it to everybody because I'm really grateful to everybody, so I checked with all the two-dollar-and-up supporters and asked, are you OK if I just send it to everybody? And every single one of them said, "I didn't sign up for the two-dollar level to get the benefits; I signed up because I want you to do more shows." Seriously, it's difficult for me to relay this without being overwhelmed by the support. I know it's not a ton of people, and a lot of people are making more money than I am at podcasting, but having people that just really want to give me a couple of bucks every time I do an episode, it's so cool! If you want more information, you can go to pythontesting.net/support, or go directly to patreon.com/testpodcast.

[10:12] Also, I want to put in another plug for the Slack channel. A couple of days ago, we had a question come in about how to work with the Selenium page object model in pytest. I had no idea, but there were other folks on the channel that had some information, and that's pretty cool. It's awesome, actually; that's what I put it there for, so that we can have a little community that helps each other. If you want to be part of that, go to pythontesting.net/slack to get an invite. On Twitter, I'm @brianokken and the show is @testpodcast. Anyway, thanks for listening, I'll talk to you later.