Test & Code | Episode 41: Testing in DevOps and Agile, with Anthony Shaw

Brian Okken: Welcome to Test and Code. Anthony Shaw is the group director of innovation and talent development at Dimension Data. On today's show we talk with Anthony about how to deal with testing in a DevOps situation, and also about testing in Agile teams. We also talk about his experience implementing a change to Pytest recently. Before we get started, I'd like to thank everybody that supports the show through Patreon. I also want to thank the people that are continuing to buy the book "Python Testing with pytest," which also supports my continued efforts in this podcast. I really love it when I see reviews, and I want to pull out an excerpt from one of them. This is from Patrick Kennedy, and it was just put up in March. Cool. Here's an excerpt: "I learned so much about using Pytest for testing Python projects from this book. I had previous experience using the built-in unittest module in Python, but I had not used Pytest prior to reading this book. After hearing lots of great things about Pytest, I wanted to learn about this testing framework, and this book is exactly what I was hoping for. So I've been taking this knowledge and applying it to testing a Flask web application. Excellent book, and I highly recommend it to anyone wanting to learn about Pytest; it's my favorite testing framework." Thank you, Patrick. Now, on with the show. Thanks.

Anthony Shaw: Hey Brian.

Brian Okken: Hey. I know who you are, but introduce yourself anyway.

Anthony Shaw: Yes, sure. I am Anthony Shaw, I'm based in Sydney, in Australia. I work for a company called Dimension Data; I've been here for the last 6 years. At the moment I am responsible for developing the skills of all of our employees, of which we have quite a few: about 30,000 spread across 50 countries. But I've only been doing this for the last year and a half. Before that I was managing software teams in our R&D division, and in previous roles I've kind of moved through technical roles, development roles and also product management. I am also a bit of a Python enthusiast. I am doing a lot of research on Python 3.7 at the moment, and I am also working on a new Pluralsight course on Python 3.7 and its impacts.

Brian Okken: Well, I am looking forward to that. You just published a Python 2 to 3 course recently, right?

Anthony Shaw: Somebody had already done one, not on Pluralsight, but I think it was on Udemy. But it focused a lot on the automated tools, whereas my course gives you a broad perspective that explains how to use the automated tools as well as what's changed, what the approaches are, and how to plan; it's really designed for large applications. So if you've got a big app and you want to move it to Python 3, it steps you through the approaches you might take, how to break it down, how to use automated testing, and a whole gamut of different ways to migrate to Python 3.

Brian Okken: That sounds great. So you've been with Dimension Data for a while then?

Anthony Shaw: Yeah, but I've done a few different roles. That kind of keeps things interesting, and the role I am doing at the moment I absolutely love. I get to spend a lot of time talking to people about their skills and their careers, and we've also had a big focus on Python over the last year and a half.
We're trying to teach as many people as possible Python. Just within the company, around 3,000 of our employees have decided over the last year and a half to learn Python from scratch. And there's a few reasons for that: we're a technology company, we integrate different technology solutions together, some of it is hardware, some of it is cloud based. And coding is becoming more and more a part of sysadmin life, as well as the lives of people who do integration and people who do deployments. Being able to use Python is not just the job of a developer anymore; it's almost a skill that everybody needs to have, just as much as people know how to use Microsoft Excel, for good or bad. Sometimes that gets used as a programming tool, but that's a separate issue.

Brian Okken: Were you a part of the push for Python, or was that already a part of the company when you came in?

Anthony Shaw: That's something that kicked off about a year and a half ago. I guess I proved that it was an investment we should make, and then drove the program with some people on my team, and we've been pushing that out. I think there's about 48 countries that have been on board with that now, so yeah, it really covers a lot of the world, and it's great to see as well when I travel, people learning Python in all these different offices. We've got hundreds of offices all over the world, so yes, it's great to see people picking up and learning code. It's not just engineers doing it; there are also people in finance and operations, and some people in HR, and our global CEO said he wants to learn Python now as well, because he's heard so much about it. It's been good.

Brian Okken: So you're teaching people how to test their code as well? [laughing]

Anthony Shaw: That's interesting. If people are starting from scratch, then I'd say as a beginner, you do learn a lot about exploratory testing, because you're not really sure how your application should behave or respond. It's something that you kind of build through a lot of the courses when you're learning to code, this instinct to do exploratory testing. And yeah, something I thought maybe we could talk about today, which I've blogged about in the past and am quite passionate about, is the impact that DevOps, or Agile, has had on software teams. I don't see that the testing discipline has necessarily kept up as quickly as some of the other disciplines, so that's something maybe we can have a chat about, because I am sure you've got some opinions as well.

Brian Okken: Yes. Before we get too into it, just to define a couple of things, because I know people come from all different backgrounds, and buzzwords might mean different things to different people. A couple of things you brought up are DevOps and Agile. Can we take a moment to define those, or at least give a quick elevator pitch of what they are?

Anthony Shaw: Yeah, sure. For both of these, every definition I've ever heard can be criticized as being wrong, which means by default either everyone is wrong or everyone is right, so I'm going to go with my understanding.

Brian Okken: So for you and your company.

Anthony Shaw: So the goal of Agile is to deliver value in small increments; that's one of the goals. Typically, for a software project, what that looks like is: a client would come to you with a requirement, and you would go and detail all of the requirements up front.
You would do all of the work, you would do all of the testing, and then you would hand something over at the end. That's what was typically done in a Waterfall style, what you would learn through project management as a Waterfall project or a PRINCE2 project. And one of the ideas with Agile is that requirements change all the time, and situations change and teams change too, so to keep up with all this change, if you break down what you deliver into smaller pieces, then the outcome is actually better. You can deliver value quicker and you can adapt to change a lot faster. So that's broadly the principle of Agile. And then there are different ways of doing Agile, different practices like Scrum, for example, or SAFe, which is what we use in our organization.

Brian Okken: Really?

Anthony Shaw: Yeah. We have what we call PI planning, which is the SAFe planning exercise. We have about 300 people fly to South Africa to attend a two-day PI planning session. So it's a different level of scale than I have ever seen before. But yeah, we do use SAFe. It's really about how you go from one Agile team to having multiple teams, and then how you have multiple programs, all of which consist of multiple teams, all using Agile. What we found is that Agile is great at the individual team level, but when you've got, let's say, 30 different teams all using Agile, coordinating across that is hugely complicated, and that's really where SAFe bridges some of the gaps.

Brian Okken: So you've got like 30 different teams possibly working on the same project?

Anthony Shaw: Not necessarily on the same project, but on different aspects of a project, yes.

Brian Okken: Oh wow, that's a lot of people. Okay, I get why that's a different complication than I normally think about. I like the definition of Agile as even just delivering value in smaller increments, and then learning from it, doing a lot of learning. So, DevOps: what does DevOps mean to your company?

Anthony Shaw: I see DevOps as really bridging the gap between delivering an application and having a customer actually using the application, which are normally two different processes. What tended to happen before is that the software team would build an application, and whether it was delivered as a service or as a desktop application, they would say, "Okay, we're done, here's our release, here is our binary, here is our pip package," for example. Now, actually getting from there into the hands of a customer is often a different process. There's testing involved, and operations gets involved if, for example, you need to deploy that to a server, and that used to be a separate team. And because there were different processes, those teams typically wouldn't align very well. So here's what would happen, especially when Agile comes around: with Agile, if you're working in Scrum and you work to, let's say, a two-week sprint, at the end of every sprint the team delivers a new release. That's fine, but if the customer doesn't actually get the new piece of software every two weeks, then you haven't actually finished; it's got to get into the hands of the client, otherwise it doesn't really count. So if operations is running in the old style, which is, "Okay, we get all of our requirements up front at once, and then we follow them through, and then we deliver something at the end," then the two processes get out of step.
So what's happened with DevOps is that development, which is the Dev, and operations, which is the Ops, are basically trying to work more closely together in a new kind of function where automation is at the core of it. So yes, there are lots of tools in DevOps, and they're aimed at keeping up with the rapid delivery of software components from an Agile process and getting them out into the hands of a customer. And there are lots of other impacts to that as well.

Brian Okken: Are there DevOps people, or are there developers and operations people that work together?

Anthony Shaw: That depends on the company.

Brian Okken: Okay. And we're talking about web deployments, deploying web applications?

Anthony Shaw: Yeah, typically it is, because if you've got an operations team, that implies that there's something that you need to operate, some sort of service. If you're doing standalone desktop applications, I wouldn't imagine you would need a large DevOps team, because you're not running hundreds of servers. But if you're running something as a service, which more and more companies are doing, then you need to operate all those environments.

Brian Okken: Thanks. Okay, so, testing in all of that. What we were trying to get back to is the role of testing in all of this, and maybe it's broken, maybe it's not.

Anthony Shaw: I don't think it's necessarily broken, but I think it needs to change.

Brian Okken: Isn't that part of the continuous integration, continuous deployment thing, that we have a whole bunch of automated tests that just give a gold star to our software so we can deploy it?

Anthony Shaw: Yes. So if you think about the whole process, let's try to think of a fictional application here, because obviously each of your listeners is probably working in a slightly different environment with different requirements. If you're delivering a simple web application that has a database, which is probably the simplest example I can think of, then the dev team would work on a feature, and they would say, "Okay, the customer has just asked us for the ability to export the list of users as an Excel spreadsheet." That's an example of a feature they want to add. The team puts that into a sprint, they work on it, they deliver it over two weeks, and now it needs to get from where it is today into the production environment so that the customers can start using it. So, what is the role of the testing team, or the tester on the Agile team? Because in Agile you typically have one or many testers in the team, rather than as a separate team, which keeps them in sync with the sprint. So what's the role of a tester in that process? Do they just make sure that the application has been tested, or do they actually make sure that the application gets deployed into production successfully? And I think this has kind of been one of the challenges that I've seen, where we've gone from a lot of manual testing, let's say 7 or 8 years ago, when automated testing was a lot harder than it is today. We had a lot of people doing manual testing.

Brian Okken: Let's just pause for a second. So manual testing is still possible in a deployment situation: what I am guessing is typical is that software is delivered to something like a staging server, and then the testing team tests off of that until they give it the gold star, and then it can be pushed to the real server, right?

Anthony Shaw: Yeah.

Brian Okken: Okay, but we want to automate that so that we don't have to wait so long.
Anthony Shaw: Yeah, because it creates blockers. Think about all the different types of tests; you've talked about this on your show before, and whether or not you like the testing pyramid, there is a whole range of different types of tests that you would need for an application. And the challenge in Agile is that you have a user story, which in this case is the export-to-Excel feature. Now, think about all the tests that have to be changed for that user story to be finished. Let's say you've got load testing, you've got some security testing, you've got some UI testing, you might have accessibility testing as an example, and as a tester you've built up all of those tests over the last couple of years. Now someone's just added a new feature, so how many of them are impacted by the new feature, and have you updated every single one of those tests for this new piece of functionality before the sprint or the story is actually marked as finished? Because what I've found is that the way Agile is designed, an individual person can work on a story and see it through to its completion, and then the testing team, at the end of the sprint or during the sprint, tends to test the code. Now, what they tend to test is what's in their comfort zone, which is: we've got lots of automated integration tests, or maybe we've got some automated performance tests, we check all of the unit tests in the CI/CD process, and we know that we can deliver that component now. But when it actually comes to deploying things into production, if the application changes every two weeks, then the actual deployment process also changes. So what has ended up being a blocker in many teams is the actual automation not really working properly. You've just added this new functionality, you've just added another database, and now the DevOps process, all the scripts that run, all the deployments, all the automation, suddenly start failing because something has changed. And this is where I see the biggest gap: it's not really anyone's responsibility to test that stuff. Some teams say they've got DevOps people, but they typically come from a systems background; they don't come from a testing background, they don't have that testing discipline. And some of the code I've seen in scripts for deployments and automation, in open source projects or inside organizations, is some of the worst code I have seen; it breaks like every software principle you can think of. There is stuff hardcoded, there is stuff copied and pasted. But it's seen as non-critical code. Because it's just a script that automates this thing, it's not seen as part of the core application, but actually it has just as big an impact if it doesn't work properly, because you can't get stuff out. You just end up stuck.

Brian Okken: So you're talking about not just testing your application, where there are possibly problems, but also problems with testing the DevOps scripts.

Anthony Shaw: Yeah, definitely.

Brian Okken: Those are hard to test, right?

Anthony Shaw: They are really hard to test, yeah. And there isn't really any tooling for testing them.

Brian Okken: Can you manually test them?

Anthony Shaw: It's frustratingly slow. Some of the listeners, for example, may have used Travis CI, which is a service that's connected to GitHub; whenever you commit changes to a branch on GitHub, it will go and run your tests in different Python environments.
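(For listeners who haven't used it: a minimal Travis CI configuration for a Python project might look something like the sketch below. It's illustrative, not from the episode, and it assumes a pytest-based project.)

```yaml
# .travis.yml: a minimal, illustrative configuration that runs
# the test suite on two Python versions for every push
language: python
python:
  - "2.7"
  - "3.6"
install:
  - pip install -r requirements.txt
script:
  - pytest
```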
Now, if you have ever tried to change the configuration of Travis CI, it's a really frustrating process, because you make one change to the YAML file, you check it in, and you wait 15 or 20 minutes to see if it works. No, that wasn't the right syntax; you go back, you change it again. It can be really frustrating because it takes so long to see the impact of a change. And that's just Travis; for a big project, issues with the automation scripts that deploy the application can have a knock-on effect of hours, or days in some cases, because the deployment can become so complicated.

Brian Okken: Why would it take days? Are there really that many steps?

Anthony Shaw: It depends on what you've changed. Let's say, you know, Microsoft have their Office 365 service, and they claim that they do releases to Office 365 200 times a day or something. So there's Microsoft's email as a service, Microsoft's instant messaging as a service; all of their office productivity applications are deployed in the cloud as a software service you can subscribe to, and lots of companies do use Office 365. Now, Microsoft says that they do 200 deployments a day or something ridiculous to the cloud environment. That's when things are not changing that much. But if they go and, I don't know, let's say launch a new data center in the middle of Ohio, that's not going to take two minutes to deploy for the first time; that's probably going to take a lot longer. And it's all those other aspects, I guess, that become tricky.

Brian Okken: You're not really asking me though, you're just pointing out that this is a problem?

Anthony Shaw: Yeah, I'm not asking you this; I don't think I can see an answer. I've read a couple of books on Agile, and when we initially started rolling out Agile, I read a few books on Agile testing and how you should structure a testing team, or testers, within an Agile environment. And there were different schools of thought on how this should be done. One was that you have a sprint, the outcome of the sprint almost is the release, and then the testing team works basically a sprint behind the development team.

Brian Okken: Doesn't that just sound like Waterfall to you? Or is that just me?

Anthony Shaw: I can hear in the background all these Agile people screaming into the night about how bad that is as a principle. In practice, that's what happens in a lot of teams, because a lot of things come together on the last day of a sprint. In a testing environment, you don't want things to be changing as you're testing them; it makes it a lot harder to test something if people are still working on the code.

Brian Okken: What happened to test first?

Anthony Shaw: Yeah, that's a good point. Well, I mean, you push the testing to the developers: you assume that all developers have a responsibility to test as part of the development of a user story. So in order for a user story to be finished, they need to have developed some level of testing to mark the story as done, and this is sometimes referred to as the definition of done in Agile processes. So as a team you agree that you can't mark something as finished unless you have deployed it to staging, made sure that you've got a certain level and quality of unit tests, and made sure that you have updated the integration testing requirements.
And let's say, for example, you've also run one of the security scans or security checks on it, because that's also a big piece. But that then puts a lot of the responsibility for testing onto the developer. And then, where does the testing discipline come in? Do you teach all of the testing discipline to all of the developers, or do you have people that specialize as testers?

Brian Okken: Both. I think both is the answer. But I'm in the minority here. I'm an insider looking at the rest of my discipline going, "What the hell, everybody." Have you read "The Pragmatic Programmer"?

Anthony Shaw: No, I have not.

Brian Okken: Maybe it doesn't matter if you have or not, but one of the ideas is this: Scrum has a spike or something, right? And a spike is like, I'm not going to write tests for it, I'm just going to play around, and I'm not going to worry about corner cases or a whole bunch of error conditions, I'm just going to see if the technology works all the way from the top to the bottom. And "The Pragmatic Programmer" has an idea called the tracer bullet, which is similar; it's kind of like a happy path test. It's not necessarily a system test or a unit test, but it's the thing I'm responsible for, a happy path case, all the way through, from whatever interface I'm programming to, down to getting it done. And at least one case works. (There's a sketch of what such a test might look like right after this exchange.) So that's the part where I think those types of tests are good for developers to do, because they can use those instead of doing repetitive, simple checks to make sure their stuff works. And then, if I think of it like a tracer bullet or a skeleton, the developers build the scaffolding of the test, or the trunk of the tree, and then a test team can take it further and push out the corner cases, do more analysis for all of the use cases, and make sure that all the corner cases are hit and all the security is done and all that stuff. And I think that totally works in parallel. Because I still like this idea of test first, this idea that testing can actually make developers faster if we get the tests done right away: as soon as the developer says, "Yeah, this bit of the API, I'm pretty sure this is solid," we write a skeleton test through the system to make sure that works. And what I mean by "through the system" is different for every team. Anybody that's heard me talk about different levels of testing knows I'm a proponent of really system-level testing, from the API down to the hardware, for most of the tests. However, this isn't practical for something like a 30-team project, but it is reasonable to say: the software that is in my team's responsibility, that bit I am going to treat as a clump, as a product in itself, and I'm going to test that as a system. And I think that's the right way for developers to test. In your example, you're talking about test teams, and either testers within an Agile team or separate test teams. Now, in an organization that has like 30 different teams, or even a project that has 10 teams working on it, is there one test team for that, or is there a test team aligned with each development team?

Anthony Shaw: Well, we've tried different approaches: we tried having a tester in each team, and we've tried having a testing team separately. There are pros and cons of both approaches, to be honest.
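(To make the tracer-bullet idea concrete, here is a sketch of a single happy-path test for the export-to-Excel story used as the running example earlier. The myapp.exporter module and export_users_to_excel function are hypothetical stand-ins, not from the episode.)

```python
# a happy-path "tracer bullet" test, sketched with pytest;
# myapp.exporter and export_users_to_excel are hypothetical
from myapp.exporter import export_users_to_excel


def test_export_users_happy_path(tmp_path):
    # one straightforward case, end to end through the code we own:
    # given a couple of users, a non-empty .xlsx file should appear
    users = [{"name": "Ada"}, {"name": "Grace"}]
    output = tmp_path / "users.xlsx"

    export_users_to_excel(users, output)

    assert output.exists()
    assert output.stat().st_size > 0
```

(A test team could then grow this skeleton outward: empty user lists, huge lists, unusual characters in names, and so on.)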
One of the challenges with having a tester in each team is that you have one individual who knows that application inside out, and if that person goes on leave, or wants to go on holiday, for example, then you've got a big challenge there. Having the testers together in one team makes it easier to distribute the workload among the different development teams. The issue, though, is that you then don't have testers who are actually working on the same sprint cycles as the development team. So what we kind of figured out is that the best approach is almost having a virtual test team, where the testers sit within the Agile teams and you then have someone who's also a secondary tester for each team. A good team size for Agile could be 8 to 10 people, for example; if teams get too large, the process is not as effective. You can have a thing called a scrum of scrums in the Scrum methodology, where you've got multiple teams working together in the same process. We found that virtual testing teams are effective, where you've got someone who is sort of the main tester for a team, and then someone else who knows and understands the project well enough to help out when needed and when things are busy. So that helps distribute the load a bit. But again, the challenge for me in my old role, managing teams, was trying to recruit people into the team as testers, which is really difficult; these days I do a lot more on the Python side. If you're hiring someone for a QA role, you'll get a lot of people out there on the market who just know how to use the Microsoft test suite. But then you put them in an Agile DevOps environment, and the tooling is just completely foreign. So it was really tricky to actually find people in the market who know how to work in an Agile team and know how to automate a lot of the tests. So if people out there are thinking about learning this stuff, I mean, it really, really helps your career, because those people are like gold dust on the market at the moment.

Brian Okken: Is a tester at the same level, in respect and in pay, as a developer?

Anthony Shaw: That's a really good question. And yes is the answer, when you take into account all the requirements that I just listed. That was definitely one of the things I noticed originally: when we were looking at manual testers, that would definitely not be the case, in the Australian market anyway. I know the UK market a bit as well, but not the US market, though the benchmarks I've seen suggest it's similar. But yes, for an automated tester you would expect a level similar to a developer of similar technical proficiency.

Brian Okken: It is software, so you're writing software to test some other software, so it is a software development role, but it's also understanding how to look at the testing problem, right? Where do you get a qualified testing person? It seems like it's just learned on the job or something?

Anthony Shaw: It's really hard to find as well. Somebody's CV or resume doesn't tell you whether they really buy into the testing discipline. I think this is one of the things that puts developers off wanting to get into testing: the perception that it's a repetitive job. And I think that actually points to some of the stuff we've been talking about, which is, if you're not automating the stuff that you're testing, then yeah, it can be pretty repetitive.
I think about my first job ever, at a hardware company. I was 14 years old, and it was a two-week placement as part of school. I went to a technology company that made these crazy pieces of hardware, and they gave me the kind of testing job for a week that you would give to a 14-year-old kid, which was just to push the buttons in sequence and write down which ones turned the light on and which ones didn't. And that was not really a testing job. Well, I guess it is, but you could get a robot to do that. Now, I think a lot of developers still to this day see testing as being quite a repetitive job, whereas it's really not like that anymore; the discipline has changed. The tools have changed, the approach has changed, and now you're looking at a much broader spectrum of ways to improve the quality of the software that you are delivering. You're thinking about performance and a lot of other things, because you've freed up time by automating all the repetitive stuff. Once you've automated that, then you can go, "Okay, let's make our application perform better. How does it perform right now, where are the bottlenecks, where are the issues? We need to sort out our security stance; how are we going to test the security of the code?" And that's where good automated testers really demonstrate and prove their value: by automating all the monotonous stuff and then actually working on the high-value stuff for the team, which is improving the quality of what the team is delivering.

Brian Okken: Yeah, and the high-value stuff that you just described sounds like the requirements of a system engineer, with extra requirements on top of that. A lot of testing philosophy and training assumes that there's a black box and some requirements, and that's not real: when we're integrating people into the development environment, they can talk to people, they can find out where the little corner cases are within that system that they really need to push on to make sure they don't break. One of the things I really liked is that you brought up the notion that exploratory testing, when you're working with some of these DevOps scripts and just trying them out, is testing, and it is not automated. How to move from exploratory and manual testing to automated tests is a different story. We didn't really have an outline other than that we were going to talk about DevOps and testing. Where should we go now?

Anthony Shaw: Should we talk about Pytest for a minute? Favorite topic of yours?

Brian Okken: Oh yeah, you've been involved with Pytest lately.

Anthony Shaw: Yeah, I finally got my first contribution merged, so I am quite pleased with that.

Brian Okken: Yes, tell me about that.

Anthony Shaw: I'm working on another Pluralsight course, for Python 3.7, and one of the new features in Python 3.7 is that there's a new built-in breakpoint function. Currently in Python, if you want to insert a breakpoint, you import a debugger like pdb, the built-in one, and you call pdb.set_trace(). So a pretty familiar pattern to people is to just type import pdb; pdb.set_trace() whenever you want to stop the code and try to figure out what's going on, if you can't reproduce something successfully in a test. Now, part of the motivation in 3.7 is that, first of all, that's really, really unintuitive, especially if you compare it with other environments.
Like if you're learning JavaScript, for example, the way you insert a breakpoint in Chrome, or one of the other browsers, which is normally how you'd execute things, is you just click on the left-hand side of the script and it will insert a breakpoint. So for Python, I think it was a bit of a downside that it's harder to put breakpoints into code. So what was proposed for 3.7 is a new builtin function: you just go breakpoint. If you do nothing other than type breakpoint, open and close parentheses, it will call pdb, and it will do the same thing as pdb.set_trace(). And in 3.7 you can now actually say, with an environment variable, what you want the hook to be. So if you use another debugger like pudb, which is just a nicer version of pdb, you can set that as your default debugger with the environment variable, and then if you've left breakpoints in your code, it will use that instead; you don't have to import the right one. So I was explaining this in my course module, and I thought, well, I have not actually written any code using the breakpoint function, and I kind of want to see how this works, so I thought, okay, let's see how it works in Pytest. And then I just found an issue in the Pytest project saying, oh yeah, we need to support the breakpoint builtin, and I just put my hand up and said, "Please, please, please, can I do this?" And they said yes. So I built the functionality, and Pytest will now support the builtin breakpoint. Pytest has its own version of pdb, because pdb doesn't really work very nicely inside Pytest. You'll never actually notice unless you've read the Pytest source code, because it kind of happens in the background. But when you run pdb, it asks you for commands so you can explore things, and Pytest actually overrides the standard input and the terminal, for various reasons. So they've basically got their own custom pdb which gets wrapped in whenever you call Pytest. So the contribution I made is basically to make sure that if you call breakpoint, it works nicely inside Pytest, and also that if you run --pdb with Pytest, it works the way it should.

Brian Okken: Just to remind people what --pdb is: it means if you've got a test failure, it pops you into the debugger. And what you're saying is, if somebody has the breakpoint environment variable set to a different debugger, like pudb, then Pytest will correctly open the right one on a breakpoint?

Anthony Shaw: Yes. So yeah, it was fun creating a contribution to Pytest. I learned a lot about some of the internals. I'd say 90% of the contribution was actually tests, with no really significant changes. It ended up being quite complicated figuring out how to test Pytest from Pytest, if that makes any sense.

Brian Okken: That's one of the first things that I learned about Pytest: how to test Pytest with Pytest. It's pretty interesting that they do that, and there's this ability to, within a test, run another Pytest session. Most of the time was spent in testing, really?

Anthony Shaw: Yeah, and trying to figure out how to simulate different scenarios that might occur: first of all, people not having Python 3.7 in the first place, or, I'm assuming, PyPy 3 implementing this feature in the future. And yeah, I was really kind of paranoid about contributing to something that's used this frequently. I can't imagine people would be very happy to see Pytest not behave the way they expected. So yeah, I kind of double and triple checked everything before I made changes.
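(For reference, the pattern being described looks like this; the environment variable involved is PYTHONBREAKPOINT, and the snippet is a sketch of the Python 3.7 behavior discussed above.)

```python
# the pre-3.7 idiom: import a debugger and call its trace function
import pdb; pdb.set_trace()

# the Python 3.7 builtin: no import needed, calls pdb.set_trace() by default
breakpoint()

# the hook is configurable through an environment variable, e.g.
#   PYTHONBREAKPOINT=pudb.set_trace python myscript.py
# and PYTHONBREAKPOINT=0 disables breakpoint() calls entirely
```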
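(And for the curious, the "testing Pytest from Pytest" mechanism Brian mentions is exposed through Pytest's own pytester plugin, whose testdir fixture runs a complete inner Pytest session from within a test. A minimal sketch:)

```python
# enable pytest's plugin-testing plugin (e.g. in conftest.py)
pytest_plugins = "pytester"


def test_inner_pytest_run(testdir):
    # write a throwaway test file into a temporary directory
    testdir.makepyfile(
        """
        def test_passes():
            assert 1 + 1 == 2
        """
    )
    # run a full pytest session against it, from inside this test
    result = testdir.runpytest()
    result.assert_outcomes(passed=1)
```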
Brian Okken: I want to try to encourage people to contribute to Pytest too, because even though Pytest is used by so many people, the team working on it is really not that big compared to other software projects that are used that widely. More help is always needed. Was it a good experience? Would you do it again?

Anthony Shaw: Yes, it was a great experience. I mean, the team was super helpful. They do, I don't know the technical term, what's informally known as helper commits, where someone called Bruno, who is one of the core developers on Pytest, actually committed stuff to my pull request to show me how it was supposed to be done. In particular, I had a few questions around how you should write a test to test Pytest itself in different scenarios, and he committed to my pull request to actually demonstrate the fix, because it was easier than trying to explain it in comments. But yeah, the thread is quite long, it has a lot of discussion; the first thing I did was outline what I thought the behavior should be, before I wrote any code, and then try to get some agreement: okay, this is how you would expect it to behave. Once everyone agreed, I started putting together the code and then tested a couple of the different scenarios. But yeah, it was super easy to get a pull request committed, even for what is probably a little bit of a complicated change; it's only a small change, but it had a few consequences. And this comes off the back of contributing to the Tox project as well. A couple of months ago I committed something a bit more obscure, which was the ability to basically write your own plugin to change the behavior of how it shows you what's in the environment, which is the report function. It was hardcoded to do pip freeze, I think, or pip show, and I just made that available as a plugin.

Brian Okken: We've talked about quite a few things. One of the things I want to bring up also is that you write a lot; you do quite a few Python articles on Medium. I don't know if you write in other places; do you only write on Medium?

Anthony Shaw: Yeah, it's only on Medium at the moment, and I've published some of them to article sites like Hacker Noon, which has quite a lot of developer readers. It's just purely for fun.

Brian Okken: This isn't part of your job role or anything?

Anthony Shaw: No, I just like it. This is something I just do when I'm bored or curious, or both. I just kind of think, "Oh, I wonder how this works," and then basically try to pick it apart and explain it to people.

Brian Okken: Okay, I've got nothing else, man. Just thanks for coming on the show.

Anthony Shaw: Yeah, it's been a great discussion. Thanks, Brian.