One of the most contentious and difficult topics in software development has to be the automated testing of code. It could be unit testing, integration testing, or end-to-end testing – there are many different types. Between near-dogmatic commitment to practices such as TDD and endless disagreement about what is and is not a unit test versus an integration test, testing code is a difficult topic. Just about everyone talking about testing has valid arguments and experiences, but with the complexity that appears to be involved (often due to all the disagreements on the “right” way to do it), there’s often a perception that integrating something like unit testing into our development process will add *tons* of time and slow us *way* down.
Over the years, I’ve had the privilege of working on a lot of different projects, and I’ve heard the perspectives of many people much smarter than me about testing code. Here are some of my thoughts on this topic.
You should be writing tests for your code
Automated tests improve code quality and reduce bugs. There are many studies and articles around this. They provide a safety net that allows you to make changes faster and with more confidence. They also help you catch things before your end users do. Just the other day, a mentee of mine wrote a unit test and couldn’t understand why it was failing. It turned out the test was failing because there was a bug in the code he had just written, one that both he and I had missed in a manual review. Covering that code with an automated test saved us from publishing a bug to production that would have cost us a lot more time to hunt down later. The light bulb that went off in that developer’s head when he realized the test had caught the bug (he is still learning how to test effectively) was probably the best part of my day.
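To make this concrete, here’s a minimal, hypothetical sketch (not the actual code from that review) of the kind of subtle bug a quick unit test can surface before it ships – here, a discount calculation:

```python
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)


def test_apply_discount():
    # A failing assertion here is exactly the kind of early warning
    # that saves a production bug hunt later.
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99
```

A test like this takes a minute to write, and it executes the new code in a way a manual code review never quite does.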
You don’t need 100% test coverage
While you should be writing tests for your code, you don’t need 100% test coverage. This has been said time and time again, but it bears repeating. You should test the business-critical parts of your application – write lots of tests for those things, from different angles. Don’t write very many tests (or even any) for the less business-critical, more obvious code that you work with. One person I talked to put it this way – have more than 100% coverage on just a few important parts of your application.
One good example of this is an application we just finished and shipped to a customer. This application dealt with data management, and overall it was very simple. We didn’t write all that many tests for it. However, the tests we did write were concentrated on the most business-critical aspect of the application – certain people could see only certain records, plus the records of the people under them. We wrote lots of tests for this to feel confident it was working, and not many tests for much else. You could say the data permission portion of the application had more than 100% coverage, since we exercised it from so many angles.
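A hypothetical sketch (the names and rule are invented for illustration, not taken from that application) of what “exercising a visibility rule from many angles” can look like – a user sees their own records plus those of anyone who reports up to them:

```python
def visible_records(user, records, reports_to):
    """Return records whose owner is the user or reports up to the user.

    reports_to maps each user to their direct manager (or None).
    """
    def management_chain(owner):
        # Walk from the record's owner up through their managers.
        while owner is not None:
            yield owner
            owner = reports_to.get(owner)

    return [r for r in records if user in management_chain(r["owner"])]


reports_to = {"alice": None, "bob": "alice", "carol": "bob"}
records = [{"id": 1, "owner": "bob"},
           {"id": 2, "owner": "carol"},
           {"id": 3, "owner": "alice"}]

# The same rule, tested from several angles: top of the chain,
# middle manager, and an individual contributor.
assert [r["id"] for r in visible_records("alice", records, reports_to)] == [1, 2, 3]
assert [r["id"] for r in visible_records("bob", records, reports_to)] == [1, 2]
assert [r["id"] for r in visible_records("carol", records, reports_to)] == [2]
```

Each assertion approaches the one critical rule from a different direction, which is what gives you the “more than 100% coverage” feeling on the part that matters.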
If you’re working with a large codebase (or many large codebases) with no testing in it, the initial feeling might be overwhelming. You might think, “it will take us months just to get testing integrated over what we already have! We can’t do that!” You’re probably right, and that’s not worth doing. What you should do is start implementing tests in existing code gradually, over time.
I’m a big fan of the mandatory “add at least one test per pull request” approach. With this approach, any time a developer is fixing a bug or adding a new feature, they have to add at least one unit test to the codebase. The key here is consistency – if a pull request doesn’t have at least one unit test, it gets rejected, no exceptions. This helps ensure that a fixed bug stays fixed, and that new features have testing. I’ve found that while the requirement is only one test (which doesn’t feel intimidating), people will often add 2 or 3 to get a little extra coverage around the bug or feature they were working on. It’s also a great way to teach through code review. If someone can’t figure out how to test what they’re working on, it’s a good opportunity for a group work session, or possibly even for a refactor to make the code testable.
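Here’s a hypothetical example of the kind of test that accompanies a bug fix under this rule (the scenario is invented for illustration): the original code crashed on an empty list, and the regression test pins the fix down.

```python
def average_order_size(orders):
    """Return the mean order amount, or 0.0 when there are no orders."""
    if not orders:  # the fix: guard against division by zero
        return 0.0
    return sum(orders) / len(orders)


def test_average_order_size_empty():
    # The mandatory "one test per pull request": this regression
    # can never ship unnoticed again.
    assert average_order_size([]) == 0.0


def test_average_order_size_typical():
    # Often people add one or two extra tests while they're in there.
    assert average_order_size([10, 20, 30]) == 20.0
```

The test documents the bug as much as it prevents it: anyone reading it later knows the empty-list case once broke in production.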
It will slow you down… at first
Without question, the number one thing I hear from colleagues who don’t write tests is that they don’t have time, or that it will slow down their development too much. And you know what… they’re right, in a way. It WILL slow down development, because you have to start thinking about writing code differently for it to be testable, and you have to take the time to learn how to write effective tests. It’s not something that happens overnight, and it does slow you down… at first.
It will make you faster… later
Even though you might go slower for a while when you start writing automated tests, that passes quickly if you stick with it. You may have a couple of projects at first that are very slow, but once you master a method of automated testing that works for your team, along with figuring out the right tools to use, things start to go a lot faster. This is when you start to see the real gains. Not only is the initial development of the software not much slower at this point, you also write higher quality code and stop bugs from hitting production. It’s almost always less time consuming to catch a bug in development than to fix it later after it’s gone live.
You’ll also be able to make changes (either adding new features or fixing bugs) faster and with more confidence, if you have test coverage on the most critical parts of the application. How often in the past have you had to make a change, and then something else broke? Good test coverage can help prevent this.
One scenario that comes to mind is when we were working on a complex data importer for an application. This importer had a variety of mapping rules, with new ones added occasionally. Each time a rule was implemented, unit tests were written to prove that rule worked. As more and more rules were added, we were able to work with this complex code with confidence, knowing that failing tests meant we’d broken something and passing tests meant we were good to go. I know it’s anecdotal, but we have never once had a single bug in this part of the application, and I believe it’s due to this test coverage.
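A hypothetical sketch of that pattern (these rules are invented stand-ins, not the actual importer’s rules): each mapping rule is a small function, and each new rule arrives with tests that prove it works, so the suite grows alongside the importer.

```python
# A lookup table for one rule; module-level so tests can see it too.
STATE_ALIASES = {"kans.": "KS", "kansas": "KS"}


def rule_trim_whitespace(value):
    """Mapping rule: strip stray whitespace from imported fields."""
    return value.strip()


def rule_normalize_state(value):
    """Mapping rule: map state names/aliases to two-letter codes."""
    v = value.strip()
    return STATE_ALIASES.get(v.lower(), v.upper())


def test_trim_whitespace():
    assert rule_trim_whitespace("  Topeka ") == "Topeka"


def test_normalize_state():
    # Each new rule ships with tests like these, so a later change
    # that breaks an older rule fails loudly instead of silently.
    assert rule_normalize_state("kansas") == "KS"
    assert rule_normalize_state(" ks ") == "KS"
```

Because every rule is independently provable, refactoring the importer becomes a matter of running the suite rather than re-verifying each rule by hand.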
You’ll learn to write better code
One of the reasons getting into something like unit testing can seem so intimidating at the start is that we aren’t writing code that is testable. When I was first exposed to the concept of unit testing, I looked at the pile of code I’d written at the time and thought, “how in the world do you actually write a test for this? It’s 500 lines and is full of static functions and helpers and database hits.”
The problem was that I was writing poor code at the time: tightly coupled, too bloated, and not testable. Once I started to learn how to test my code, I also learned that I needed to write better code. The two go hand in hand. If I couldn’t figure out a way to test my code, that meant I needed to refactor it.
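One common shape of that refactor, sketched as a hypothetical example (the function and its “before” form are invented for illustration): instead of reaching out to the database itself, the function takes its inputs as plain data, so a test can run it without any infrastructure.

```python
# Before (hard to test): the function fetched its own data.
#
# def monthly_total(customer_id):
#     rows = db.query("SELECT amount FROM orders WHERE ...")
#     return sum(row.amount for row in rows)

# After (testable): the dependency is pushed to the caller, and the
# logic operates on values that any test can construct directly.
def monthly_total(amounts):
    """Sum a customer's order amounts for the month."""
    return round(sum(amounts), 2)


def test_monthly_total():
    # No database, no setup: the data is injected as simple values.
    assert monthly_total([19.99, 5.00, 75.01]) == 100.0
    assert monthly_total([]) == 0
```

The production caller still talks to the database; the point is that the logic worth testing no longer does, which is exactly the kind of decoupling that makes code both testable and better.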
You’ll prove the code works
When you write code (especially for the back-end of your application), there’s often no data present, or no interface available, to actually exercise it and confirm that it works. Unit testing usually solves this problem – I’m able to write some code that executes my new feature and confirms that it actually works, even if there’s no other way to confirm this yet. That’s a tremendous confidence boost, and very important. When you start to think about it this way, you’ll want to write a lot more tests for new code. You want to be able to prove that code works at any time by executing it, and a unit test does that for you.
The physical products you use have automated testing
It has always amazed me that our attitude toward software could be so lax when it comes to automated testing. A great deal of the physical products we rely on every day go through some type of automated testing to ensure quality. Think about cars, or other equipment manufacturing. These parts typically go through automated stress tests to ensure that they will hold up under real world conditions. We wouldn’t want it any other way. Why don’t we have the same attitude about software? Customers and users of the software should have the assurance that it is automatically being tested for quality, just like with other physical products they depend on.
The right way is the right way that adds value for your organization
At the end of the day, you should look at what will add the most value for your organization. Will a few integration tests catch the majority of the bugs with your products? Great! Start adding those. Are you building software that will cost businesses millions of dollars if bugs make it into production? Perhaps heavy unit test coverage is the way to go. Are you building a utility that is pretty basic and will only be used by a few people? Maybe no tests are needed in that case. Maybe across your entire portfolio you need a combination of multiple approaches. There’s no one size fits all.
The most important thing, in my opinion, is that you do some type of automated testing and start to figure out where you see the most benefits, the most bugs caught, and the best results. The hardest part is always getting started. Once you go down this path, you’ll start to establish the tried and true patterns that work, and build up a set of tools that make testing easier for you. Over the years we’ve used many great tools that have made testing easier (and just open sourced one that we built – you can check it out here: https://github.com/johnkuefler/DotnetTestUtils), and this has helped a lot.
It’s never too late to get started with testing. Don’t wait to break the ice and add the first automated test to your code base. In most cases, even one test is better than none!