Unit Test Automation
We write unit tests to test small units of code so that we can get quick feedback on the correctness of that code. When we write unit tests, we're forced to ensure that these units are decoupled from other units. After a while, we have an extensive test suite that we can run easily and regularly. This automation further enhances our ability to write quality code and deliver faster.
Let's dive into what it means to use unit tests in your software development.
What is Automated Unit Testing?
Unit testing is, by definition, automated testing. You let a tool run a piece of code and verify the results. You've written the test so that it sets up the prerequisites (like data or any mocked calls to external dependencies), runs the code you want to test, and verifies the results.
You could easily write a small console app that does this. And a long, long time ago, quite a few developers did so. These days, we have unit test automation tools to make our lives easier. These testing frameworks exist for all popular programming languages (for example NUnit for .NET or JUnit for Java). They contain everything necessary to write tests that an accompanying CLI application will recognize and run.
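To make that concrete, here's a minimal sketch of what such a test might look like. It's written in TypeScript against Node's built-in node:test runner and assert module purely for illustration; Mocha, JUnit, NUnit, and similar frameworks follow the same shape. The add function is a made-up unit under test.

```typescript
import { describe, it } from "node:test";
import assert from "node:assert/strict";

// A made-up unit under test.
function add(a: number, b: number): number {
  return a + b;
}

describe("add", () => {
  it("sums two numbers", () => {
    // Arrange: set up the inputs (and any mocked dependencies, if needed).
    const a = 2;
    const b = 3;

    // Act: run the code under test.
    const result = add(a, b);

    // Assert: verify the result.
    assert.equal(result, 5);
  });
});
```

Running node --test (or your framework's equivalent runner) discovers these tests and reports which ones pass and which ones fail.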
We can then plug this into our CI/CD pipeline so that our unit tests run after we push our changes to source control. When we work locally, we can quickly verify that we haven't broken anything. After we push our changes, the build server double-checks. This way, programmers avoid blocking each other by accidentally breaking something.
A popular methodology for unit testing is test-driven development (TDD). In TDD, we write our tests first and ensure they fail. In a UI, this failing test is indicated with a red color. Then we implement the feature so that the test passes (and it turns green in the UI). Finally, we look at our code and see if we can improve it (refactor). This is called the red-green-refactor cycle. The last step, refactor, is important because as we add more tests, our code can become more complex because we're handling more and more edge cases. After a while, there are probably opportunities to improve the code.
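As an illustration of one red-green-refactor cycle, here's a hypothetical slugify helper, again sketched in TypeScript with Node's built-in test runner. The name and the behavior are invented for the example.

```typescript
import { describe, it } from "node:test";
import assert from "node:assert/strict";

// Red: write this test before slugify() exists and watch it fail.
describe("slugify", () => {
  it("lowercases text and replaces spaces with dashes", () => {
    assert.equal(slugify("Hello World"), "hello-world");
  });
});

// Green: add just enough code to make the test pass (the state shown here).
function slugify(input: string): string {
  return input.toLowerCase().split(" ").join("-");
}

// Refactor: with the test still green, clean up the implementation
// (trim the input, collapse repeated spaces, and so on), writing a new
// failing test for each edge case before handling it.
```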
Benefits of Unit Test Automation
As mentioned, a long time ago, unit tests weren't always fully automated. Even when using dedicated unit testing tools, developers still had to trigger the test execution manually. Today, we can benefit from full unit test automation, and we can do so using the TDD methodology.
Let's say you need to implement a price calculation routine. To do this, you first write one or more unit tests that provide the necessary inputs to the calculation function and verify the result. You can run these tests immediately with a simple command or the click of a button. Of course, they fail at first, but as you implement the calculation, they start to pass. Now you add more tests, for edge cases for example. These also fail, so you change the code to make them pass, all while still running the previous tests to ensure you don't break the simpler cases.
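Here's a sketch of what those tests could look like, assuming a hypothetical calculatePrice function with an invented discount rule (10% off from ten items onward). The names and the rule are illustrative only.

```typescript
import { describe, it } from "node:test";
import assert from "node:assert/strict";

interface Order {
  unitPrice: number;
  quantity: number;
}

// Hypothetical implementation: 10% discount on orders of ten items or more.
function calculatePrice(order: Order): number {
  const total = order.unitPrice * order.quantity;
  return order.quantity >= 10 ? total * 0.9 : total;
}

describe("calculatePrice", () => {
  it("multiplies unit price by quantity", () => {
    assert.equal(calculatePrice({ unitPrice: 5, quantity: 2 }), 10);
  });

  it("applies a 10% discount from ten items onward", () => {
    assert.equal(calculatePrice({ unitPrice: 5, quantity: 10 }), 45);
  });

  it("returns zero for an empty order", () => {
    assert.equal(calculatePrice({ unitPrice: 5, quantity: 0 }), 0);
  });
});
```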
This process would be very time consuming if you had implemented the calculation without automated tests. To test your implementation, you would have to start the application, run through the necessary steps to get to the price calculation, and then manually verify that the result is correct. Repeat this for every test case you've written, and you can see how much time you would be wasting.
But automated unit testing doesn't just save time; it also avoids human error. For example, we could easily overlook something, or we might forget to re-test a specific scenario after making a change.
And if at a certain moment you're not happy with the quality of the code, you now have a safety net for refactoring. You can make changes to the implementation and easily run all those tests again to verify you didn't break anything.
Because of these benefits, automated unit testing allows organizations to scale their software development. Without it, it would be very difficult to implement and maintain large and complex applications and to release them on a fast and regular cadence.
Preparing for Unit Testing
Before you write a unit test, it must be clear what you'll be testing exactly. If you're testing an existing piece of code, it should be fairly clear. But if you want to write a test for code that still has to be written, you might need to think it through a bit more:
- What will be the inputs?
- What will be the output?
- Are there any dependencies, and if so, how will we abstract them away?
- What is the happy path, and what are the edge cases?
- What test data do you need, if any?
Also make sure you can isolate the code you intend to test. If not, the test case might not be a good candidate for unit testing, and you may be better served by integration tests, functional tests, or another type of testing (like load testing, for example). Writing tests requires some thought before you dive right in, although after some practice this will come naturally.
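For instance, one common way to make code easy to isolate is to put a dependency behind a small interface and hand the test a stub. The sketch below assumes a hypothetical InvoiceCalculator that would normally fetch tax rates from an external service; all names and numbers are made up.

```typescript
import { describe, it } from "node:test";
import assert from "node:assert/strict";

// The dependency we want to keep out of the unit test, hidden behind an interface.
interface TaxRateProvider {
  rateFor(country: string): number;
}

class InvoiceCalculator {
  constructor(private readonly taxRates: TaxRateProvider) {}

  totalWithTax(netAmount: number, country: string): number {
    return netAmount * (1 + this.taxRates.rateFor(country));
  }
}

describe("InvoiceCalculator", () => {
  it("adds the country's tax rate to the net amount", () => {
    // Stub the dependency with a fixed rate instead of calling a real service.
    const fixedRate: TaxRateProvider = { rateFor: () => 0.25 };
    const calculator = new InvoiceCalculator(fixedRate);

    assert.equal(calculator.totalWithTax(100, "XX"), 125);
  });
});
```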
How to Automate Unit Testing
Now that you know what to test, you can start writing the tests. But first, you need to install the necessary tools.
Install Necessary Tools
This may be a no-brainer for many, but let's cover it for completeness. Depending on your programming language, you may have to install a specific unit testing framework. For a Node.js application, for example, you may need to install the Mocha npm package. If you're using xUnit.net (for .NET), you may want to install the runners for Visual Studio (if that's the IDE you're using).
You'll probably also want a script to run all your unit tests. Java and .NET projects usually ship with CLI tools for this: mvn test for a Java/Maven application, or dotnet test for .NET. For a Node.js project, you can add the correct command to the scripts section of your package.json file. It all depends on the programming language and tools you're using.
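For example, a Node.js project using Mocha might wire the test command into the scripts section like this (a minimal sketch; your exact command will differ):

```json
{
  "scripts": {
    "test": "mocha"
  }
}
```

After that, npm test runs the whole suite, both locally and on the build server.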
As a last step, you may also have to think about installing certain components on your build server, as we'll see below. But for now, you can start writing tests.
Decide on a Test First or Code First Approach
There are two testing strategies here: write the test first, or write the code first.
Writing the test first is "true" test-driven development. But experienced developers can still write quality, loosely coupled code and write the tests afterward. Sometimes you may also need to do some exploratory coding before you really know what a feasible implementation looks like. In those cases, it doesn't make sense to write your tests first.
Choose London TDD or Chicago TDD
Another choice you may have to make is between London-style TDD and Chicago-style TDD.
The London style takes an outside-in approach. This typically means that unit tests are written for the outer boundary of the application (for example, the API), while the underlying layers are mocked or stubbed. The implementation then grows as developers work their way down toward the core of the application. The advantage is that it makes us think about the public API and the behavior of our application. A disadvantage is that it ties our unit tests more closely to the specific implementation, so changing the implementation will more easily break our tests.
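Here's a rough sketch of what a London-style test could look like, using the mocking support in Node's built-in test runner. The OrderService and OrderRepository names are hypothetical; the point is that the layer beneath the boundary is mocked and the assertion is about the interaction.

```typescript
import { describe, it, mock } from "node:test";
import assert from "node:assert/strict";

interface Order {
  id: string;
  total: number;
}

interface OrderRepository {
  save(order: Order): void;
}

// The outer boundary we drive the design from.
class OrderService {
  constructor(private readonly repository: OrderRepository) {}

  placeOrder(id: string, total: number): void {
    this.repository.save({ id, total });
  }
}

describe("OrderService (London style)", () => {
  it("saves the placed order through the repository", () => {
    // The underlying layer is mocked; we never touch a real database.
    const save = mock.fn((_order: Order) => {});
    const service = new OrderService({ save });

    service.placeOrder("A-1", 100);

    // The assertion is about the interaction with the collaborator.
    assert.equal(save.mock.callCount(), 1);
    assert.deepEqual(save.mock.calls[0].arguments[0], { id: "A-1", total: 100 });
  });
});
```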
The Chicago style (sometimes called the Detroit style) works inside-out. Typically, this means starting with the domain model and working your way up toward the API or UI. The advantage is that this reduces the number of mock objects or stubs: higher-level tests can simply use the lower-level code without mocking or stubbing it, because the tests you've already written cover it. This does make them more like integration tests, of course. The Chicago style allows for easier refactoring because tests aren't tied to the implementation. But a disadvantage is that you might end up with unused code (which only becomes apparent when you start implementing the higher levels). Also, a breaking change in a lower level might ripple up through several layers of tests.
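And a rough Chicago-style counterpart: start from a domain object and assert on its state, with no test doubles involved. The ShoppingCart class is again invented for illustration.

```typescript
import { describe, it } from "node:test";
import assert from "node:assert/strict";

// A small piece of the domain model, tested directly.
class ShoppingCart {
  private readonly items: Array<{ price: number; quantity: number }> = [];

  add(price: number, quantity: number): void {
    this.items.push({ price, quantity });
  }

  get total(): number {
    return this.items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  }
}

describe("ShoppingCart (Chicago style)", () => {
  it("totals the price of all items", () => {
    const cart = new ShoppingCart();
    cart.add(10, 2);
    cart.add(5, 1);

    // State-based assertion on the real object; no mocks or stubs needed.
    assert.equal(cart.total, 25);
  });
});
```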
Run Locally
Now you should be able to run the test code you wrote by running the script or command we mentioned earlier. Some IDEs integrate with these scripts and give you a UI to analyze the test results.
Running the tests on your own machine is an important step. It gives you quick feedback about any changes you made, and it stops you from pushing bugs to the source repository. Those bugs could otherwise be pulled in by other members of the team and slow down their development.
If everything does run well, you can push the changes, and the build server should be triggered.
Run on the Build Server
Just having unit tests and running them on developer machines isn't enough. You need to ensure that they run on a build server as well. This is a crucial part of the continuous integration process: it ensures that the tests don't pass simply because of some local configuration or piece of code that isn't included in source control. The build server acts as a sort of impartial referee, if you will.
At the very least, your build server should run all unit tests when you push changes. If the build fails because of failing unit tests, you should investigate and fix the issue. Many teams also run other tests on the build server, like integration tests, end-to-end tests, and security checks.
Some DevOps teams also run these tests on branches and block team members from merging into the main branch if any tests fail or if test coverage is too low.
After this, the team can deploy the new code to an environment where testers can take a look at the finished product.
What Happens After Unit Testing?
After the unit tests have passed, both developers and testers can test the application manually. This is an important step to check the bigger picture. Does the application work when all bits come together? Does it have a nice user experience? Developers can perform these tests before they push their code further down the development pipeline. Testers often have more experience in finding edge cases that real users might encounter.
These test results can then indicate whether or not the software can be shipped. If any improvements must be made, or if any regression bugs were encountered, the team will need to address these issues. The team may decide to fix them in a later release, though. This depends on the context the team is working in.
This manual testing is a lengthy process, of course. That's why many teams only test the newly finished features, plus some basic features that make up the core of the product (i.e., they perform smoke tests). Even better is to automate these functional end-to-end tests. That way, the team can run them before every release in a fraction of the time it would take to perform them manually.
Unit Test Automation - In Review
Unit tests are a basic building block of agile software development. They are a tool every developer must have in their tool belt. Unit tests provide quick feedback about the code we're working on, force us to write clean code, and provide a safety net for refactoring.
There are many tools and techniques for unit testing. We covered code-first versus test-first and the London versus Chicago style of writing unit tests. We also saw that we should automate our unit tests throughout the whole development pipeline.
But we also saw that unit testing isn't enough. We still need to do functional tests. Even though these are often done manually, it pays to automate them as well. A tool like Waldo helps automate mobile app UI tests. Developers, testers, and managers can easily create and modify tests with Waldo. If you're interested in learning more, don't hesitate to book a demo.
This post was written by Peter Morlion. Peter is a passionate programmer who helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and SOLID principles.