Unit and Integration Testing Overview
The two important types of testing
If you are serious about the quality of the systems you deliver to your customer, today’s topic is important to you.
But before we dive in, watch this video on the easiest way to ensure quality (the link is in the “Learn more” section of this web page). There are some concepts in the video that we will be reusing here.
Alright, so there are a few types of testing that let you probe different dimensions of system quality. The dimension we will look into in today's video is the scope within the system that is being tested. See, a developer can test just a single line of code, and that, in some sense, is what any developer does inadvertently when debugging. On the opposite end of the spectrum, you can test the behavior of the system as a whole. Testing a small chunk of your code is usually called unit testing, where a unit (the thing being tested) is typically a function or a method, depending on the programming paradigm. Testing the system as a whole is called an integration test.
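To make the "unit" idea concrete, here is a minimal sketch: the unit under test is a single hypothetical function, `apply_discount`, checked in complete isolation from the rest of a system.

```python
# A minimal unit test sketch. `apply_discount` is a hypothetical
# function invented for illustration; the point is that the test
# exercises ONE unit with no other system layers involved.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Each assertion checks one small, well-defined behavior.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(50.0, 0) == 50.0

test_apply_discount()
print("unit test passed")
```

An integration test, by contrast, would drive a whole scenario through many such functions and layers at once.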
How about just unit tests?
So, why would you need the two types of testing? Can't we just pick one, do it right, and not bother with the other? Well, the answer is: we can't; we do need both. The reason is the inherent complexity of software systems. Testing everything with unit tests is not possible because, while your functions may each operate fine, they may not behave correctly when they execute a system scenario together. See, function "A" is correct and function "B" is correct too, but when one calls the other, it may pass incorrect parameters, and the combined result is still wrong. This happens incredibly often. So, we do need integration tests to catch this type of problem.
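Here is a sketch of exactly that A/B mismatch, with hypothetical names: each function would pass its own unit tests, yet the scenario as a whole is wrong because the caller passes euros where the callee expects cents.

```python
# Hypothetical illustration of the A-calls-B-with-wrong-parameters bug.
# Each function is "correct" in isolation; the integration is not.

def total_in_cents(items):
    """Function B: sums prices given in CENTS -- fine for cents input."""
    return sum(items)

def checkout(prices_in_euros):
    """Function A: forgets to convert euros to cents before calling B."""
    return total_in_cents(prices_in_euros)  # bug: should pass p * 100

# An integration-style check of the whole scenario exposes the mismatch:
result = checkout([1.0, 2.0])   # a correct system would report 300 cents
assert result != 300            # we got 3.0 instead
print("integration check exposed the mismatch:", result)
```

No unit test of either function alone would have caught this, which is the point of the paragraph above.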
...Or only integration tests?
But can we maybe just have integration tests then, and no unit tests? Also no. A small program of just a few lines of code may easily incorporate dozens of different scenarios. But when you have thousands of lines of code, the number of scenarios goes through the roof; this is known as a combinatorial explosion. To give you the intuition: with only ten binary forks in the road, you end up with over a thousand possible paths. You get the idea. So, integration tests ARE critical for ironing out the inconsistencies in how different parts of your system interact with one another, but integration tests always give you very limited coverage. So, unit tests to the rescue: they extend the coverage by reducing the scope of testing.
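The "forks in the road" arithmetic is easy to verify: with n independent binary branch points, the number of distinct execution paths is 2 to the power n.

```python
# The combinatorial-explosion intuition: n independent binary forks
# yield 2**n distinct paths through the code.
for n in (1, 5, 10, 20):
    print(n, "forks ->", 2 ** n, "paths")
# Ten forks already give 1024 paths -- "over a thousand" --
# and twenty give more than a million.
```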
How to create good integration tests?
So, what should you cover with your integration tests? Normally, these are the high-stakes scenarios. It makes a lot of sense to cover a couple of the primary ones, and that usually includes some success and some failure scenarios, because the system must respond correctly to both. You want to automate these scenarios so you can run them at any point in time to make sure the system functions properly. From a technical standpoint, integration tests should be approached very carefully, because testing all layers of the system together involves some complexity.
So, for example, if an automated test script attaches to the UI, the big question is: how exactly does it do that? And will a slight change in the user interface, stylesheets, or something else minor break all the tests? It easily may. To mitigate this, some teams follow very strict rules on how they develop and maintain their UI, all the way down to the naming conventions of the UI objects. And some teams, to make these tests more robust, bypass the UI and test the business logic directly instead, which is entirely possible as long as you have a thin enough UI with a good, clear separation from the business logic. Some architectures support testability better than others; either way, today's systems must be designed for testability, and if yours isn't, begin gradually refactoring it towards a better state.
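Here is a sketch of the "bypass the UI" idea, assuming a hypothetical app whose thin UI layer sits over a separate business-logic function. The test drives the logic directly, so no button id, stylesheet, or widget name can break it.

```python
# Hypothetical business-logic function, kept free of any UI concerns
# so an integration-style test can call it directly.

def register_user(username: str, users: set) -> str:
    """Register a user; return 'ok' or an error message."""
    if not username:
        return "error: empty name"
    if username in users:
        return "error: already taken"
    users.add(username)
    return "ok"

# The test exercises both a success and a failure scenario,
# with zero dependence on how the UI renders them.
users = set()
assert register_user("alice", users) == "ok"              # success path
assert register_user("alice", users).startswith("error")  # failure path
print("business-logic scenarios passed")
```

The design choice that makes this possible is the clear separation mentioned above: the UI only collects input and displays the returned message.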
Another common problem comes from the fact that integration tests have to operate on data, and the data has to come from the actual data sources; otherwise it's not a proper integration test. But to be able to re-run a test in the future, your data has to be in the exact form and shape that the test scenario expects. So, you need to think about an automated setup and teardown process for the data. And oftentimes this has to happen separately for each test scenario, or else one test will corrupt the data for another. The next thing you know, nobody trusts those tests anymore, and the whole practice quickly deteriorates. We don't want that.
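A sketch of per-scenario setup and teardown, where an in-memory sqlite database stands in for a real data source (the schema and scenarios are hypothetical). Each test prepares its own fresh data and cleans up after itself, so no test can poison another.

```python
# Per-test setup/teardown sketch. An in-memory sqlite database plays
# the role of the real data source; the schema is invented for the example.
import sqlite3

def fresh_db():
    """Setup: every scenario gets its own pristine data set."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER, total REAL)")
    db.execute("INSERT INTO orders VALUES (1, 9.99)")
    return db

def test_order_exists():
    db = fresh_db()
    try:
        row = db.execute("SELECT total FROM orders WHERE id = 1").fetchone()
        assert row[0] == 9.99
    finally:
        db.close()   # teardown: leave no state behind

def test_can_delete_order():
    db = fresh_db()
    try:
        db.execute("DELETE FROM orders")
        # This deletion cannot break test_order_exists,
        # because that test sets up its own fresh data.
        assert db.execute("SELECT * FROM orders").fetchone() is None
    finally:
        db.close()

test_order_exists()
test_can_delete_order()
print("both scenarios passed on independent data")
```

Test frameworks formalize this pattern (for example, pytest fixtures or unittest's setUp/tearDown), but the principle is the same: setup and teardown run once per scenario.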
A good piece of advice on creating automated scenarios for integration testing is this: create a few, and make sure you can sustain them over time. Do not rush into creating too many tests at once. Afterwards, have those tests prove their worth by making them part of your continuous integration process, running them routinely and, most importantly, seeing them catch problems. If your tests don't catch defects, it's too early to pat yourself on the back and conclude that you are writing excellent code. More likely than not, your tests are simply not catching the real problems. Ask yourself: have you ever seen your tests fail? Can they fail at all? Or maybe your tests themselves contain an error that prevents them from failing, so they constantly report a false success.
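Here is a hypothetical example of a test that can never fail: the implementation is deliberately wrong, yet the test's tautological assertion always passes and reports a false success.

```python
# A self-deceiving test sketch: the assertion is a tautology,
# so this test cannot fail no matter how broken the code is.

def is_even(n: int) -> bool:
    return n % 2 == 1   # deliberately WRONG implementation

def broken_test():
    result = is_even(4)
    assert result or not result   # always True -- proves nothing
    # A real assertion, `assert result is True`, would fail here
    # and expose the bug immediately.

broken_test()
print("the broken test 'passed' despite the bug")
```

If you have never seen a test fail, it is worth checking whether it even can.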
How to create good unit tests?
Now, in contrast, unit tests don't have most of these problems, because they don't test multiple layers of the system at the same time. But they are exposed to other problems, which mostly have to do with the system design. One of them is connected to refactoring. You certainly have to refactor your code every once in a while to improve its structure, which, let's be honest, constantly deteriorates under the pressure of development priorities. But almost any more or less significant refactoring breaks some unit tests. And not because your code is necessarily wrong, but because it's been restructured. This can be truly heartbreaking to teams: we've invested so much in those unit tests, and now a bunch of them are not working? It may create an overall reluctance to refactor in the future, which is not good. Or it may shake the team's faith in unit testing, which isn't good either. So, a very measured approach is needed. It may be unwise to go too deep into the call stack with your unit tests. Instead, draw a line beyond which you don't go: don't attach unit tests directly to very fine-grained functions and methods; test only what calls them.
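A sketch of that "draw the line" idea, with hypothetical names: the fine-grained helpers are free to change during a refactoring, because the unit test attaches only to the coarser, stable entry point.

```python
# "Drawing the line" sketch. The helpers below are implementation
# details and are NOT tested directly; only the public function is.

def _strip(s):            # fine-grained helper -- no test attached
    return s.strip()

def _lower(s):            # fine-grained helper -- no test attached
    return s.lower()

def normalize_name(s: str) -> str:
    """Coarse-grained unit: the contract we promise and test."""
    return _lower(_strip(s))

# This test survives a refactoring that merges, renames, or deletes
# the helpers, as long as normalize_name keeps its contract.
assert normalize_name("  Alice ") == "alice"
print("coarse-grained unit test passed")
```

Testing at this level keeps the test suite aligned with behavior rather than with the current internal structure.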
Besides this, you may run into another common issue with unit tests: the system may not have been developed with testing in mind, and it's very hard to even separate out a unit to test. Do not be discouraged. You have to gradually move towards a more loosely coupled set of classes and functions, so start somewhere. The best place to start is the area of code you are already working on in this iteration; that way you don't have to context-switch.
Lastly, create unit and integration tests in the same iteration in which you develop the functionality. Do not postpone it. Postponing the creation of tests never works well: you lose the context, you lose your grip, you stop moving at a sustainable pace, and you simply accumulate test debt, which is extremely costly to pay off.
So, you have probably already noticed that a really effective approach to testing takes a lot of balancing and smart trade-offs. But when done properly, this combination of unit and integration tests can substantially increase not just the quality of your deliverables but also the speed of development and delivery, because you will eliminate a lot of rework.
Alright, time to talk some action! What is the situation with unit and integration testing in your case? Plan one simple but specific action item to get you started on improving the way you test.