Test Driven Development - application to a project
Intro
What is TDD?
TDD, short for Test Driven Development, is a development approach in which we start by writing tests.
Before we write the code for a piece of functionality, we determine exactly what result we expect when a particular action is triggered, and we capture that expectation in a test beforehand.
An example of this is a date mapper. When you pass a date in one format to the mapper, you expect it to come out in a different format.
$mappedDate = DateMapper::americanDateToDefaultDate('05/27/2005');
self::assertEquals('2005-05-27', $mappedDate);
When we run this test, it will of course fail, and that is exactly what it is supposed to do: the mapper does not exist yet.
The next step is to write the minimum amount of code needed to make the test pass. We then run the test again and repeat this process until all assertions succeed and all expectations are met. Once the test passes, we look at how we can optimize or simplify the code. If the test still passes with this minimal code, we move on to the next unit test.
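For the date mapper above, the minimal code could look something like the sketch below. The class and method names follow the example; the actual implementation may of course differ:

// Minimal sketch of the mapper that makes the test above pass.
class DateMapper
{
    public static function americanDateToDefaultDate(string $americanDate): string
    {
        // Parse the American m/d/Y notation and reformat it as Y-m-d.
        return \DateTime::createFromFormat('m/d/Y', $americanDate)->format('Y-m-d');
    }
}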
This continuous process forms a cycle: write a failing test, write the minimal code to make it pass, refactor, and repeat.
As indicated above, we work with unit tests. In TDD, it is important to start from small, isolated functionalities that can be tested easily. Integration tests can of course be added at a later stage.
Why choose TDD?
Test-driven coding has several benefits, both in the short and long term.
We list the most important ones below:
- Maintainability and scalability
  - Only the minimum amount of code is written to achieve a given result.
  - As a result, the code consists of small, understandable pieces.
  - This in turn makes it easier to add extensions.
- Return on investment
  - In the long run, the development time will be significantly lower than for a project that is not developed according to TDD.
  - Because failing tests surface immediately, the chance of bugs creeping into the system is much smaller; issues are noticed early in the development process.
  - The impact of changes on existing functionality is immediately visible through failing tests. The developer therefore does not have to manually go through different test flows to check the impact of his/her changes, which in certain cases can be a very time-consuming task.
- Trust
  - If the application code is fully and automatically tested whenever additions or modifications are made, you can rest assured that the proper functioning of the application is guaranteed. As a developer, you can therefore always launch new features with confidence.
How do we implement TDD at Codana?
At Codana, we use GitLab CI/CD. In the pipeline, we define different "stages" in which, among other things, we check whether the code complies with the latest standards. This is done using tools such as PHP_CodeSniffer, Psalm, GrumPHP, ...
It is also in this pipeline that we automatically run our tests for each merge request. Merging is only possible when all tests are "green"; this way we can be sure that everything keeps working properly, regardless of new features or changes.
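To give an idea, a strongly simplified version of such a pipeline configuration could look like the sketch below; the stage names, PHP image and exact commands are assumptions and will differ per project:

# .gitlab-ci.yml (simplified, illustrative)
stages:
  - quality
  - test

code-quality:
  stage: quality
  image: php:8.2
  script:
    - vendor/bin/grumphp run      # runs the configured quality checks (codesniffer, psalm, ...)

unit-tests:
  stage: test
  image: php:8.2
  script:
    - vendor/bin/phpunit          # the merge request stays blocked until this job is green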
Concrete implementation
Calculation tools
Our client Librius needs calculation functionality for their membership platform "Libra". Specifically, based on certain inputs, they need to calculate the fees for all publishers that are members of the platform.
These calculations involve a lot of complexity, so it is important that they are well tested before being implemented in the full flow. Test driven development was therefore the logical choice.
Before you can start writing the tests, you must of course have reference results. In this case, the best way to arrive at such reference results was to "translate" the functionality into an Excel file. Using the formulas provided to us by the customer and a limited data set (based on real-life data), we prepared a transparent Excel in which each step of the calculation is listed separately. Where possible, we also added control numbers that we would later be able to verify in our tests (e.g. does the total percentage always come out at 100%?).
This Excel file was provided to the client so that they could review all steps and results and then give final approval to implement the calculations in this way.
And so the testing begins...
With the Excel, we have a good guide to build our tests. Step by step, we develop a unit test following the principle described earlier.
We isolate the functionality, determine what results we want to obtain, make sure the test fails and write the necessary code to obtain a working test.
We repeat this process until all unit tests are successful and all intermediate steps can be calculated correctly.
In these unit tests, we use exactly the same data as in the predefined Excel. This way we can check one-to-one whether the results are calculated correctly. We do this by checking both the expected sub-results and the control numbers. If both are correct and match the Excel, we are sure that the underlying code calculates everything correctly.
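As an illustration of this pattern, such a test verifies both a sub-result and a control number against the reference values. The class, method and figures below are hypothetical and do not reflect the actual Libra formulas:

use PHPUnit\Framework\TestCase;

// Hypothetical example: class, method and numbers only illustrate the pattern
// of testing sub-results and control numbers against the reference Excel.
final class FeeCalculator
{
    /** @param array<string, int> $sales Sales per publisher */
    public function calculateShares(array $sales): array
    {
        $total = array_sum($sales);

        // Sub-result: each publisher's share of the total sales.
        return array_map(fn (int $amount): float => $amount / $total, $sales);
    }
}

final class FeeCalculatorTest extends TestCase
{
    public function testSharesMatchTheReferenceValues(): void
    {
        $shares = (new FeeCalculator())->calculateShares([
            'publisher-a' => 1200,
            'publisher-b' => 800,
        ]);

        // Sub-result as listed in the reference Excel.
        self::assertEqualsWithDelta(0.6, $shares['publisher-a'], 0.0001);

        // Control number: the shares must always add up to 100%.
        self::assertEqualsWithDelta(1.0, array_sum($shares), 0.0001);
    }
}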
After all unit tests for a certain calculation have been worked out, we write an integration test that will execute all steps one after the other so that we can check whether the entire flow also produces a correct end result.
Where exactly is the advantage of working test-driven here? There are numerous advantages, but we will highlight two more:
- Certainty: since complex financial calculations are involved, it is important to be certain of the accuracy of the results. We want both to have complete confidence in our code ourselves and to be able to give our client the guarantee of the accuracy of the results. Specifically in this case, this also allows us to quickly detect and correct minor issues such as rounding errors.
- Maintainability: if in the future we are asked to make a certain adjustment to a specific step, the tests show us very quickly whether this adjustment also affects other steps and what the possible impact of that might be. That way, we can adjust our code without affecting the rest of the functionality.
Import flows
Another important functionality within the Libra tool is the title management and more specifically the title import. Titles can be imported in several ways, either manually or automatically, but no matter how they enter the system, several checks need to be run for each title. Based on these, among other things, additional attributes are added to the title.
The flow that has to be run through is quite complex and includes several checks needed to handle possible edge cases.
To get as complete an overview of this as possible, together with the customer we draw up a flow chart that will be run through for each title.
When this flow chart is (theoretically) fully developed, we take some real examples and go through the flow step by step to check whether each title would be handled correctly. Where necessary, we adjust the flow chart.
After the flow chart is in place, we start isolating certain functionalities, such as the "flaggers". These are small pieces of code that perform certain checks on the values passed in and return true or false based on them.
First, we again write small unit tests in which we validate the expected results. Then we write and adjust the functionality of the flagger until the test passes. As mentioned earlier, once the test succeeds, we also look at whether we can optimize and simplify the code.
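By way of illustration, a hypothetical flagger and its unit test could look like this; the names and the rule itself are made up and are not the actual Libra checks:

use PHPUnit\Framework\TestCase;

// Hypothetical flagger: it checks the values passed in and returns true/false.
final class MissingIsbnFlagger
{
    /** @param array<string, string> $title Attributes of the imported title */
    public function flag(array $title): bool
    {
        // Flag the title when no ISBN was provided.
        return empty($title['isbn']);
    }
}

final class MissingIsbnFlaggerTest extends TestCase
{
    public function testTitleWithoutIsbnIsFlagged(): void
    {
        $flagger = new MissingIsbnFlagger();

        self::assertTrue($flagger->flag(['name' => 'Example title', 'isbn' => '']));
        self::assertFalse($flagger->flag(['name' => 'Example title', 'isbn' => '9789000000000']));
    }
}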
Step by step, we can thus ensure 100% test coverage for this functionality as well.
Conclusion
Although test-driven development may seem like a time-consuming challenge at first, you will soon notice the short- and long-term benefits. For example, bugs will surface early in the development process, and your implementation will consist of smaller, readable pieces of code that are easy to maintain in the future.
If issues do surface later on, the cause is easy to isolate and track down through the tests.