Not everything was better in the old days. This is particularly true of software. Today’s solutions are generally of far higher quality than their predecessors, and a combination of improved methodologies and better development tools definitely plays a role.
No less crucial, though, is the fact that we have become better at testing systems and finding errors early, while there is still time to fix them. That is why testing is a top priority at leading development firms worldwide.
On the other hand, we sometimes tend to overlook the value of systematic, automated testing in the development and implementation of business solutions at company level. That is a shame, because testing is not just a necessary evil; it should be regarded as an investment that provides returns on several levels.
Why should I test?
Testing is partly about ensuring the quality, stability and capacity of a solution, as well as its integrations with other systems.
It is about finding errors in advance and checking that everything works as it should – and will continue to do so, even when a large number of users access the solution at the same time. That is relevant both before a new solution goes live and when changes are made to a solution that is already in production.
Because things can quickly become critical if, for example, an ERP system starts to churn out incorrect shipping documents, or makes it impossible to issue (or pay) invoices or to order new raw materials. In the worst case, poor software quality can push a company into the red. Testing can minimise that risk to a very significant extent.
A modern testing strategy also yields up-to-date documentation of the specific functionality of your solution, which in some industries may even be an outright requirement from authorities and/or auditors. You also gain direct access to documentation that describes your processes in detail and can be incorporated directly into the onboarding and training of new employees.
When should I test?
I often see people only really start testing at the end of a project, when the overall solution is largely complete. For several reasons, that is unwise. First and foremost, it is a very heavy workload to tackle all at once – almost an entire project in itself. At the same time, you risk overloading the project group with feedback during the most critical phase of the project – just before go-live.
You should actually start testing as early as possible in the project process. Initially, it is about defining a test strategy, then about testing the individual units – or solution elements – as they are ready.
Let us say that you build a solution that handles the total flow from the moment a customer places an order to the time when it is delivered and billed, and so on until payment is required. In that case, you can easily test the order element, the invoice module or any other sub-element separately, regardless of whether the rest of the elements are in place or not. And you should.
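As a sketch of what testing one sub-element in isolation might look like, here is a minimal Python unit test for a hypothetical invoice-calculation function. The function, its names and its VAT rule are illustrative assumptions, not part of any specific ERP system; the point is only that this module can be tested before the rest of the flow exists.

```python
import unittest

def invoice_total(lines, vat_rate=0.25):
    """Compute an invoice total from (quantity, unit_price) lines plus VAT.

    Hypothetical function standing in for the invoice module of a
    larger order-to-cash solution.
    """
    if vat_rate < 0:
        raise ValueError("VAT rate cannot be negative")
    net = sum(qty * price for qty, price in lines)
    return round(net * (1 + vat_rate), 2)

class InvoiceTotalTest(unittest.TestCase):
    def test_single_line(self):
        # 2 items at 100.00 each, 25% VAT -> 250.00
        self.assertEqual(invoice_total([(2, 100.0)]), 250.0)

    def test_empty_invoice(self):
        self.assertEqual(invoice_total([]), 0.0)

    def test_negative_vat_rejected(self):
        with self.assertRaises(ValueError):
            invoice_total([(1, 10.0)], vat_rate=-0.1)
```

Such a test file can be run with `python -m unittest` whenever the invoice module changes, regardless of whether the order or delivery elements are finished.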
By tackling one thing at a time, it also becomes easier to manage the project and reallocate resources efficiently along the way. And you are already building the library of test cases that can serve as a template for future tests and will form part of the final end-to-end tests of the entire new solution.
How should I test?
In the past, testing manually was hard, and the discipline gained a reputation as cumbersome, slow and costly. Actually, in its day, the reputation was justified. For example, I once worked on a project in a pharmaceutical company where we had to go through 17 test cases that required three days of full-time work with manual scripting and testing, every single time there was just the slightest change to a particular interface. It was hard, monotonous and costly – but absolutely necessary.
But today, automated testing in particular has made the work much easier. With automated testing, you perform a specific action in a piece of software (e.g. billing or purchasing raw materials) while the testing software 'records' every step of the action and builds a test script in the background, which can later be 'played' back and, if necessary, adjusted. This makes it far easier, for example, to conduct regression testing, in which you verify that changes to the code have not broken existing functionality. It saves an enormous amount of time, makes testing easier and raises quality, because you minimise the risk of manual errors.
If we take the example above, today it would still take a few days to prepare and conduct the very first tests of the piece of software in question, but all subsequent tests could then be completed in 2-3 hours. In other words, this is an efficiency gain of at least a factor of 10.
Similarly, automated testing makes stress testing easier by simulating a situation in which a large number of users access different parts of a solution – or the same sub-element – at once. This helps to ensure that the company's solution works appropriately both in everyday operation and under peak load, say around holidays or when the annual accounts are being prepared.
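A stress test along these lines might be sketched in Python with a thread pool simulating concurrent users. The `place_order` function is a hypothetical stand-in for one sub-element of the solution, and a real load test would use a dedicated tool; the sketch only shows the principle of checking that nothing is lost or duplicated under concurrent access.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sub-element under load: a shared order log guarded by a lock.
_lock = threading.Lock()
_orders = []

def place_order(user_id):
    """Stand-in for one user placing an order in the solution."""
    with _lock:
        order_id = len(_orders) + 1
        _orders.append((order_id, user_id))
    return order_id

def stress_test(num_users=200):
    """Simulate num_users placing orders at the same time."""
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(place_order, range(num_users)))
    # Every order received a unique id, and none were lost under load.
    assert len(set(results)) == num_users
    return len(_orders)

print(stress_test())  # prints 200
```

Removing the lock would make the assertion fail intermittently, which is exactly the kind of concurrency defect a stress test is meant to surface before real users do.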
When is testing relevant?
At Columbus, we always recommend preparing and following a testing strategy as part of any process that includes development/configuration, implementation and operation of a solution – or when implementing updates. Partly, of course, to ensure quality and increase the likelihood that all business-critical processes will work as intended. But also because it is simply good business for the customer, who, as previously mentioned, can also use the tests to strengthen documentation, training and compliance.
So, if I had to give a short answer to the question of when testing is relevant, it would be: Again and again! And thankfully, it has become easier than ever.
Would you like to find out more? We have compiled lots of good tips about automated testing in an e-book.