A few years ago, our CEO was at a client meeting with one of the largest media firms in the world. Talking to the QA Director, he asked: 'How do you decide that you have done enough testing?' The Director replied: 'If the rate at which my testing team finds new defects is decreasing, I know there are few bugs left in the software to find – and I call it done!'
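That rule of thumb – stop when defect discovery shows diminishing returns – can be operationalized very simply. The sketch below (function name, threshold, and defect counts are all hypothetical) flags the point where the latest test cycle finds far fewer defects than the peak cycle did:

```python
# Sketch of a "diminishing returns" stopping heuristic (hypothetical data):
# track defects found per test cycle and flag when the discovery rate
# has fallen well below its peak.

def discovery_rate_signal(defects_per_cycle, threshold=0.2):
    """Return True when the latest cycle's defect count has dropped
    below `threshold` times the peak cycle, suggesting few bugs remain."""
    peak = max(defects_per_cycle)
    return defects_per_cycle[-1] <= threshold * peak

# Hypothetical weekly defect counts from a test team.
cycles = [4, 18, 25, 14, 6, 3]
print(discovery_rate_signal(cycles))  # the discovery rate has tailed off
```

In practice a team would smooth the counts and normalize by testing effort per cycle, but the shape of the decision is the same.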
Since then, we have come a long way in measuring our software testing project metrics – from test coverage, through test efficiency, and most recently to test effectiveness.
Just by looking at the metrics dashboard, one can get a reasonable understanding of the process maturity of the organization.
"What cannot be measured cannot be managed" is something we have always been told. We know that metrics are an essential part of managing – keeping on track – our QA projects.
Traditionally, we have looked at productivity (read efficiency) as the primary set of metrics.
We may also look at the effectiveness side of the project, including the effort spent on finding high-criticality/high-impact bugs vs. cosmetic defects, the number of 'not a defect' reports filed by the QA team, etc.
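Two of those effectiveness measures – the share of valid defects that are high-impact, and the team's 'not a defect' (invalid report) rate – reduce to simple ratios. A minimal sketch, with entirely hypothetical severity labels and numbers:

```python
# Illustrative effectiveness metrics (field names and data are hypothetical):
# share of valid defects that are critical, and the 'not a defect' rate.

def effectiveness_metrics(defects):
    """`defects` is a list of dicts with 'severity' and 'valid' keys."""
    valid = [d for d in defects if d["valid"]]
    critical = [d for d in valid if d["severity"] == "critical"]
    return {
        "critical_ratio": len(critical) / len(valid),
        "invalid_rate": 1 - len(valid) / len(defects),
    }

reported = [
    {"severity": "critical", "valid": True},
    {"severity": "cosmetic", "valid": True},
    {"severity": "cosmetic", "valid": True},
    {"severity": "critical", "valid": False},  # rejected as 'not a defect'
]
print(effectiveness_metrics(reported))  # critical_ratio 1/3, invalid_rate 0.25
```

A rising invalid rate usually points at unclear requirements or test design rather than at the testers themselves, which is exactly why it belongs on an effectiveness dashboard.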
As with every process, one can think of a 'metrics life cycle' – an iterative process of:
- Metrics definition (what-why-when-where)
- Metrics collection (how)
- Metrics baselining and benchmarking (target or tolerance levels); and finally
- Metrics refinement
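The four steps above can be sketched as a tiny data structure – one slot per stage of the cycle. This is purely illustrative (the class, field names, and the sample metric are assumptions, not a standard API):

```python
# A minimal sketch of the iterative metrics life cycle described above
# (class and field names are illustrative, not a standard API).
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Metric:
    name: str                         # definition: what/why/when/where
    collect: Callable[[], float]      # collection: how the value is measured
    baseline: Optional[float] = None  # baselining/benchmarking: target level
    history: list = field(default_factory=list)

    def record(self):
        """Collection step: capture one measurement."""
        self.history.append(self.collect())

    def refine(self, new_baseline):
        """Refinement step: tighten the target as the process matures."""
        self.baseline = new_baseline

# Usage: a hypothetical 'defects per test case' metric.
m = Metric(name="defects_per_test_case", collect=lambda: 0.08, baseline=0.1)
m.record()
m.refine(0.05)
print(m.history, m.baseline)  # prints [0.08] 0.05
```

The point of the iteration is the last method: a metric whose target is never revisited stops driving improvement and becomes mere reporting.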