Enterprises today are adopting new methodologies for software development, and one of the most widely used is the Agile framework. Agile software development is an umbrella term covering approaches such as SAFe, Scrum, Kanban, XP and DSDM. Yet many enterprises struggle with this methodology, and a common cause is the lowered defect detection efficiency of their software testing process.
As a result, many defects or bugs make it into production and are later reported by users. This harms both the enterprise's reputation and the product's reliability.
Project and test managers often cannot identify the right areas to focus testing on, which reduces defect detection efficiency, and they struggle to integrate testing into the early stages of development or sprints. QA leaders and project managers are therefore looking for strategies and metrics to uncover inefficiencies in their end-to-end process.
Some of the common QA and testing metrics are:
1. Defects Metrics
- Active Defects
- Defects fixed per day
- Severe Defects
- Rejected Defects
2. Requirements Metrics
- Covered Requirements
- Passed Requirements
- Reviewed Requirements
3. Test Metrics
- Authored Tests (Manual or Automated)
- Passed Tests (Manual or Automated or Exploratory)
- Failed Tests
- Unexecuted Tests
- Blocked Tests
- The velocity of Test Execution
One such metric used to evaluate a test team's effectiveness in delivering quality, bug-free software to the market is Defect Detection Efficiency.
What is Defect Detection Efficiency?
Defect Detection Efficiency, also known as Defect Detection Percentage, is the percentage of the total defects that are detected during the testing phase of the software. The formula to calculate DDE is:
DDE = (Number of Defects Detected in a Phase / Total Number of Defects) x 100 %
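As a minimal sketch, the formula can be expressed as a small Python function. The function name and the zero-division guard are my own additions, not part of any standard library:

```python
def defect_detection_efficiency(detected_in_phase: int, total_defects: int) -> float:
    """Return DDE: defects caught in a phase as a percentage of all known defects."""
    if total_defects == 0:
        # With no defects recorded at all, the ratio is undefined.
        raise ValueError("total_defects must be positive")
    return (detected_in_phase / total_defects) * 100

# Example: 30 of 40 known defects were caught during the testing phase.
print(defect_detection_efficiency(30, 40))  # 75.0
```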
Over the lifetime of any software, defects are reported at different times. Many are detected during the software development lifecycle (SDLC), while some are detected by users after release. Defects detected during the SDLC are those reported before release; they are found not only during the testing phase but also in other stages such as requirements, design and coding.
Malfunctioning or unexpected behaviour of the software reported by users after deployment is also considered when calculating Defect Detection Efficiency. The resulting percentage shows how effective the testing team and the testing process are at deploying a quality product.
But what do we mean by effectiveness in testing?
‘Effectiveness’ means the degree to which something is successful in producing desired results. In the case of software development, testing is expected to be effective in terms of:
1. Being able to protect the production environment from escaped defects.
2. Being able to produce enough data to demonstrate test efficiency.
3. Being able to deliver a quality product to the stakeholders over time.
4. Being able to reduce the organisation's spend on QA.
The amount spent to assure the quality of a product is data of high interest to the leadership team, as it helps explain the effort the team puts into the process.
The following expenses are included:
- What was the cost of defects prevention?
- What was the cost of defects detection?
- What was the cost of any failure, either before production or after production?
Uses of Defect Detection Efficiency
1. It helps in measuring the effectiveness of the process adopted for testing, as a good process will not let a bug pass to a later level without being fixed.
2. It will expose the weakest link in the process, and corrective actions can be taken to improve efficiency.
3. It can be used as one of the measures to evaluate the performance of the tester team.
Data required to calculate Defect Detection Efficiency
1. Whenever a bug is detected, the testing layer in which it was detected should be recorded. This may be system testing, user acceptance testing, or regression testing.
2. The release in which the defect made it into production should be tracked.
3. The date and time of the defects reported by the end-user after release should also be stored.
This information helps pinpoint the weak link in the testing process that is costing the company time, money and reputation.
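One hedged sketch of how such tracking data might be aggregated, assuming each defect record carries a `found_in` field naming the layer where it was detected (the field names and layer labels here are illustrative, not a prescribed schema):

```python
from collections import Counter

# Each defect record notes where it was found: a pre-release testing
# layer ("system", "uat", "regression") or "production" if reported
# by an end user after release.
defects = [
    {"id": 1, "found_in": "system"},
    {"id": 2, "found_in": "uat"},
    {"id": 3, "found_in": "regression"},
    {"id": 4, "found_in": "production"},
]

counts = Counter(d["found_in"] for d in defects)
total = sum(counts.values())
pre_release = total - counts.get("production", 0)
dde = pre_release / total * 100

print(counts)              # per-layer counts expose the weakest link
print(f"DDE: {dde:.1f}%")  # DDE: 75.0%
```

Breaking the counts down per layer is what makes the weak link visible; the overall DDE alone cannot tell you which stage is leaking defects.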
Let’s understand the calculation of DDE with the help of an example.
Suppose the number of defects reported during the testing stage is 25, and the number of defects detected outside testing is 15. Then the DDE of the testing stage is (25/(25+15)) x 100 = 62.5%.
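The arithmetic in this example can be confirmed directly with a couple of lines of Python:

```python
detected_in_testing = 25
detected_elsewhere = 15  # e.g. reported by users after release

dde = detected_in_testing / (detected_in_testing + detected_elsewhere) * 100
print(f"DDE: {dde}%")  # DDE: 62.5%
```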
What does DDE value imply?
Some possible conclusions can be drawn from the Defect Detection Efficiency value:
1. DDE more than 90%
If the value is as high as 90%, it may be that:
- The testing team is performing very well.
- The software has not yet been used extensively by users, so they have not encountered the undetected defects.
2. DDE less than 65%
A lower DDE value implies that:
- There might be some ambiguity in the requirements, leading to the execution of less effective tests.
- Testing and Quality Assurance came into the picture at a very late stage of the SDLC.
- The data sets were not used to their full potential during automated regression testing.
- The tester responsible for exploratory testing was not skilled or experienced enough, so software defects went undiscovered.
- The team may not have been given enough testing time. In Agile development, the testing team is often left with very little time in a sprint to test the software effectively.
From the above discussion, we can conclude that Defect Detection Efficiency is a very valuable metric for evaluating the effectiveness of testing. But judging a team on this percentage alone is not fair: the results a team produces depend on various other factors beyond its own efforts. Do consider those factors before declaring a team inefficient.