Consider a scenario in which the logical error density of different modules of the same nature in a project has been analyzed. One module, e.g. "A," shows a higher logical error density than the other modules. The organization's baseline is 0.1 to 0.5 defects/KLOC; that is, the expected variation in the process is between 0.1 and 0.5 defects/KLOC. In module A, the density exceeded this range. The project manager decided to trigger a causal analysis and take appropriate corrective action. Each defect was thoroughly analyzed. The analysis revealed that the problem lay in how the defect type had been assigned and had nothing to do with the logic of the code: a number of documentation errors had been misclassified as logical defects, which drastically inflated the logical error density. Because an attribute agreement analysis can be tedious, costly, and generally uncomfortable for all stakeholders (the analysis is simple compared with its execution), it is best to take a moment to really understand what should be done and why. Now, if the same reviewer who made the faulty assignments had also verified the other modules, their logical error densities may likewise be on the high side because of the same defect-mapping error.
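The density check described above can be sketched as a small script. The module names, defect counts, and KLOC figures below are hypothetical illustrations, not data from the scenario; only the 0.1-0.5 defects/KLOC baseline comes from the text.

```python
# Hypothetical defect data: module -> (logical defects found, size in KLOC).
modules = {"A": (12, 8.0), "B": (2, 6.5), "C": (3, 9.0)}

BASELINE = (0.1, 0.5)  # expected process variation, defects/KLOC

def density(defects, kloc):
    """Logical error density in defects per KLOC."""
    return defects / kloc

for name, (defects, kloc) in modules.items():
    d = density(defects, kloc)
    low, high = BASELINE
    status = "within baseline" if low <= d <= high else "OUTSIDE baseline"
    print(f"module {name}: {d:.2f} defects/KLOC -> {status}")
```

With these illustrative numbers, module A lands at 1.50 defects/KLOC, well above the upper baseline limit, which is exactly the signal that would prompt the project manager's causal analysis.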
Within the project, this cannot trigger further analysis, as there is no assignable cause (in SPC terms, no out-of-control points). So what happens? The misclassified errors enter the organization's tracking system and, from there, improvement initiatives are launched to reduce them, while the analysis of the actual defects (here, documentation errors) is obscured by the erroneous classifications. This ultimately affects the organization's process performance goals. Unlike a continuous measurement system, which can lack precision and still be accurate on average, any lack of precision in an attribute measurement system inevitably leads to accuracy problems. If the defect coder is unclear or undecided about how to code a defect, different codes are assigned to several defects of the same type, making the database imprecise. Indeed, ambiguity in an attribute measurement system is a major source of inaccuracy. People can be calibrated, even though most people would like to think otherwise. The frequently used Attribute Agreement Analysis (AAA) is a practical tool to help. Attribute agreement analysis is a method of assessing the degree of agreement or consistency between a reviewer's assessment and a standard. The items whose assessments diverge most from the standard are then identified. The most common use of AAA is the evaluation of agreement in quality controls.
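A minimal sketch of the appraiser-versus-standard comparison at the heart of AAA follows. The defect categories and the two classification lists are invented for illustration; a real study would use more items, several appraisers, and repeated trials.

```python
# Hypothetical AAA data: defect type assigned by one reviewer vs. the
# standard (expert-agreed) classification for the same ten defects.
standard = ["logic", "doc", "doc", "logic", "ui", "doc", "logic", "ui", "doc", "logic"]
reviewer = ["logic", "logic", "doc", "logic", "ui", "logic", "logic", "ui", "doc", "logic"]

def agreement(assessed, reference):
    """Fraction of items where the reviewer's verdict matches the standard."""
    matches = sum(a == r for a, r in zip(assessed, reference))
    return matches / len(reference)

rate = agreement(reviewer, standard)
# Indices where the reviewer diverges from the standard.
disagreements = [i for i, (a, r) in enumerate(zip(reviewer, standard)) if a != r]
print(f"agreement with standard: {rate:.0%}")          # prints "agreement with standard: 80%"
print(f"items diverging from standard: {disagreements}")
```

Note that in this made-up sample both disagreements are documentation defects coded as "logic", mirroring the misclassification pattern in the scenario; listing the diverging items is what lets the analysis target the elements with the greatest divergence from the standard.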
In what other situations can you use AAA to your advantage? Beyond the sample-size problem, the logistics of ensuring that appraisers do not remember the original attribute they assigned to a scenario when they see it a second time is also a challenge.
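The repeated-viewing concern above corresponds to the within-appraiser (repeatability) leg of AAA: the same reviewer classifies the same items twice, ideally in shuffled order so earlier answers cannot simply be recalled. A small sketch with invented verdicts:

```python
# Hypothetical repeatability check: one reviewer classifies the same six
# defects in two trials (item order would be shuffled between trials in
# practice so the reviewer cannot remember earlier assignments).
trial_1 = ["logic", "doc", "logic", "ui", "doc", "logic"]
trial_2 = ["logic", "logic", "logic", "ui", "doc", "doc"]

def repeatability(t1, t2):
    """Within-appraiser agreement: fraction of identical verdicts across trials."""
    return sum(a == b for a, b in zip(t1, t2)) / len(t1)

print(f"within-appraiser agreement: {repeatability(trial_1, trial_2):.0%}")
```

A reviewer who cannot reproduce their own classifications is a direct symptom of the ambiguity in the attribute measurement system discussed earlier.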