Thursday, March 1, 2012

Minimum Testing

Recently, I had a discussion on identifying the minimum testing required for a particular product. The historical data suggested that the development process was mature. This was one of the reasons the key stakeholders believed that 'exhaustive testing' was not required & that 'just enough' or 'minimum testing' would suffice.
While performing this minimum testing, we were also expected to measure testing effectiveness.

Certainly, the first step is to validate this expectation. If there really is a need to perform 'just enough' testing, then the following approach may be considered.

In my opinion, the concept of testing quadrants is useful for deriving the solution.

The quadrants not only arrange the cases based on testing type but also recommend how and when they should be executed.


Let us assume that the Q1 cases already satisfy the quality criteria.
Naturally, the desired focus is on the Q2, Q3 & Q4 cases. Even within these focused quadrants, not all need equal priority. Analysis of historical defects & pain areas would suggest which among the three needs higher attention. The testing activities for the quadrant with higher focus need to start early - during the requirement stage. (Here the benefit clearly comes from moving the testing upstream.)
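As a minimal sketch of this historical-defect analysis (the quadrant labels are from the discussion above, but the defect counts and the simple ranking rule are my own hypothetical illustration):

```python
# Hypothetical historical defect counts per quadrant (Q1 assumed healthy).
historical_defects = {"Q2": 34, "Q3": 78, "Q4": 12}

total = sum(historical_defects.values())

# Rank quadrants by their share of past defects; the top-ranked quadrant
# is the one whose testing should start early, at the requirement stage.
ranking = sorted(historical_defects.items(), key=lambda kv: kv[1], reverse=True)

for quadrant, count in ranking:
    print(f"{quadrant}: {count / total:.0%} of historical defects")
```

With these sample numbers, Q3 would get the early, upstream attention; in practice the counts would come from the defect-tracking history of earlier releases.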

Q2 & Q3 both target functionality; however, Q2 focuses on positive, happy-path & end-to-end flows - especially the explicit requirements. Q3 focuses on exploratory & negative test cases - the implicit requirements.
During functional testing, the team should work with the agreed priority between Q2 & Q3 (e.g. if explicit functionality is properly covered in unit testing, then higher attention on Q3 is expected).


Measuring the Effectiveness:

The expectation of "just enough" testing is driving the above approach. So the underlying assumption is that no exhaustive testing is required, because the upstream phases deliver satisfactory, quality output overall.

While building the "just enough" testing approach, a trade-off between time & (coverage + documentation) is made. Test execution thus depends on 'managed exploratory testing', whose scope is identified from the nature of defects observed during earlier phases & the current phase.

The conventional mechanisms (measuring defects mapped to test cases, defect leakage, defect rejection, etc.) should not be treated as the only indicators of testing effectiveness.
Along with these measures, the time & testing thoroughness planned for each module / functionality also play an important role. Dependencies on earlier phase/s - in terms of vagueness, defects detected & defects leaked - also drive the effectiveness of the next phase.

So a mechanism would be required -
a. to map the conventional measures to the time & testing coverage allocated to each quadrant
b. to link the measures (like defect leakage) across all quadrants
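One way such a mechanism could look, sketched with hypothetical figures (the per-quadrant records, field names and the simple ratios below are my own illustration, not a standard formula):

```python
# Hypothetical per-quadrant records: time spent (hours), planned coverage,
# defects found during testing, and defects leaked to the next phase.
quadrants = {
    "Q2": {"hours": 40, "coverage": 0.80, "found": 25, "leaked": 3},
    "Q3": {"hours": 60, "coverage": 0.60, "found": 48, "leaked": 9},
    "Q4": {"hours": 20, "coverage": 0.50, "found": 7,  "leaked": 1},
}

def effectiveness(q):
    """Map conventional measures to the time allocated (point a above)."""
    # Share of this quadrant's defects caught in-phase rather than leaked.
    detection_rate = q["found"] / (q["found"] + q["leaked"])
    # Defects found per hour of testing effort in this quadrant.
    yield_per_hour = q["found"] / q["hours"]
    return {"detection_rate": detection_rate, "yield_per_hour": yield_per_hour}

# Link the leakage measure across all quadrants (point b above).
overall_leakage = (
    sum(q["leaked"] for q in quadrants.values())
    / sum(q["found"] + q["leaked"] for q in quadrants.values())
)

for name, q in quadrants.items():
    m = effectiveness(q)
    print(name,
          f"detection={m['detection_rate']:.0%}",
          f"yield={m['yield_per_hour']:.2f}/h")
print(f"overall leakage: {overall_leakage:.0%}")
```

A quadrant with a high yield per hour but a poor detection rate would signal that its time or coverage allocation, rather than the test cases themselves, is the bottleneck.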