Thursday, March 1, 2012

Minimum Testing

Recently, I had a discussion on identifying the minimum testing required for a particular product. The historical data suggested that the development process was mature. This was one of the reasons the key stakeholders believed that 'exhaustive testing' was not required & 'just enough' or 'minimum testing' would suffice.
While performing this minimum testing, the team was also expected to measure testing effectiveness.

Certainly, the first step is to validate the expectation, & if there really is a need to perform only 'just enough' testing, then the following approach may be considered.

In my opinion, the concept of testing quadrants is useful for deriving the solution.

The quadrants not only arrange the cases based on testing types but also recommend how and when they should be executed.


Let us assume that Q1 cases are already satisfying the quality criteria. 
Naturally, the desired focus is on Q2, Q3 & Q4 cases. Even within these focused quadrants, all do not need equal priority. An analysis of historical defects & pain areas would suggest which among these three needs higher attention. The testing activities for the quadrant with the higher focus need to start early - during the requirement stage. (Here, clearly, the benefit comes from moving the testing upstream.)

Q2 & Q3 both target functionality; however, Q2 focuses on positive, happy-path & end-to-end flows - especially the explicit requirements. Q3 focuses on exploratory, negative test cases - the implicit requirements.
During functional testing, the team should work with the agreed priority between Q2 & Q3 (e.g. if explicit functionality is properly covered in unit testing, then higher attention to Q3 is expected).
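
To make the idea concrete, here is a minimal sketch in Python of how the relative focus on Q2, Q3 & Q4 could be derived from the historical defect distribution. The quadrant descriptions follow this post; the defect counts and the proportional weighting are purely my own illustrative assumptions.

```python
# A rough sketch: derive the relative focus for Q2 / Q3 / Q4 from historical defects.
# Quadrant descriptions follow this post; all counts are invented for illustration.

HISTORICAL_DEFECTS = {
    "Q2": 40,   # explicit functionality: happy-path & end-to-end flows
    "Q3": 75,   # implicit functionality: exploratory & negative cases
    "Q4": 25,   # non-functional aspects
}

def focus_weights(defects):
    """Share of testing attention per quadrant, proportional to past defects."""
    total = sum(defects.values())
    return {quadrant: count / total for quadrant, count in defects.items()}

if __name__ == "__main__":
    for quadrant, weight in focus_weights(HISTORICAL_DEFECTS).items():
        print(f"{quadrant}: {weight:.0%} of the available testing effort")
```

The quadrant with the highest weight is the one whose testing activities should start at the requirement stage, as argued above.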


Measuring the Effectiveness:

The expectation of "just enough" testing drives the above approach. So the underlying assumption is that no exhaustive testing is required because the output from the upstream phases is of satisfactory quality.

While building the "just enough" testing approach, a trade-off is made between time & (coverage + documentation). Test execution thus depends on 'managed exploratory testing' (whose scope is identified based on the nature of defects observed during earlier phases & the current phase).

The conventional mechanisms (measuring defects mapped to test cases, defect leakage, defect rejection, etc.) should not be treated as the only indicators of testing effectiveness.
Along with these measures, the time & testing thoroughness planned for each module / functionality also play an important role. The dependency on earlier phase/s - in terms of vagueness, defects detected & defects leaked - also drives the effectiveness of the next phase.

So a mechanism would be required (a rough sketch follows this list) -
a. to map the conventional measures to the time & testing coverage allocated to each quadrant
b. to link the measures (like defect leakage) across all quadrants
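
The sketch below (in Python) is one way to picture such a mechanism; the record fields and the scoring formula are my own illustrative assumptions, not an established metric. The point is only to show the conventional measures being read against the time & coverage allowed for each quadrant.

```python
# Illustrative only: read the conventional measures against the time & coverage
# actually allocated to each quadrant. Field names & the formula are assumptions.

from dataclasses import dataclass

@dataclass
class QuadrantRecord:
    quadrant: str
    planned_hours: float     # time allocated to this quadrant
    coverage_pct: float      # planned coverage of identified conditions (0-100)
    defects_found: int       # defects mapped to this quadrant's test cases
    defects_leaked: int      # defects of this quadrant's nature found later

    def detection_rate(self) -> float:
        """Conventional measure: share of defects caught before release."""
        total = self.defects_found + self.defects_leaked
        return self.defects_found / total if total else 1.0

    def effectiveness(self) -> float:
        """Detection rate read against the coverage the quadrant was allowed."""
        return self.detection_rate() * (self.coverage_pct / 100)

records = [
    QuadrantRecord("Q2", planned_hours=80, coverage_pct=90, defects_found=30, defects_leaked=2),
    QuadrantRecord("Q3", planned_hours=60, coverage_pct=60, defects_found=25, defects_leaked=8),
    QuadrantRecord("Q4", planned_hours=40, coverage_pct=50, defects_found=10, defects_leaked=5),
]

for r in records:
    print(f"{r.quadrant}: detection {r.detection_rate():.0%}, "
          f"effectiveness {r.effectiveness():.2f}, over {r.planned_hours}h")
```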

Friday, February 24, 2012

We always UNDERSTAND REQUIREMENTS

I strongly believe that a testing professional keeps understanding the requirements - right from the proposal stage till the last activity in the assignment. I will share my opinion on "on the field" techniques / methods for this in a separate blog post.

Here I am showcasing how a simple "list" can do the wonderful work of
1. understanding the "learning pattern" and its link with domain & technology.
2. understanding the impact on quality.
(The above should be done by people with a "technical" or "core testing" inclination.)
3. measuring the effectiveness of KT & its time / cost impact (this is for managers / aspiring managers).

The dedicated phase most probably revolves around understanding the document (any form of requirement document) that shares the vision, expectations, etc. for the product (both documented and undocumented).

After this phase, every week, list down the "new" requirements (everyone has to maintain this list individually).

  • New - could be a requirement that is present (in direct or implicit form) but was not understood
  • New - could also be a requirement that is not specified at all

This difference should be understood well by all.
In free time / on a weekly basis / with a suitable frequency, consolidate these individual lists to arrive at the "team's view" on new requirements.
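
As a throwaway illustration of the consolidation step (the tester names and entries below are invented), the merge can be as simple as:

```python
# A throwaway illustration: merge everyone's weekly "new requirement" notes into
# one team view. Names & entries are invented; matching on lowercase text is naive.

individual_lists = {
    "tester_a": ["Order cancellation after partial shipment", "Bulk upload limit"],
    "tester_b": ["bulk upload limit", "Retry on payment timeout"],
}

team_view = {}
for tester, items in individual_lists.items():
    for item in items:
        team_view.setdefault(item.strip().lower(), set()).add(tester)

for requirement, spotted_by in sorted(team_view.items()):
    print(f"{requirement}  (raised by: {', '.join(sorted(spotted_by))})")
```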

Expand this list to create a table (note - this table is only for those requirements where the team agrees that the requirement is new / a "discovery"; a minimal sketch of it as a CSV follows the list) -

  • domain
  • technology
  • # cases added / deleted / affected by the "discovery"
  • possible impact on design (won't work for functional testing team without any visibility to development) 
  • possible impact on data model (won't work for functional testing team without any visibility to development) 
  • possible impact on code (won't work for functional testing team without any visibility to development) 
  • projected impact on NFRs
  • time (roughly) spent by the whole team to grasp it (do not add a tracker for this! you and your team are already in enough trouble)
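
Here is the promised minimal sketch of such a table kept as a CSV, with one invented sample row. The column names follow the list above; the row values are purely illustrative.

```python
# A throwaway sketch of the "discovery" table - one row per agreed new requirement.
# Column names follow the list above; the sample row is invented for illustration.

import csv

COLUMNS = [
    "requirement", "domain", "technology",
    "cases_added", "cases_deleted", "cases_affected",
    "impact_design", "impact_data_model", "impact_code",
    "impact_nfr", "rough_team_hours",
]

rows = [
    {
        "requirement": "Order cancellation after partial shipment",
        "domain": "Order management", "technology": "Web services",
        "cases_added": 6, "cases_deleted": 0, "cases_affected": 4,
        "impact_design": "unknown", "impact_data_model": "unknown",
        "impact_code": "unknown", "impact_nfr": "response time of the cancel flow",
        "rough_team_hours": 5,
    },
]

with open("discovery_requirements.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```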

Be open to accepting all comments, as understanding a requirement is "highly" subjective. BAs / SMEs / the client might have conveyed it in a different way; other team members (including the design / development team) might have noticed it. Definitely, there will be a few that all "agree" are valid new requirements.

Whenever / if CRs or another commercial mechanism is raised for managing these, your table will help.

Technically, this table would help you to spot a pattern - that the team is likely to miss requirements of a particular nature (a small grouping sketch follows this list). If every project surfaces 1 or 2 such patterns -
1. your organization would get a wonderful trend for a KT effectiveness module / program.
2. further analysis by domain & technology would provide valuable inputs to BAs and the design & development teams.
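
As a rough idea of how that pattern could be surfaced (again purely an illustrative sketch, reading the CSV written in the earlier snippet):

```python
# Group the agreed "discovery" requirements by domain & technology to see which
# kind the team tends to miss. Reads the CSV written in the earlier sketch.

import csv
from collections import Counter

pattern = Counter()
with open("discovery_requirements.csv", newline="") as f:
    for row in csv.DictReader(f):
        pattern[(row["domain"], row["technology"])] += 1

for (domain, technology), count in pattern.most_common():
    print(f"{domain} / {technology}: {count} missed requirement(s)")
```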

In my opinion, the management view of the # of requirements, cost, impact, etc. should be kept separate from the team - this needs to be maintained and tracked by the managers themselves.

So the team should not use the # of new requirements to measure KT effectiveness; rather, the team should derive new patterns, solutions & testing techniques suited to these "discovery" requirements - only then will your core testing knowledge go up.

Thursday, February 23, 2012

Opening!

About Me:
I am not one of those who chose & achieved software testing as their career - I landed here. So far, I do not have any intention of becoming a "manager" / "account head" / "delivery head" and so on - those who manage, deliver & enrich the respective areas in software testing engagements.

I am proud of my career. By now, for sure(!), I have understood that it is a technically challenging field. This field is far beyond spreadsheet- or document-based test cases.

I am aiming to reach a stage where I start providing "sensible" and "practical" software testing solutions based on all that I know about software testing.

Reaching this stage is, in my opinion, one of the toughest tasks in this field.

Why I want to write the blog:
I am still too young in this field to suggest something to someone, & there is nothing great in the "About Me" section that should make me write a blog. Well - ironically, I am still going to write on software testing in this blog.
I am completely aware that there are already superstars / Tendulkars of this field: James Bach, Lisa Crispin, Cem Kaner and many more. But I am not blogging to fill the gap left by superstars / to be a superstar.


I am just trying to post -
1. what I feel & what I think others might be feeling about this field.
2. the technical problems that I have faced (barring and masking everything that I should).
3. solutions to these problems - whichever I know & can provide on a blog.

I have not thought of anything else that can be added here :) - probably time will suggest it.