Thursday, May 17, 2012

Multi-skilled testing is coming our way...

CAT is using the following scenes to represent these thoughts in a different way.

Scene 1: IT guru talk
Four gurus were sitting at a table with beer cans, debating an article they had read on the future of the IT industry.
CAT, sitting at a nearby table, overheard a few dialogues -
..."In a few years' time, only three types of roles will survive:
DT - a developer with a good (must-have) testing background.
ArcMan - an architect with a basic understanding of the management principles of cost, time, and quality.
FT - a technical expert and domain expert in one."
..."There are going to be a few players hosting the basic components in the cloud. Major development work is going to be around binding such components from different vendors together."
..."Multiple dedicated roles might turn out to be an overhead."

Scene 2: Staffing team discussing with CxO
Staffing: It is tough to get NFR testing specialists within our cost and in the time available.
CxO: Why do we hunt for people every time? Why haven't more of our existing professionals been trained to fulfill these special testing needs?
Training: It is tough to free people from their existing schedules for training and grooming.
CxO: Is it unfair to expect multiple skills from experienced testing professionals? Are they going to work with MS Office as their core skillset?
All testing professionals should know at least one skill apart from functional testing and automation; both of those are a must for anyone with more than five years in this industry.
Additionally, they can choose one of the following:
Database
Security
Performance and others in this bucket (load, stress, availability, etc.)
Accessibility
Cross-browser

Scene 3: Agile team discussion - probably a daily meeting
A few dialogues from team meetings -
..."In this sprint we need to check the single-user response and the 10-user response for 5 shortlisted transactions."
..."I have never done such a thing before, and I do not have enough scripting skill either."
..."A performance expert will guide and mentor you for some time."
..."We will help you with a basic understanding of the codebase, the build, and the initial set-up in this sprint."
..."Your progress on performance testing will improve along with the product's performance :)"
..."Be prepared: approximately 6 sprints from now, security is going to come up."

And many more such scenes... let us keep our eyes open and read the situation. It is evident that if we (testing professionals) are in the IT industry, there is no escape from technology.

One of the following should be equally strong, alongside both manual and automated functional testing -
1. Database knowledge with some development ability (at least one database)
2. A scripting language with the ability to write unit test methods (at least one language; a small sketch follows this list)
3. One of the non-functional testing skill-sets
4. Open source - most important. It appears that this is going to be the preferred choice in the future.
A paid tool is a cost to the project when open source tools exist.
"Open source does it with a bit more pain" cannot be the reason to pay for a tool, unless that paid tool is the only possible "technical" solution.
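As a taste of item 2, here is a minimal sketch of unit test methods in Python; the `divide` function and its test cases are purely hypothetical, chosen only to show the shape of such tests.

```python
import unittest

def divide(a, b):
    """Hypothetical function under test."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTests(unittest.TestCase):
    def test_normal_division(self):
        self.assertEqual(divide(10, 2), 5)

    def test_zero_divisor_raises(self):
        # The failure path deserves a test of its own.
        with self.assertRaises(ValueError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()
```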

It is a tough ask - but the changing scenario demands it, so why wait until the last minute? Let us take the first step at least!

Tuesday, May 15, 2012

Context Specific Defect Prediction

Defect Prediction is required.

It is easy to implement a defect prediction model within the context of your project. Although we are testing professionals and not statisticians, some common sense can help us build a 'just enough' defect prediction model for our project.

CAT thinks that the following parameters are key contributors to the defect prediction model:
1. Nature of requirements
2. Defect injection factor for the project (which is linked to the development team)
3. Defect detection factor for the project (which is linked to us)
4. Size under consideration, e.g. 1 use case, 1 FP (function point), 1 Test Story Point, etc.

(You might wonder what a test story point is - CAT will share its thoughts on this in upcoming posts.)

A simple formula will help you achieve the defect prediction. Follow these steps (a code sketch of the whole procedure follows the list):
1. Measure defect density (defects / size) from the "applicable" historic data.
2. Build a three-level scale for each factor based on your gut feel (such that the sum of all coefficients is 1).
e.g. Nature of requirements:
Simple - 0.2, Medium - 0.3, Complex - 0.5
3. Measure the size of the project under consideration.
This size and the one used in the defect density should be in the same unit:
e.g. both places should talk about use cases, or both should talk about test story points.
I cannot predict defects with density in FP (defects per FP) and current size in use cases - unless I know the relationship between FP and use cases "for my project". Equations from any other source will not work as-is unless they are "calibrated" for my project.
4. Use the simple formula:
Predicted Defects = (defect density) * (size)
5. Add our key factors to it:
Enhanced Predicted Defects = (defect density) * (size) * (defect injection factor) * (nature of requirements factor) * (defect detection factor)
6. In the first implementation, consider all factors at the "middle" level.
This raises a question for all intelligent minds - how do I equate both sides of the equation?
We introduce an additional factor for this.
7. The new equation looks like this:
Enhanced Predicted Defects = (defect density) * (size) * (defect injection factor) * (nature of requirements factor) * (defect detection factor) * (correction factor)
In the first version, the correction factor is the value that equates both sides of the equation.
8. From then on, conduct variance analysis after each release / cycle and, based on the entire team's understanding, change the ratings for injection, requirements, and detection.
Revisit the correction factor to equate both sides.
Applying corrections based on variance is the best possible way to mature any home-grown model.
9. In the next release / cycle, the injection and detection factors should not change unless there is a huge change in team composition / technology compared to the previous release.
We should change only the requirements factor.
10. Repeat this in logical cycles - after roughly four rounds, one should have a model in good shape.
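To make the procedure concrete, here is a minimal sketch of the model in Python. All numbers are illustrative placeholders; you must "calibrate" them with your own project's historic data, as the steps above insist.

```python
# Step 2: a three-level scale, coefficients summing to 1 (gut-feel values).
SCALE = {"simple": 0.2, "medium": 0.3, "complex": 0.5}

def enhanced_predicted_defects(density, size, injection, requirements,
                               detection, correction=1.0):
    """Step 7: the full equation with the correction factor."""
    return density * size * injection * requirements * detection * correction

# Step 1: defect density from "applicable" historic data
# (hypothetical: 120 defects found across 60 use cases).
density = 120 / 60      # 2 defects per use case
size = 40               # current release measured as 40 use cases (step 3)

# Step 6: first implementation - every factor at the "middle" level.
mid = SCALE["medium"]
raw = enhanced_predicted_defects(density, size, mid, mid, mid)

# Step 7: pick the correction factor that equates both sides, i.e. makes
# the enhanced equation reproduce the plain density * size prediction.
correction = (density * size) / raw

prediction = enhanced_predicted_defects(density, size, mid, mid, mid,
                                        correction)
print(round(prediction, 2))   # 80.0 defects - equal to density * size

# Step 8: after the cycle, compare actuals with `prediction`, re-rate the
# requirements factor (e.g. SCALE["complex"] if the work proved complex),
# and revisit `correction` so both sides match again.
```

The point of the correction factor is visible here: it lets you bolt gut-feel coefficients onto a density-based prediction without breaking the first-cycle numbers, and then absorbs the variance you measure in later cycles.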

It is hard to find an equation that gives the exact number of defects before we start testing - we do not have Harry Potter's magic wand. It is a "prediction" model, not a "Defect Detection Spell"!

Paid solutions, complex equations, or tools may provide better results - how much better is the key driver for identifying ROI, and thus should be the deciding factor when choosing such tools / reports / equations.

Common sense says that the next step is to use a five-level scale instead of three, to narrow the inaccuracy band further.
Once we are in execution, defect density and size should be taken in the context of the current cycle; historical data is useful only at the beginning.

There are far more advanced ways of predicting defects with a home-grown model. This was just one of them, which hopefully will ignite a defect prediction culture in the testing world.