Saturday, November 24, 2012

Inputs beyond conventions

CAT came across two concepts recently - aesthetics in software testing and Conway's law.

These two are not directly linked with software testing as such. However, when I was listing the primary inputs that we (testers) use apart from the '(so-called) documented requirements', I realized that such inputs are seldom considered, and as a result we lose out on a few crucial aspects.

To elaborate further, let me list out the typical aspects that we consider apart from requirements -
 - methodologies
 - techniques
 - framework
 - statistics
 - theories
 - industry benchmarks / trends

CAT witnessed a presentation at a conference - the speaker (T. Ashok from STAG Software) talked on the aesthetics of software testing. It truly is a nice concept; we generally do not dedicate ourselves to bringing beauty into software testing activities like test design, team composition, etc.
It is not that hard to bring in "beauty": test artifacts can be beautified with good grammar, clarity and 'apt' diagrams instead of huge paragraphs; testing processes can be beautified with clarity and yet discipline; testing tools can be beautified with naming standards, comments, scripting practices and so on...
A point to note - whenever we add beauty, we are in fact adding value to testing! So beautifying is not merely a surplus-time activity.

Conway's law (an eponymous law) suggests that 'organizations that design systems are generally constrained to produce designs which are copies of their communication structures'. Wow! Yet another interesting thought! It is as if an organization's soft skills get reflected in its 'product'.
We as testers never focus on the organization's soft skills...

Well, CAT is not suggesting that we translate an organization's communication structure into a test case and then dig out defects... nor is the suggestion to focus on beautification ('color, fonts, width, huge coding-standard documents, etc.') beyond acceptable limits.
CAT is just asking all testing professionals to raise questions (to themselves) like -
  do we pay enough attention to the organization's soft skills?
  has my team worked on beautifying the deliverables in the true sense?
  are we ignoring an obvious loophole in the existing 'soft skills' and its associated impact on the product under test?

A few unconventional inputs to look for (in addition to those listed above) -
 - the organization's communication structure (as suggested by Conway's law)
 - existing QA / QC activities
 - structure of the BA / product owner team(s)
 - management structure
 - infrastructure availability
These may well provide a useful link or association with one of the existing pain areas, loopholes, etc.

We find a number of models / approaches / methods in the industry that identify the common problems associated with existing testing activities. A close look indicates that these 'unconventional inputs' provide the vital information captured in an 'assessment model', 'compliance model' or 'audit checklist'. In CAT's opinion, an individual should first focus on and understand the unconventional inputs, and then analyse them using one of the industry models.

What next, once we understand the problem areas from this exercise? We can design a solution - it could be a transition approach / test centre (CoE) formation / automation / improvement in the test strategy / a change in the existing toolkit... anything.

So let us think unconventionally a bit...

Saturday, August 4, 2012

Generic onsite offshore model for testing

Most onsite-offshore testing models revolve around the one shown in the image below. The core objective of such a model is to get a judicious mix of onsite and offshore components for the various phases of software testing. While mixing these components, the general expectation from the offshore component is to deliver cost savings to the client / end customer without compromising the quality of the project deliverables.




The requirement gathering stage is executed onsite - at the client's premises; this helps the team interact with the end users and requirement owners and understand both implicit and explicit requirements. The outcome of this phase comprises reverse knowledge transfer sessions from the project team (typically offshore), a high-level testing approach (with the list of business-critical requirements), and a high-level estimate (or a revisit of the proposal-level estimate).

The team then comes back offshore, passes on the entire KT to the offshore (expanded) team and generates a detailed estimate and a detailed test strategy. This strategy and estimate undergo at least three review rounds (a typical scenario) before the team goes ahead and takes a deeper dive.

From here on, the team builds high-level scenarios, maps them to NFRs, identifies links between scenarios (as an input to end-to-end test cases), identifies test data dependencies, stub/harness/driver requirements, POC requirements and queries / unclear & ambiguous elements, and shares all of this with the requirement owners. Once these are reviewed and approved, the other deliverables are produced in sequence.

Obviously, as a best practice, during every stage the team is expected to revise the test strategy and test estimates and share them with the onsite team.

A matured model advocates face-to-face interactions between the customer counterparts and the offshore team in a meeting room during each review and approval stage; e.g. onsite and offshore team members interact again for a week while signing off scenarios, test cases and execution results for each cycle, and during UAT support. These meetings can happen onsite or even offshore; I personally prefer to have all discussions where the core work is happening - during the application development and testing stage I prefer to have all meetings at the offshore development / test centre, and during UAT support I recommend having meetings at the location where the UAT is taking place. Such frequent visits onsite and offshore automatically wipe out dependencies, the ill effects of distributed teams and, most importantly, misunderstandings.

No matter how we change or enhance such models, or even build a proprietary one, the importance and maturity will come only if 'your way of working' wipes out the ill effects of the distributed team, thereby sustaining quality and keeping a tab on cost.

Sunday, June 17, 2012

Test Story Points in agile testing

The story point is a mechanism to identify the "size" of a requirement in agile projects.

Any estimation consists of the following elements:
1. Size - the size of the requirement you are working on
2. Effort - the effort needed to complete all the tasks associated with the requirement. Repeat - "all associated tasks".
3. Cost - the monetary estimate of the expenditure & all other costs incurred while working on the requirement
4. Resources (human & infrastructural) - the skilled people, software/hardware resources, cloud resources, devices required, support requirements (transport, IT support, conference / demo, pantry - anything that can be treated as support), other infrastructural requirements, etc.

Generally, in estimation, the other elements are linked to size.
So here we convert the size (in story points) to effort (in hours), cost (in currency) and resources (number, configuration, skills, etc.). When this link is specific to testing effort, the cost associated with testing and testing resources, the same story point can be termed a "Test Story Point".
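
To make this concrete, here is a minimal sketch in Python of such a conversion. All the per-point rates below are hypothetical numbers used purely for illustration; a real project would calibrate them from its own benchmarks and revisit them every sprint.

# Minimal sketch: translating test story points into the four estimation
# elements. Every per-point rate here is an assumed, illustrative value;
# in practice they come from your project's own calibrated benchmarks.

EFFORT_HOURS_PER_POINT = 6.0     # testing effort (hours) per test story point
COST_PER_POINT = 250.0           # cost (currency units) per test story point
TESTERS_PER_10_POINTS = 1        # rough staffing rule of thumb

def estimate(test_story_points):
    """Convert a size in test story points into effort, cost and resources."""
    return {
        "size_points": test_story_points,
        "effort_hours": test_story_points * EFFORT_HOURS_PER_POINT,
        "cost": test_story_points * COST_PER_POINT,
        "testers_needed": max(1, round(test_story_points / 10 * TESTERS_PER_10_POINTS)),
    }

# e.g. a sprint carrying 18 test story points:
print(estimate(18))
# {'size_points': 18, 'effort_hours': 108.0, 'cost': 4500.0, 'testers_needed': 2}

The spreadsheet approach described later in this post (section D) does exactly this kind of multiplication, with one sheet per element.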

Note:-
Just adding the "term" does not serve the purpose; the purpose is to have a specific focus on testing estimation in agile projects. If we achieve that focus without changing the term, or by assigning a different term, it is still fine.

A) Why is it necessary to give separate focus to testing?
 - Agile principles give the same (or, in my opinion, rather more) importance to testing.
 - Merely linking the size of a requirement to development and technological aspects will not bring out all the needs associated with -
a. software testing
b. the project &
c. the value / deliverable to the customer.

As the 'pigs' committed to all tasks in a sprint / iteration, we are responsible for each and every deliverable going to the end user. Conveying effort, cost and resource requirements without giving proper importance to testing needs indicates that we are not committed.
In fact, the agile principles convey: 'give importance to everything that adds value to the project and improves the quality and productivity of the deliverable'. As testing a particular feature, application performance, accessibility, security, etc. improves quality, the team should provide details of all four elements of estimation associated with testing.

B) How do we go about it?
The first change needed is a change in mentality. This is the toughest change for anyone.
Once we start walking the path, multiple ways open up:
 - your team has abundant experience; they can help you assume a few things to begin with.
 - your organization is working on enough projects, which can provide you with some guidelines, rules of thumb, etc.
 - there are multiple forums, blogs and books available to answer every possible question.
 - if none of these helps, you always have the freedom to take 'pilot readings through a POC' and keep revising them.

C) How much time will it take? It is difficult to take out time from a busy schedule for all this...
Being busy is good; to work on something other than the routine, we just need to revisit the priority of one of the existing activities. Check if there is anything that can wait till we complete this initiative.

It might be that, because we did not give proper attention to estimation, we are already spending more time on one of the activities. (We might have planned to finish something in 3 days with two people, when it actually required more time.)

D) Which is the best possible tool / utility for this?
Us! Any tool or utility ultimately needs a sharp mind behind it. Use everything that fits the needs of your project.
For one of our projects we had an Excel workbook where one test story point was explained in terms of the four elements of estimation. There were separate sheets for each element, translating one test story point into the respective units. Simple multiplication then gave us the details for the desired number of story points.
After every sprint we used to revisit the benchmark, to check whether some change could help us reduce the variance in effort / schedule.

One might use PowerPoint, a mind map, Word, a calculator, any other commercial tool, a home-grown tool, or just mental calculation. Just keep in mind that whatever we choose should add value to the project in terms of all three - cost, time & quality.

Happy testing!

Thursday, May 17, 2012

Multi-skilled testing is coming our way...

CAT is using the following scenes just to represent thoughts in a different way.

Scene 1: IT guru talk
Four gurus were sitting at a table with beer cans. They were debating an article they had read on the future of the IT industry.
CAT, sitting at a nearby table, overheard a few dialogues -
..."In a few years' time, three types of roles will survive:
DT - a developer with a (must-have) good testing background.
ArcMan - an architect with a basic understanding of the management principles of cost, time and quality.
FT - a technical expert & domain expert rolled into one."
...."There are going to be a few players with the basic components hosted in the cloud. Major development is going to be around binding such components from different vendors together."
...."Multiple dedicated roles might turn out to be an overhead."

Scene 2: Staffing team discussing with CxO
Staffing: It is tough to get an NFR testing specialist within our cost & available time.
CxO: Why do we hunt for people every time? Why haven't more of our existing professionals been trained to fulfil these special testing needs?
Training: It is tough to pull people out of their existing schedules for training & grooming.
CxO: Is it unfair to expect multiple skills from experienced testing professionals? Are they going to work with MS Office as their core skill set?
All testing professionals should know at least one skill apart from functional testing & automation - both of which are a must for anyone with more than 5 years in this industry.
Additionally they can choose from one of the following:
Database
Security
Performance & others falling in this bucket (Load, Stress, Availability, etc.)
Accessibility
Cross-browser

Scene 3: Agile team discussion - probably a daily meeting
Few dialogues from team meetings -
..."In this sprint we need to check the single user response & 10 user response for 5 shortlisted transactions"
...."I have never done such thing before & I do not have enough scripting skill as well"
...."Performance expert will guide and mentor for sometime"
...."We will help you with basic understanding of codebase & build & initial set-up in this sprint"
...."your progress on performance testing will improve with product's performance :)"
..."be prepared as approx 6 sprints from now security is going to come"

And many more such scenes... let us keep our eyes open & read the situation. It is evident that if we (testing professionals) are in the IT industry, there is no escape from technology.

One of the following should be equally strong, alongside both manual & automated functional testing -
1. Database knowledge with some development ability (at least 1)
2. A scripting language with the ability to write unit test methods (at least 1)
3. One of the non-functional testing skill sets
4. Open source - most important. It appears that this is surely going to be the preferred choice in future.
A paid tool is a cost to the project when open source tools exist; "open source does it with a bit more pain" cannot be the reason to pay for a tool, unless that paid tool is the only possible "technical" solution.

It is a tough ask - but the changing scenario demands it, so why wait for the last minute? Let us at least take the first step!

Tuesday, May 15, 2012

Context Specific Defect Prediction

Defect Prediction is required.

It is easy to implement a defect prediction model within the context of your project. Although we are testing professionals and not statisticians, some common sense can help us build a 'just enough' defect prediction model for our project.

CAT thinks that the following parameters are key contributors to the defect prediction model:
1. Nature of requirements
2. Defect injection factor for the project (which is linked with the development team)
3. Defect detection factor for the project (which is linked with us)
4. Size under consideration, e.g. 1 use case, 1 FP, 1 test story point, etc.

(You might wonder what a test story point is - CAT will share its thoughts on this in an upcoming post.)

A simple formula will help you achieve the defect prediction. Follow these steps:
1. Measure defect density (defects / size) from the "applicable" historic data.
2. Build a three-level scale based on your gut feel (such that the sum of all coefficients is 1),
e.g. nature of requirements:
Simple - 0.2, Medium - 0.3, Complex - 0.5
3. Measure the size of the project under consideration.
This size and the one used in the defect density must be in the same unit -
e.g. both places should talk about use cases, or both places should talk about test story points.
I cannot predict defects using a density in FP (defects per FP) with the current size in use cases, unless I know the relationship between FP and use cases "for my project". Equations from any other source will not work as-is unless they are "calibrated" for my project.
4. Use the simple formula:
Predicted Defects = (defect density) * (size)
5. Add our key factors to it:
Enhanced Predicted Defects = (defect density) * (size) * (defect injection factor) * (nature of requirements) * (defect detection factor)
6. In the first implementation, consider all factors at the "middle" level.
This raises a question for all the intelligent minds - how do we equate both sides of the equation?
We introduce an additional factor for this.
7. The new equation looks like this:
Enhanced Predicted Defects = (defect density) * (size) * (defect injection factor) * (nature of requirements) * (defect detection factor) * (correction factor)
In the first version, the correction factor is simply the value which equates both sides of the equation.
8. From then on, conduct a variance analysis after each release / cycle and, based on the entire team's understanding, change the ratings for injection, requirements & detection.
Revisit the correction factor to equate both sides.
Applying corrections based on variance is the best possible way to mature any home-grown model.
9. In the next release / cycle, the injection & detection factors should not change unless there is a huge change in team composition / technology compared to the previous release.
We can change only the requirements factor.
10. Repeat this over logical cycles - after roughly four rounds, one should get a model in good shape. (A small sketch of these steps follows below.)
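
To illustrate the steps above, here is a minimal sketch in Python. The three-level coefficient scale and all the sample numbers are assumptions for illustration only; every value has to be calibrated against your own historical data.

# Minimal sketch of the home-grown prediction model described above.
# The coefficient scale and all sample figures are illustrative assumptions.

FACTOR_SCALE = {"low": 0.2, "medium": 0.3, "high": 0.5}   # three levels, summing to 1

def predicted_defects(defect_density, size, injection="medium",
                      requirements="medium", detection="medium", correction=1.0):
    """Enhanced Predicted Defects = density * size * injection * requirements
       * detection * correction (steps 5 and 7)."""
    return (defect_density * size
            * FACTOR_SCALE[injection]
            * FACTOR_SCALE[requirements]
            * FACTOR_SCALE[detection]
            * correction)

def calibrate_correction(actual_defects, defect_density, size):
    """Step 7: in the first version, the correction factor is whatever value
       equates both sides of the equation, with all factors at 'medium'."""
    raw = predicted_defects(defect_density, size, correction=1.0)
    return actual_defects / raw

# Historical release: 0.8 defects per use case, 120 use cases, 90 defects found.
corr = calibrate_correction(actual_defects=90, defect_density=0.8, size=120)

# Next release: 150 use cases, complex requirements, same team & technology,
# so only the requirements factor changes (step 9).
print(round(predicted_defects(0.8, 150, requirements="high", correction=corr), 1))  # 187.5

Keeping the correction factor as an explicit parameter makes the recalibration in step 8 a one-line change after each variance analysis.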

It is hard to find an equation that gives the exact number of defects before we start testing - we do not have Harry Potter's magic wand. It is a "prediction" model, not a "defect detection spell"!

Paid solutions, complex equations or tools may provide better results - how much better is the key driver for identifying the ROI & should thus be the deciding factor when choosing those tools / reports / equations.

Common sense says that the next step is to use a five-level scale instead of three, to reduce the inaccuracy band further.
Once we are in execution, the defect density & size should be in the context of the current cycle; historical data is useful only in the beginning.

There are far more advanced ways of predicting defects using a home-grown model. This was just one of them, which hopefully will ignite a defect prediction culture in the testing world.

Friday, April 6, 2012

Testing in agile projects

This topic is so huge that there will be a series of posts to cover some aspects comprehensively.

I will begin with testing as a task in agile projects.

Agile projects function by following the agile manifesto. The manifesto indicates that there is value in processes, contracts, planning, etc.; however, it advocates people, the team and the core work (of building the product) over these.
People sometimes take undue advantage of this and call a process-less or chaotic way of working "agile".
It is recommended that a novice reads about the myths along with the manifesto.
A few links to understand the myths -
http://www.theappgap.com/exploring-ten-myths-about-agile-development.html
http://stackoverflow.com/questions/1871110/agile-myths-and-misconceptions

Naturally, testing as a task "should retain" its basics even while working on agile projects. Writing test cases, reporting defects, conducting RCA, defect prevention, metrics, etc. all add value in agile projects. The difference is the 'just enough' & context-specific application of these concepts. It is the 'team' that decides what is needed, not a uniform governing process document or an authority like a manager / testing head.

One needs to understand about 2-3 flavors of agile. To begin with, Scrum, XP and AgileRUP would be good enough in my opinion. The purpose of understanding these is to get to know the 'culture & environment' when a particular approach is followed; the culture & environment play a key role in shaping the style of testing in agile projects.

In upcoming series I will cover more about -

  • Test Story Point - how story point can be made a test story point.
  • Useful web sites, books, etc on agile testing.
  • Testing metrics in agile projects.
  • Some open source tools that might be useful.

I will expand this as I move further...

To end this today, I will say that the testing spirit remains unhurt in agile; it is just expressed in a different way.

Thursday, March 1, 2012

Minimum Testing

Recently, I had a discussion on identifying the minimum testing required for a particular product. The historical data suggested that the development process was mature. This was one of the reasons the key stakeholders believed that 'exhaustive testing' was not required & 'just enough' or 'minimum' testing would do.
While performing this minimum testing, the team was expected to measure the testing effectiveness.

Certainly, the first step is to validate the expectation; if there really is a need to perform only 'just enough' testing, then the following approach may be considered.

In my opinion, the concept of testing quadrants is useful to derive the solution.

The quadrants not only arrange the cases based on testing types, but also recommend how and when they should be executed.


Let us assume that the Q1 cases already satisfy the quality criteria.
Naturally, the desired focus is on the Q2, Q3 & Q4 cases. Even within these focused quadrants, not all need equal priority. An analysis of historical defects & pain areas will suggest which among these three needs higher attention. The testing activities for the quadrant with the higher focus need to start early - during the requirement stage. (Here, clearly, the benefit comes from moving testing upstream.)

Q2 & Q3 both target functionality; however, Q2 focuses on positive, happy-path & end-to-end flows - especially the explicit requirements - while Q3 focuses on exploratory and negative test cases - the implicit requirements.
During functional testing, the team should work with the agreed priority between Q2 & Q3 (e.g. if the explicit functionality is properly covered in unit testing, then higher attention on Q3 is expected).


Measuring the Effectiveness:

The expectation of "just enough" testing is driving the above approach. The underlying assumption is that exhaustive testing is not required because of the overall satisfactory, quality output from the upstream phases.

While building "just enough" testing approach, the trade-off between time & (coverage + documentation) is achieved. Test execution is thus dependent on 'managed exploratory testing' (which is identified based on the nature of defects observed during earlier phases & the current phase).

The conventional mechanisms (measuring defects mapped to test cases, defect leakage, defect rejection, etc.) should not be treated as the only indication of testing effectiveness.
Along with these measures, the time & testing thoroughness planned for each module / functionality also play an important role. The dependency on earlier phase/s, in terms of vagueness and defects detected & leaked, is also going to drive the effectiveness of the next phase.

So a mechanism would be required (a small sketch follows this list) -
a. to map the conventional measures with time & testing coverage allocated to each quadrant
b. to link the measures (like defect leakage) from all quadrants
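
Purely as an illustration, a minimal sketch of such a mechanism in Python could look like the one below. The quadrant figures and the idea of scaling the detection ratio by the allocated coverage are assumptions, not a standard formula.

# Sketch: conventional measures per quadrant combined with the time and
# coverage actually allocated to that quadrant. All numbers are made up.

quadrants = {
    "Q2": {"defects_found": 34, "defects_leaked": 3, "time_days": 10, "coverage_pct": 80},
    "Q3": {"defects_found": 21, "defects_leaked": 6, "time_days": 6,  "coverage_pct": 55},
    "Q4": {"defects_found": 8,  "defects_leaked": 1, "time_days": 4,  "coverage_pct": 40},
}

def effectiveness(q):
    """Detection ratio, scaled by how much coverage the quadrant was given."""
    found, leaked = q["defects_found"], q["defects_leaked"]
    detection_ratio = found / (found + leaked)
    return detection_ratio * (q["coverage_pct"] / 100.0)

for name, data in quadrants.items():
    print(name, round(effectiveness(data), 2), "-", data["time_days"], "days invested")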

Friday, February 24, 2012

we always UNDERSTAND REQUIREMENTS

I strongly believe that a testing professional keeps understanding the requirements - right from the proposal stage till the last activity of their assignment. I will share my opinion on "on the field" techniques / methods for this in a separate post.

Here I am showcasing how a simple "list" can do a wonderful job of
1. understanding the "learning pattern" and its link with domain & technology,
2. understanding the impact on quality
(the above should be done by people with a "technical" or "core testing" inclination), and
3. measuring the effectiveness of KT & its time / cost impact (this one is for managers / aspiring managers).

The dedicated phase most probably revolves around understanding the document (any form of requirement document) and the vision, expectations, etc. for the product (both documented and undocumented).

After this phase, every week list down the "new" requirements (everyone has to maintain this list individually).

  • New - could be a requirement that is present (in direct or implicit form) but was not understood
  • New - could also be a requirement that is not specified at all

This difference should be well understood by all.
In free time / on a weekly basis / at a suitable frequency, consolidate these lists to arrive at the "team's view" on new requirements.

Expand this list to create a table (note - this table is only for those requirements where the team agrees that the requirement is new / a "discovery"); a small sketch of such a table follows the list -

  • domain
  • technology
  • # cases added / deleted / affected by the "discovery"
  • possible impact on design (won't work for functional testing team without any visibility to development) 
  • possible impact on data model (won't work for functional testing team without any visibility to development) 
  • possible impact on code (won't work for functional testing team without any visibility to development) 
  • projected impact on NFRs
  • time (roughly) spent by the whole team to grasp it (do not add a tracker for this - you and your team are already in enough trouble)
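
For illustration, here is a minimal sketch in Python of what this table (and the pattern view discussed below) could look like; the fields mirror the list above and all the values are hypothetical.

# Sketch: the "discovery" table as plain records, plus a simple pattern view
# showing which domain / technology combinations keep producing new requirements.
# All field values are hypothetical.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Discovery:
    domain: str
    technology: str
    cases_affected: int      # cases added / deleted / affected by the discovery
    nfr_impact: str          # projected impact on NFRs
    hours_to_grasp: float    # rough time spent by the whole team to grasp it

discoveries = [
    Discovery("payments", "web services", 12, "performance", 6.0),
    Discovery("payments", "web services", 4, "none", 2.0),
    Discovery("reporting", "batch", 7, "none", 3.0),
]

# Which domain / technology pairs does the team keep "discovering" late?
pattern = Counter((d.domain, d.technology) for d in discoveries)
print(pattern.most_common())
# [(('payments', 'web services'), 2), (('reporting', 'batch'), 1)]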

Be open to accepting all comments, as understanding a requirement is "highly" subjective. BAs / SMEs / the client might have conveyed it in a different way; other team members (including the design / development team) might have noticed it. There will definitely be a few which all "agree" are valid new requirements.

Whenever / if CRs or another commercial mechanism is raised for managing these, your table will help.

Technically, this table will help you spot a pattern - that the team is likely to miss requirements of a particular nature. If every project provides one or two such patterns -
1. your organization gets a wonderful trend for a KT effectiveness module / program, and
2. further analysis by domain & technology provides valuable inputs to the BAs and the design & development teams.

In my opinion, the management view of this data (# of requirements, cost, impact, etc.) should be kept separate from the team - that needs to be maintained and tracked by the managers themselves.

So the team should not use the # of new requirements to measure KT effectiveness; rather, the team should derive new patterns, solutions and testing techniques suited to these "discovery" requirements - only then will your core testing knowledge go up.

Thursday, February 23, 2012

Opening!

About Me:
I am not one of those who chose & achieved software testing as their career - I landed here. So far, I do not have any intention of becoming a "manager" / "account head" / "delivery head" and so on - those who manage, deliver and enrich their respective areas in software testing engagements.

I am proud of my career. By now, for sure(!), I have understood that this is a technically challenging field - far beyond spreadsheet- or document-based test cases.

I am aiming to reach a stage where I start providing "sensible" and "practical" software testing solutions based on all that I know about software testing.

Reaching this stage is, in my opinion, one of the toughest tasks in this field.

Why I want to write the blog:
I am still too young in this field to suggest something to someone, & there is nothing great in the "About Me" section that should make me write a blog. Well, still, ironically, I am going to write on software testing in this blog.
I am completely aware that there are already superstars / Tendulkars of this field: James Bach, Lisa Crispin, Cem Kaner and many more. But I am not blogging to fill the gap left by the superstars / to be a superstar.


I am just trying to post -
1. what I feel & what I think others might be feeling about this field,
2. the technical problems that I have faced (barring and masking everything that I should), and
3. whichever solutions to these problems I know & can provide on a blog.

I have not thought of anything else that can be added here :) - probably time will suggest it.