Friday, May 24, 2013

Keep your eyes "Open" .... to get a good toolset

A software tester with no tools to assist or help ... this is as horrible as one can imagine. If one still has any doubt, why not read this article before freezing your view? So a tool is a vehicle, a resource! Well said. On its own, a tool has limitations; it needs a human driver - and who better than us to be that driver?

Once we agree on this point, let us take a step further and determine which tools we need. The biggest blunder is to 'assume' that if there is no automation or performance testing, then no tools are required.

Let me make a small attempt to offer some thoughts:
- the calculator on your Windows machine is a tool.
- the Windows accessibility options are tools.
- the zoom facility provided in your browser is a tool.
- tools are required everywhere - NO MATTER WHAT TYPE OF TESTING IS INVOLVED.

So here is a sincere request from CAT - keep your eyes "Open". With eyes wide open, you will find a number of free (& open source) solutions available. Let me share a list of some interesting tools and utilities:

Memtest - designed to stress test an x86 computer's RAM. The default pass runs nine different tests, varying in access patterns and test data. For OS X, check Memtest OSX.

WebScarab - a framework for analysing applications that communicate using the HTTP and HTTPS protocols. It is written in Java and is thus portable to many platforms. One needs at least a good understanding of the HTTP protocol to work with this tool.

nmon - provides performance data for AIX and Linux platforms and is used for monitoring and analysing servers. It reports a large number of details, e.g. CPU utilization, disk I/O rates, top processes and run-queue information.
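For example, a minimal capture-mode invocation (flags as per the nmon manual; adjust the interval and count to your monitoring window):

```
# One snapshot every 30 seconds, 120 snapshots (about an hour),
# written to a .nmon file for later analysis
nmon -f -s 30 -c 120
```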

perfmon - an SNMP-based performance monitoring tool with a web interface and the facility to add new graphs.

PICT - a Microsoft tool for pairwise test generation. The concept of pairwise testing is extremely useful across testing phases and in both functional and non-functional testing.
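As a minimal sketch of how PICT is driven (the parameters and values below are invented for illustration), you describe the parameters in a plain-text model file and let the tool generate a reduced set of tests that still covers every pair of values:

```
# model.txt - hypothetical test parameters
Browser:  IE, Firefox, Chrome
Platform: Windows, Linux, OSX
UserRole: Admin, Guest
```

Running `pict model.txt` then prints roughly nine tests instead of the 18 exhaustive combinations, while every value pair remains covered.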

Win32::GuiTest - a Perl module for Windows GUI automation.

Xenu's Link Sleuth - a program that checks websites for broken hyperlinks. It is proprietary software, available at no charge.

Screen recorders:
Jing - captures images and video
CaptureFox - a freeware Firefox add-on that records every action within the browser.

HTTP::Recorder - a Perl module; a browser-independent recorder that records interactions with web sites.

Dexpot - a virtual desktop tool that allows you to switch between different virtual desktops easily.

This list is not complete, and in my opinion there can never be a complete list; however, it is a good starting point. There are other well-known tools (for test automation, performance testing, security testing, etc.) that are not in this list - but one can easily find them on the internet.

Tuesday, March 26, 2013

Assess the Accessibility focus


A lot has been realized by most of us regarding the accessibility of our applications. There are various standards, laws and guidelines to help every one of us build a product that provides equal opportunities to everyone in society, e.g. WCAG 2.0, the Equality Act 2010, BS 8878, etc.
Rather than treating this as some form of enforcement upon us, we need to understand the benefits associated with being accessibility compliant. But before that, let us look at a few myths associated with accessibility:
  • Accessibility merely gives people with special needs and elderly people what they require.
In fact, accessibility means providing a great user experience - enjoyment, fun of use and the right value of our product - to disabled people as well.

  • On the journey to accessibility compliance, web sites simply need to follow WCAG 2.0.
WCAG is useful for the techies / designers building the website. The wider WAI documents are useful for understanding how disabled people use the net, and for mobile site creators, browser developers, etc.
Alongside such initiatives, a standard like BS 8878 is useful, as it lays out the entire process we should follow to build and maintain accessibility-compliant web sites and the associated tests.

  • There is not much ROI / what will be the return on investment?
The population that is aware of the internet and uses sites / apps for regular tasks is increasing steadily, so it is advisable to build accessibility-compliant products.
A joint study by Microsoft and Forrester conveys that a huge number of people are likely to benefit from the use of accessible technology.


  • There is no need to perform separate testing for accessibility once the compliance / standard is followed.
Quality is conformance to requirements; following the standards is not sufficient unless the impact of compliance (during development) is validated on different browsers.
Testing accessibility

This needs to be addressed case by case; however, what everyone would like to understand is a common high-level approach that can be used for accessibility testing.
This involves creating a two-fold matrix of your product.
Matrix 1: Map the pages to the level of compliance or the type of accessibility solution expected.
Matrix 2: Map the pages to the checkpoints - this gives a detailed picture of which page should comply with which checkpoint.
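As an illustrative sketch (the page names, levels and checkpoint IDs below are invented), the two matrices can be as simple as:

```python
# Matrix 1: page -> expected compliance level / type of accessibility solution
matrix1 = {"home": "AA", "checkout": "AA", "help": "A"}

# Matrix 2: page -> checkpoints it must comply with (WCAG-style IDs)
matrix2 = {
    "home":     ["1.1.1 non-text content", "2.1.1 keyboard"],
    "checkout": ["1.1.1 non-text content", "2.1.1 keyboard", "3.3.1 error identification"],
    "help":     ["2.1.1 keyboard"],
}

# The detailed picture: which page should comply with which checkpoint, at which level
for page, checkpoints in matrix2.items():
    print(f"{page} (level {matrix1[page]}): {', '.join(checkpoints)}")
```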

For accessibility testing, merely validating the checkpoints on one browser is not sufficient; we should validate the accessibility checkpoints / test cases across multiple browsers and platforms.
This is where the concept of combinatorial testing is extremely useful: the algorithms give us a good trade-off between the number of combinations and the coverage.
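As a sketch of this trade-off, assuming the open-source allpairspy Python package (one of many pairwise generators; the browser / platform / screen-reader values are illustrative):

```python
from allpairspy import AllPairs

parameters = [
    ["IE", "Firefox", "Chrome"],      # browsers
    ["Windows", "Linux", "OSX"],      # platforms
    ["JAWS", "NVDA", "VoiceOver"],    # screen readers
]

# 27 exhaustive combinations collapse to roughly 9 rows,
# yet every pair of values still appears in some row
for i, combo in enumerate(AllPairs(parameters), start=1):
    print(i, combo)
```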

Once this framework is ready, it can be implemented for one product or for the entire portfolio; I highly recommend implementing it across the entire portfolio.

Few tips to remember when we talk about accessibility testing:
  • It goes without saying: whether it is a web-based application or a native mobile app, accessibility is an implicit requirement. In fact, even desktop applications should be accessible solutions.
  • It brings in browser-platform support: the expectations of accessibility compliance automatically bring cross-platform support to your site.
  • While testing the site, follow the rule of thumb that the entire site should be usable with the keyboard alone!
  • Test multimedia pages without speakers... even if it sounds silly, this is the best possible way to identify and highlight the importance of the text-video relationship.
  • Do not rely on results of one tool.

Typical activities in the test strategy:
  • Static testing of code (for accessibility provisioning)
  • Browser testing – manual (checkpoint validation)
  • Check the pages without loading any images
  • Use combination of tool driven tests and manual testing (online tools, JAWS, WAT 2.0).

Fitting this entire framework into your structured testing and risk-based testing is an obvious activity.
We sometimes get the question: how do we test accessibility in the agile world? Is the agile world any different as far as accessibility compliance is concerned? Absolutely not. Simply look at it as fitting the above framework into another framework (the one related to the agile methodology). As with any other testing, it is advisable to move accessibility testing upstream: jointly with the developers, identify which types of test will add value upstream and then just 'add those cases'.



Saturday, November 24, 2012

Inputs beyond conventions

CAT came across two concepts recently - aesthetics in software testing and Conway's law.

Neither of these is directly linked with software testing as such. However, when I was listing the primary inputs that we (testers) use apart from the '(so-called) documented requirements', I realized that such inputs are seldom considered, and as a result we lose out on a few crucial aspects.

To elaborate further, let me list the typical aspects we consider apart from requirements -
 - methodologies
 - techniques
 - framework
 - statistics
 - theories
 - industry benchmarks / trends

CAT witnessed a presentation at a conference where the speaker (T. Ashok from STAG Software) talked about the aesthetics of software testing. It truly is a nice concept; we generally do not dedicate ourselves to bringing beauty into software testing activities such as test design, team composition, etc.
It is not that hard to bring in "beauty": test artifacts can be beautified with good grammar, clarity and 'apt' diagrams instead of huge paragraphs; testing processes can be beautified with clarity coupled with discipline; testing tools can be beautified with naming standards, comments, scripting practices and so on...
A point to note - whenever we add beauty, we are in fact adding value to testing! So beautifying is not merely a surplus-time activity.

Conway's law (an eponymous law) suggests that 'organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations'. Wow! Yet another interesting thought! It is as if an organization's soft skills get reflected in its 'product'.
We as testers rarely focus on an organization's soft skills...

Well, CAT is not suggesting translating an organization's communication structure into a test case and then digging out defects... nor is the suggestion to focus on beautification ('colour, fonts, width, huge coding-standard documents, etc.') beyond acceptable limits.
CAT is just asking all testing professionals to ask themselves questions like -
  do we pay enough attention to the organization's soft skills?
  has my team worked on beautifying the deliverable in the true sense?
  are we ignoring an obvious loophole in the existing 'soft skills' and its associated impact on the product under test?

A few unconventional inputs to look for (in addition to those listed above) -
 - the organization's internal communication structure (as suggested by Conway's law)
 - existing QA / QC activities
 - the structure of the BA / product owner team/s
 - the management structure
 - infrastructure availability
These may well provide a useful link or association with one of the existing pain areas, loopholes, etc.

We find a number of models / approaches / methods in the industry that identify the common problems associated with existing testing activities. A close look indicates that the 'unconventional inputs' provide the vital information captured in an 'assessment model', 'compliance model' or 'audit checklist'. In CAT's opinion, an individual should first focus on and understand the unconventional inputs, and then analyse them using one of the industry models.

What happens after we understand the problem areas from this exercise? We can design a solution - it could be a transition approach / test centre (CoE) formation / automation / an improvement in test strategy / a change in the existing toolkit... anything.

So let us think unconventionally a bit...

Saturday, August 4, 2012

Generic onsite offshore model for testing

Most onsite-offshore testing models revolve around the one shown in the image below. The core objective of such a model is to get a judicious mix of onsite and offshore components for the various phases of software testing. While mixing these components, the general expectation from the offshore component is to deliver cost savings to the client / end customer without compromising the quality of the project deliverables.

[Image: a generic onsite-offshore testing model]
The requirement gathering stage is executed onsite, at the client's premises; this helps the team interact with the end users and requirement owners and understand both implicit and explicit requirements. The outcome of this phase comprises reverse knowledge-transfer sessions from the project team (typically offshore), a high-level testing approach (with the list of business-critical requirements), and a high-level estimation (or a revisit of the proposal-level estimation).

The team then comes back offshore, passes the entire KT on to the (expanded) offshore team, and generates a detailed estimation and a detailed test strategy. The strategy and estimation undergo at least three review rounds (a typical scenario) before the team goes ahead and dives deeper.

From here on, the team builds high-level scenarios, maps them to NFRs, identifies links between scenarios (as an input to end-to-end test cases), identifies the test data dependencies, stub/harness/driver requirements, POC requirements, and queries / unclear or ambiguous elements, and shares these with the requirement owners. Once these are reviewed and approved, the other deliverables are produced in sequence.

Obviously, as a best practice, at every stage the team is expected to revise the test strategy and test estimates and share them with the onsite team.

A matured model advocates face-to-face interactions between the customer counterpart and the offshore team in a meeting room at each review and approval stage; e.g. onsite and offshore team members interact again for a week while signing off scenarios, test cases and execution results for each cycle, and during UAT support. These meetings can happen onsite or even offshore; I personally prefer to hold all discussions where the core work is happening, e.g. during the application development and testing stages I prefer to have all meetings at the offshore development / test centre, and during UAT support I recommend holding meetings at the location where the UAT is taking place. Such frequent visits onsite and offshore automatically wipe out dependencies, the ill effects of distributed teams and, most importantly, misunderstandings.

No matter how we change or enhance such models, or even build our own proprietary one, the importance and maturity will come only if 'your way of working' wipes out the ill effects of the distributed team, thereby sustaining quality and keeping costs in check.

Sunday, June 17, 2012

Test Story Points in agile testing

A story point is a mechanism for identifying the "size" of a requirement in agile projects.

Any estimation consists of the following elements:
1. Size - the size of the requirement you are working on
2. Effort - the effort needed to complete all the tasks associated with the requirement. Repeat - "all associated tasks".
3. Cost - a monetary estimate of the expenditure and all other costs necessary while working on the requirement
4. Resources (human & infrastructural) - the skilled resources, software-hardware resources, cloud resources, devices required, support requirements (transport, IT support, conference / demo, pantry - anything that can be treated as support), other infrastructural requirements, etc.

Generally, in estimation, the other elements are linked to size.
So here we convert the size (in story points) into effort (in hours), cost (in currency) and resources (number, configuration, skills, etc.). When this link is specific to testing effort, testing cost and testing resources, the same story point can be termed a "Test Story Point".
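A minimal sketch of this conversion (the per-point benchmarks below are invented; you would calibrate them from your own project's data):

```python
# Hypothetical benchmarks for one test story point, to be calibrated per project
HOURS_PER_POINT = 6.0     # testing effort, in hours
COST_PER_POINT = 240.0    # in currency units (effort + tools + infrastructure)
TESTERS_PER_POINT = 0.1   # fraction of one skilled tester's sprint capacity

def estimate(test_story_points):
    """Translate test story points into the other three estimation elements."""
    return {
        "effort_hours": test_story_points * HOURS_PER_POINT,
        "cost": test_story_points * COST_PER_POINT,
        "testers": test_story_points * TESTERS_PER_POINT,
    }

print(estimate(13))  # e.g. a 13-point set of testing tasks
```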

Note:- 
Just adding the "term" does not serve the purpose; the purpose is to have a specific focus on testing estimation in agile projects. If we achieve that without changing the term, or by assigning a different term, that is still fine.

A) Why is it necessary to give separate focus to testing?
 - Agile principles give the same (or, in my opinion, rather more) importance to testing.
 - Linking the size of a requirement merely to development and technological aspects will not bring out all the needs associated with -
a. software testing
b. the project &
c. the value / deliverable to the customer.

As the pigs who are committed to all tasks in a sprint / iteration, we are responsible for each and every deliverable going to the end user. Conveying effort, cost and resource requirements without giving proper importance to testing needs indicates that we are not committed.
In fact, the agile principles convey: 'give importance to everything that adds value to the project and improves the quality and productivity of the deliverable'. As testing a particular feature's functionality, performance, accessibility, security, etc. improves quality, the team should provide details of all four estimation elements associated with testing.

B) How do we go about it?
The first change needed is a change in mentality. This is the toughest change for anyone.
Once we start walking the path, multiple ways open up:
 - your team has abundant experience; they can help you assume a few things to begin with.
 - your organization is working on enough projects, which can provide you some guidelines, rules of thumb, etc.
 - there are multiple forums, blogs and books available to answer every possible question.
 - if none of these helps, you always have the freedom to take 'pilot readings through a POC' and keep revising them.

C) How much time will it take? It is difficult to take time out of a busy schedule for all this...
Being busy is good; to work on something other than the routine, we just need to revisit the priority of one of the existing activities. Check whether anything can wait until we complete this initiative.

It may well be that, because we did not give proper attention to estimation, we are spending more time than necessary on one of those activities. (We might have planned to finish something in 3 days with two people when it actually required more time.)

D) Which is the best possible tool / utility for this?
Us! Any tool or utility ultimately needs a sharp mind behind it. Use whatever fits the needs of your project.
For one project we had an Excel workbook in which one test story point was described in terms of the four estimation elements. There was a separate sheet for each element, translating one test story point into the respective units; simple multiplication then gave us the figures for any desired number of story points.
After every sprint we used to revisit the benchmark, to check whether some change could help us reduce the variance in effort / schedule.

One might use PowerPoint, a mind map, Word, a calculator, any other commercial tool, a home-grown tool, just mental calculation, etc. Just keep in mind that whatever we choose should add value to the project in terms of all three: cost, time and quality.

Happy testing!

Thursday, May 17, 2012

Multi-skilled testing is coming our way...

CAT is using the following scenes just to present some thoughts in a different way.

Scene 1: IT guru talk
Four gurus were sitting at a table with beer cans. They were debating an article they had read on the future of the IT industry.
CAT, sitting at a nearby table, overheard a few dialogues -
..."In a few years' time, three types of roles will survive:
DT - a developer with a (must-have) good testing background.
ArcMan - an architect with a basic understanding of the management principles of cost, time and quality.
FT - a technical expert and domain expert rolled into one."
..."There are going to be a few players with the basic components hosted in the cloud. Major development is going to be around binding such components from different vendors together."
..."Dedicated multiple roles might turn out to be an overhead".

Scene 2: Staffing team discussing with CxO
Staffing: It is tough to get an NFR testing specialist within our cost and available time.
CxO: Why do we hunt for people every time? Why haven't more of our existing professionals been trained to fulfil these special testing needs?
Training: It is tough to pull people away from their existing schedules for training and grooming.
CxO: Is it unfair to expect multiple skills from experienced testing professionals? Are they going to work with MS Office as their core skill set?
All testing professionals should know at least one skill apart from functional testing and automation; both of those are a must for anyone with more than 5 years in this industry.
Additionally, they can choose one of the following:
Database
Security
Performance and the others falling in this bucket (load, stress, availability, etc.)
Accessibility
Cross-browser

Scene 3: Agile team discussion - probably a daily meeting
Few dialogues from team meetings -
..."In this sprint we need to check the single user response & 10 user response for 5 shortlisted transactions"
...."I have never done such thing before & I do not have enough scripting skill as well"
...."Performance expert will guide and mentor for sometime"
...."We will help you with basic understanding of codebase & build & initial set-up in this sprint"
...."your progress on performance testing will improve with product's performance :)"
..."be prepared as approx 6 sprints from now security is going to come"

And many more such scenes... let us keep our eyes open and read the situation. It is evident that if we (testing professionals) are in the IT industry, there is no escape from technology.

Along with both manual and automated functional testing, one should be equally good at one of the following -
1. Database knowledge with some development ability (at least one database)
2. A scripting language with the ability to write unit test methods (at least one language)
3. One of the non-functional testing skill sets
4. Open source - most important. It appears this is surely going to be the preferred choice in future.
A paid tool is a cost to the project when there are open-source alternatives; 'open source does it with a bit more pain' cannot be the reason to pay for a tool, unless that paid tool is the only possible "technical" solution.

It is a tough ask, but the changing scenario demands it; why wait for the last minute? Let us at least take the first step!

Tuesday, May 15, 2012

Context Specific Defect Prediction

Defect Prediction is required.

It is easy to implement a defect prediction model within the context of your project. Although we are testing professionals and not statisticians, some common sense can help us build a 'just enough' defect prediction model for our project.

CAT thinks that the following parameters are key contributors to the defect prediction model:
1. Nature of requirements
2. Defect injection factor for project (which is linked with development team)
3. Defect detection factor for project (which is linked with us)
4. The size under consideration, e.g. 1 use case, 1 FP, 1 test story point, etc.

(You might wonder what a test story point is - CAT will share its thoughts on this in an upcoming post.)

A simple formula will help you achieve defect prediction. Follow these steps:
1. Measure the defect density (defects / size) from the "applicable" historical data.
2. Build a three-level scale based on your gut feel (such that the sum of all coefficients is 1),
e.g. nature of requirements:
Simple - 0.2, Medium - 0.3, Complex - 0.5
3. Measure the size of the project under consideration.
This size and the one behind the defect density must use the same unit,
e.g. both places should talk about use cases, or both should talk about test story points.
I cannot predict defects from a density in FP (defects per FP) and a current size in use cases - unless I know the relationship between FP and use cases "for my project". Equations from any other source will not work as-is unless they are "calibrated" for my project.
4. Use the simple formula:
Predicted Defects = defect density * size
5. Add our key factors to it:
Enhanced Predicted Defects = (defect density) * (size) * (defect injection factor) * (nature of requirements) * (defect detection factor)
6. In the first implementation, consider all factors at the "middle" level.
This raises a question in all intelligent minds - how do I equate both sides of the equation?
We introduce an additional factor for this.
7. The new equation looks like this:
Enhanced Predicted Defects = (defect density) * (size) * (defect injection factor) * (nature of requirements) * (defect detection factor) * (correction factor)
In the first version, the correction factor is whatever value equates both sides of the equation.
8. Thereafter, conduct a variance analysis after each release / cycle and, based on the entire team's understanding, change the ratings for injection, requirements and detection.
Revisit the correction factor to equate both sides again.
Applying corrections based on variance is the best possible way to mature any home-grown model.
9. In the next release / cycle, the injection and detection factors should not change unless there is a huge change in team composition / technology compared with the previous release.
We can change only the requirements factor.
10. Repeat this in logical cycles - after roughly four rounds, one should get a model in good shape (a small sketch of the model follows this list).
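A minimal sketch of these steps in code (the densities, sizes and factor values are invented for illustration; you would calibrate them from your own project's history):

```python
# A 'just enough' defect prediction sketch; all numbers below are invented
# and must be calibrated against your own project's data.

# Three-level scales built on gut feel; each scale's coefficients sum to 1
REQUIREMENTS = {"simple": 0.2, "medium": 0.3, "complex": 0.5}
INJECTION = {"low": 0.2, "medium": 0.3, "high": 0.5}   # linked to the development team
DETECTION = {"low": 0.2, "medium": 0.3, "high": 0.5}   # linked to us, the testers

def enhanced_prediction(density, size, req, inj, det, correction=1.0):
    """Enhanced Predicted Defects = density * size * injection * requirements * detection * correction."""
    return density * size * INJECTION[inj] * REQUIREMENTS[req] * DETECTION[det] * correction

# Hypothetical history: 3 defects per use case; current size is 50 use cases
density, size = 3.0, 50
simple = density * size                                 # step 4: 150 predicted defects

# Step 6: first implementation with every factor at the "middle" level
baseline = enhanced_prediction(density, size, "medium", "medium", "medium")

# Step 7: the correction factor is whatever value equates both sides
correction = simple / baseline
print(enhanced_prediction(density, size, "medium", "medium", "medium", correction))  # ~150
```

After each release, variance analysis would adjust the factor ratings and the correction factor, exactly as steps 8-10 describe.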

It is hard to find an equation that gives the exact number of defects before we start testing - we do not have Harry Potter's magic wand. It is a "prediction" model, not a "defect detection spell"!

Paid solutions, complex equations or tools may provide better results - how much better is the key driver of ROI, and should thus be the deciding factor when choosing these tools / reports / equations.

Common sense says the next step is to use a five-level scale instead of three, to narrow the inaccuracy band further.
Once we are in execution, the defect density and size should be taken in the context of the current cycle; historical data is useful only in the beginning.

There are much more advanced ways of predicting defects with a home-grown model. This was just one of them, which will hopefully ignite a defect prediction culture in the testing world.