Wednesday, September 18, 2013

Test Strategy in Agile Project

Imagine if we build the test strategy in an interesting way - where the strategy for each user story is managed separately, in the form of test story cards. Each USC (User Story Card) demands a specific approach and attention for testing. It becomes difficult to manage this in the typical Word / PPT document that forms the test strategy of a project.

Unlike the traditional approach, the test strategy itself needs revision and recreation in agile projects. In such a case, maintaining the USC-specific aspects in test story cards (TSCs) is advisable.
The common aspects and dependencies that we need to address over sprints, across Scrum teams, can be maintained in a master test strategy - obviously with references to the relevant test story cards.

One depiction of an agile test strategy -

The master strategy can be a traditional Word / PPT document. However, it is essential that it is revisited during sprint planning and even in retrospectives.

It is advisable not to constrain ourselves to one approach for building the test strategy. Imagine that, instead of a Test Story Card, the team builds a mind map for each USC.

One of the core aspects of building the test strategy is a planned dry run of the testing that we are going to perform. Rather than worrying about the template, sections and fonts, a true testing professional must focus on the testing approach, techniques, risks and, most importantly, the proposed testing solutions.

A test strategy should help the project; if it creates value, it will be demanded. Otherwise, merely following any approach with a label ('agile strategy', 'scrum test strategy') will bring in all the disadvantages that agile aims to wipe out.

We must create value; otherwise it is no wonder if testers face the question 'what will you do while the developer is building the code?'. Refrain from building a strategy that does not add value to the project - sometimes even an email is enough to convey the true intent of a test strategy.

Finally, thanks to Fiona Charles - who encouraged me yesterday (during the EuroSTAR online conference) to blog these thoughts.

Monday, September 16, 2013

Big Data --- is it a big testing problem?

Lack of knowledge is a bigger problem than lack of tools or skills to work with.
Probably this is the situation with relatively uncommon things in this world... Imagine a new sport, a newly discovered planet, new archaeological evidence or even a recent technology.

Probably Big Data, and especially Big Data testing, is in a similar zone at present. Like any product, to test Big Data based products we need -
different testing types (functional and non-functional),
a well-formed test data management approach,
thoroughly planned test environment management.

Big Data processing involves three steps - gathering the data from various nodes, performing the (MapReduce) operations to get the output, and loading the output onto downstream systems for further processing.
As the technology deals with huge volumes of data, functional testing needs to be carried out at every stage to detect coding errors and / or configuration (node) errors. This means that functional testing should involve a minimum of three stages:
- Pre-processing validation (on extracted data)
- Validation of processed data (before loading onto downstream systems)
- Validation of extracted and loaded data
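The stage checks above can be backed by small reconciliation scripts. As a minimal sketch (the file paths are hypothetical, and it assumes a stage - such as the final load - where records should pass through unchanged), a record count plus an order-independent checksum can confirm that nothing was dropped or altered between two stages:

```python
import hashlib

def file_stats(path):
    """Return (record_count, order-independent checksum) for a text file."""
    count, checksum = 0, 0
    with open(path, "rb") as f:
        for line in f:
            count += 1
            # XOR of per-record hashes ignores record order across nodes
            checksum ^= int.from_bytes(hashlib.md5(line.strip()).digest()[:8], "big")
    return count, checksum

def reconcile(source, target):
    """Stage check: same records before and after a pass-through stage."""
    return file_stats(source) == file_stats(target)
```

Usage would be, for example, `reconcile("processed/part-00000.csv", "downstream/loaded.csv")` after the load step.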

Big Data technology is also associated with a number of "V"s - some say three, some say four or even five. From a testing perspective, we will consider Volume, Velocity and Variety.

Volume:
Manual comparison is out of the question considering the quantity. It might be carried out only in exceptional instances, and in my opinion only with a sampling technique.
File comparison scripts / tools can be run in parallel on multiple nodes.
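A minimal sketch of such a comparison script (the file names and the choice of MD5 are illustrative; on a real cluster the same idea would run per node) hashes files chunk by chunk, so even huge files never need to fit in memory, and compares the digests in parallel:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def digest(path, chunk_size=1 << 20):
    """MD5 of a file, read in 1 MB chunks to keep memory flat."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def compare_outputs(pairs, workers=8):
    """Compare (expected, actual) file pairs concurrently; return mismatched pairs."""
    files = [f for pair in pairs for f in pair]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        digests = dict(zip(files, pool.map(digest, files)))
    return [(e, a) for e, a in pairs if digests[e] != digests[a]]
```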

Velocity:
Performance testing provides vital inputs on the speed of operation and the throughput of specific processes.

Variety:
Unstructured data (text based), social media data, log files, etc. are some of the formats that add to the variety of data handled by Big Data.
To compare structured data, scripts need to be prepared that produce the output in the desired format; the actual output can then be compared with this desired output.
Verifying unstructured data is largely a manual testing activity. Automation may not pay off here due to the variety of formats handled. The best bet is to analyse the unstructured data and build the best possible test scenarios to get the maximum coverage.
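For the structured case, one possible sketch (the comma-delimited format and the normalisation rules are assumptions for illustration) is to canonicalise each record and compare expected against actual output as multisets, so record order and incidental whitespace do not trigger false failures:

```python
from collections import Counter

def canonical(record):
    """Normalise one delimited record: trim whitespace around each field."""
    return tuple(field.strip() for field in record.split(","))

def compare_structured(expected_lines, actual_lines):
    """Order-independent comparison; returns (missing, unexpected) records."""
    diff = Counter(map(canonical, expected_lines))
    diff.subtract(Counter(map(canonical, actual_lines)))
    missing = {r: n for r, n in diff.items() if n > 0}      # expected only
    unexpected = {r: -n for r, n in diff.items() if n < 0}  # actual only
    return missing, unexpected
```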

EDT (Environment, Data, Tools) of NFR testing:
Setting up the test environment, building dummy data in volume and utilizing proper tools are key aspects of non-functional testing, and these are no different in Big Data testing.

Situational Tests:
The situation that induced adoption of Big Data should also be reflected while building test scenarios.
e.g. for an investment bank, a government regulation may induce the need for Big Data based structures.

Big Data has a profound impact on the global economy; Big Data testing in turn demands a good mix of innovation and common sense, of tools and test cases. The testing community should evolve and live up to this; we have done it in the past and we will keep doing it in the future.

Friday, May 24, 2013

Keep your eyes "Open" .... to get a good toolset

A software tester with no tools to assist or help ... this is as horrible as one can imagine. If one still has any doubt, do read this article before freezing your view. So a tool is a vehicle, a resource! Well said. On its own a tool has limitations; it needs a human driver - and who other than us can be a good driver?

Once we agree on this point, let us take a step further and determine which tools we need. The biggest blunder is to 'assume' that if there is no automation or performance testing, then tools are not required.

Let me make a small attempt to provide some vibrant thoughts:
- the calculator available on your Windows machine is a tool.
- the Windows accessibility options are tools.
- the zoom facility provided in your browser is a tool.
- tools are required everywhere - NO MATTER WHAT TYPE OF TESTING IS INVOLVED.

So here is a sincere request from CAT - keep your eyes "Open". With wide-open eyes you will find a number of free (& open source) solutions. Let me share a list of some interesting tools and utilities:

Memtest - designed to stress test the RAM of x86 computers. The default pass runs 9 different tests, varying in access patterns and test data. For OS X, check Memtest OSX.

WebScarab - this framework is used for analysing applications that communicate using the HTTP and HTTPS protocols. It is written in Java, and is thus portable to many platforms. One needs at least a good understanding of the HTTP protocol to work with this tool.

nmon - provides performance data for AIX and Linux platforms and is used for monitoring and analysing servers. A large number of details are provided by this tool, e.g. CPU utilization, disk I/O rates, top processes and run queue information.

perfmon - SNMP based performance monitoring tool with web interface and facility to add new graphs.

PICT - a Microsoft tool for pairwise testing. The concept of pairwise testing is extremely useful across testing phases and in both functional and non-functional testing.
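As an illustration of how the tool is driven (the factors and values below are made up), a PICT model file simply lists each parameter with its values; running `pict model.txt` then emits a pairwise-covering set of test cases rather than the full cartesian product:

```
OS:       Windows, Linux, OSX
Browser:  IE, Firefox, Chrome
Locale:   en, fr, de
```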

Win32::GuiTest - a Perl module for Windows GUI automation

Xenu's Link Sleuth - a computer program that checks for broken hyperlinks. This is proprietary software available at no charge.

Screen recorders:
Jing - captures images and video.
CaptureFox - a freeware Firefox add-on that records every action within the browser.

HTTP::Recorder - a browser-independent Perl module that records interactions with web sites.

Dexpot - a virtual desktop tool that allows you to switch between different virtual desktops easily.

This list is not complete and, in my opinion, there can never be a complete list. However, it is a good starting point. There are other well-known tools (for test automation, performance testing, security testing, etc.) that are not in this list - but one can easily find them on the internet.

Tuesday, March 26, 2013

Assess the Accessibility focus


A lot has been realized by most of us regarding the accessibility of applications. There are different standards, laws and guidelines to help each of us build a product that provides equal opportunities to everyone in society, e.g. WCAG 2.0, the Equality Act 2010, BS 8878, etc.
Rather than considering this as some form of enforcement on us, we need to understand the benefits associated with being accessibility compliant. But before this, let us look at a few myths associated with accessibility:
  • Accessibility gives only what is needed by people with special needs and elderly people.
Accessibility actually means providing a great user experience – enjoyment, fun of use and the right value of our product – to people with disabilities.

  • In the journey of accessibility compliance, web sites should follow WCAG 2.0.
WCAG is useful for techies / designers in building the website. The WAI documents are useful for understanding how disabled people use the net, and for mobile site creators, browser developers, etc.
Along with such initiatives, a standard like BS 8878 is useful – as it provides an entire process for how we should build and maintain accessibility compliant web sites and the associated tests.

  • There is not much ROI / what will be the return on investment?
The population aware of the internet and using sites / apps for their regular tasks is increasing steadily. It is advisable to build an accessibility compliant product.
A joint study by Microsoft and Forrester conveys that a huge number of people are likely to benefit from the use of accessible technology.


  • There is no need to perform separate testing for accessibility once the compliance / standard is followed.
Quality is conformance to requirements; following the standards is not sufficient - unless the impact of compliance (during development) is validated on different browsers.
Testing accessibility

This needs to be addressed case by case; however, what all of us would like to understand is a common high-level approach that can be used for accessibility testing.
This involves creating a two-fold matrix for your product.
Matrix – 1: Map the pages based on the level of compliance or the type of accessibility solutions expected.
Matrix – 2: Map the pages to the checkpoints – this provides a detailed picture of which page should comply with which checkpoint.
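A minimal sketch of such a matrix in code (the pages, checkpoints and result format are all illustrative, not from any standard tool) maps each page to its checkpoints and then reports any page whose mapped checkpoints were not all validated in a test run:

```python
# Matrix 1: page -> expected compliance level (illustrative pages and levels)
page_level = {"home": "AA", "search": "AA", "checkout": "A"}

# Matrix 2: page -> WCAG checkpoints that page must satisfy (illustrative)
page_checkpoints = {
    "home": {"1.1.1 text alternatives", "2.1.1 keyboard"},
    "search": {"1.1.1 text alternatives", "3.3.2 labels or instructions"},
    "checkout": {"2.1.1 keyboard"},
}

def coverage_gaps(results):
    """results: {(page, checkpoint): True/False} from a test run.
    Returns each page mapped to checkpoints that failed or were never run."""
    gaps = {}
    for page, checkpoints in page_checkpoints.items():
        failed = sorted(cp for cp in checkpoints if not results.get((page, cp)))
        if failed:
            gaps[page] = failed
    return gaps
```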

For accessibility testing, merely validating the checkpoints on one browser is not sufficient; we should validate the accessibility checkpoints / test cases across multiple browsers and platforms.
This is where the concept of combinatorial testing is extremely useful. The algorithms provide a good trade-off between the number of combinations and coverage.
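A small greedy all-pairs generator (a sketch, not a production algorithm; the browser / OS / assistive-technology factors are illustrative) shows that trade-off: it covers every pair of values with far fewer cases than the full cartesian product:

```python
from itertools import combinations, product

def all_pairs(params):
    """Greedy pairwise generator. params: {factor_name: [values]}."""
    names = list(params)
    # every (factor, value, factor, value) pair that must appear in some case
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add((a, va, b, vb))
    cases = []
    while uncovered:
        best, best_cov = None, -1
        # pick the full combination that covers the most still-uncovered pairs
        for combo in product(*(params[n] for n in names)):
            case = dict(zip(names, combo))
            cov = sum(case[a] == va and case[b] == vb
                      for a, va, b, vb in uncovered)
            if cov > best_cov:
                best, best_cov = case, cov
        cases.append(best)
        uncovered = {(a, va, b, vb) for a, va, b, vb in uncovered
                     if not (best[a] == va and best[b] == vb)}
    return cases
```

The exhaustive inner loop is fine for a handful of factors; real tools (PICT, AllPairs) scale much further.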

Once this framework is ready, it can be implemented for one product or an entire portfolio. It is highly recommended to implement it across the entire portfolio.

Few tips to remember when we talk about accessibility testing:
  • Goes without saying: whether it is a web based application or a native mobile app, the accessibility requirement goes without saying. In fact, even desktop applications should be accessible solutions.
  • Brings in browser-platform support: the expectations from accessibility compliance automatically bring in cross-platform support for your site.
  • While testing the site, follow a thumb rule that the entire site should be operable with the keyboard alone!
  • You should test multimedia pages without speakers ... even if it sounds silly, this is the best possible way to identify and highlight the importance of the text-video relation.
  • Do not rely on results of one tool.

Typical activities in the test strategy:
  • Static testing of code (for accessibility provisioning)
  • Browser testing – manual (checkpoint validation)
  • Check the pages without loading any image on the page
  • Use combination of tool driven tests and manual testing (online tools, JAWS, WAT 2.0).
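As one tiny example of static testing for accessibility provisioning (a sketch using only the Python standard library; real audits would use the tools mentioned above), the following flags <img> tags that lack a non-empty alt attribute - WCAG checkpoint 1.1.1:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Records the position of every <img> without a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if not alt or not alt.strip():
                self.violations.append(self.getpos())  # (line, column)

def missing_alt(html):
    checker = AltTextChecker()
    checker.feed(html)
    return checker.violations
```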

Fitting this entire framework into your structured testing and risk based testing is an obvious activity.
We do sometimes get the question of how to test accessibility in the agile world ... Is the agile world different as far as accessibility compliance is concerned? Absolutely not. Merely look at it as fitting the above framework into another framework (related to the agile methodology). It is advisable to move upstream in the case of accessibility testing (like any other testing). Jointly with developers, identify which type of test will add value upstream and then just 'add those cases'.