I strongly believe that a testing professional keeps on understanding the requirements, right from the proposal stage till the last activity in their assignment. I will share my opinion on "on the field" techniques / methods for this in a separate blog.
Here I am showcasing how a simple "list" can do wonderful work in:
1. understanding the "learning pattern" and its link with domain & technology.
2. understanding the impact on quality.
(The above two are for people with a "technical" or "core testing" inclination.)
3. measuring the effectiveness of KT & its time/cost impact (this one is for managers / aspiring managers).
The dedicated KT phase most probably revolves around understanding the requirement document (in whatever form it exists) and the vision, expectations, etc. shared about the product (both documented and undocumented).
Post this phase, every week list down the "new" requirements (everyone has to maintain this list individually).
- "New" could be a requirement that is present (in direct or implicit form) but was not understood earlier.
- "New" could also be a requirement that was never specified.
This difference should be understood well by all.
In free time / on a weekly basis / at whatever frequency suits the team, consolidate these individual lists to arrive at the "team's view" on new requirements.
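To make the consolidation step concrete, here is a minimal sketch (not from the original post; the member names and requirement notes are made up) where each tester's weekly list is just short free-text notes, and "consolidation" is simply counting how many members flagged the same candidate. In practice the matching is a team discussion, not exact string equality.

```python
from collections import Counter

# Hypothetical individual weekly lists of candidate "new" requirements.
weekly_lists = {
    "tester_a": ["bulk upload limit not specified", "audit trail for edits"],
    "tester_b": ["audit trail for edits", "timezone handling in reports"],
    "tester_c": ["audit trail for edits", "bulk upload limit not specified"],
}

def consolidate(lists_by_member):
    """Merge individual lists and count how many members flagged each candidate."""
    counts = Counter()
    for items in lists_by_member.values():
        counts.update(set(items))  # set() so one member counts a candidate only once
    return counts.most_common()

for candidate, votes in consolidate(weekly_lists):
    print(f"{votes} member(s): {candidate}")
```

Candidates flagged by more than one member are a natural starting point for the "team's view" discussion.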
Expand this list into a table (note: this table is only for those requirements where the team agrees that the requirement is new, i.e. a "discovery"); a minimal sketch of one such record follows the column list below:
- domain
- technology
- # of cases added / deleted / affected by the "discovery"
- possible impact on design (won't work for a functional testing team without any visibility into development)
- possible impact on the data model (won't work for a functional testing team without any visibility into development)
- possible impact on code (won't work for a functional testing team without any visibility into development)
- projected impact on NFRs
- time (roughly) spent by the whole team to grasp it (do not add a tracker for this! you and your team are already in enough trouble)
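For illustration, here is a minimal sketch (my own field names, not a prescribed format) of one row of this table as a record; rename the fields to match your project's vocabulary and keep the table in whatever tool the team already uses.

```python
from dataclasses import dataclass

@dataclass
class DiscoveryRecord:
    """One agreed "discovery" requirement, with the columns listed above."""
    requirement: str             # short description of the discovered requirement
    domain: str                  # business domain it belongs to
    technology: str              # technology area it touches
    cases_added: int = 0         # test cases added because of the discovery
    cases_deleted: int = 0       # test cases deleted
    cases_affected: int = 0      # existing test cases that needed changes
    design_impact: str = ""      # leave blank if the team has no design visibility
    data_model_impact: str = ""  # leave blank if the team has no development visibility
    code_impact: str = ""        # leave blank if the team has no development visibility
    nfr_impact: str = ""         # projected impact on NFRs
    hours_to_grasp: float = 0.0  # rough time the whole team spent; not formally tracked

# Hypothetical example row.
row = DiscoveryRecord(
    requirement="audit trail for edits",
    domain="claims processing",
    technology="web UI",
    cases_added=6,
    cases_affected=3,
    nfr_impact="retention period affects storage sizing",
    hours_to_grasp=4.0,
)
```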
Be open to accepting all comments, as understanding a requirement is "highly" subjective. BAs / SMEs / the client might have conveyed it in a different way, and other team members (including the design / development team) might have noticed it already. There will definitely be a few that everyone "agrees" are valid new requirements.
Whenever / if CRs or another commercial mechanism is raised for managing these, your table would help.
Technically, this table would help you to see a pattern: the team is likely to miss requirements of a particular nature. If every project surfaces one or two such patterns:
1. your organization would get a wonderful trend to feed a KT effectiveness module / program.
2. further analysis by domain & technology (see the grouping sketch below) would provide valuable inputs to BAs and the design & development teams.
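As a minimal sketch of that domain & technology analysis (assuming the hypothetical DiscoveryRecord rows from the earlier example), grouping discoveries by (domain, technology) shows where the misses cluster:

```python
from collections import Counter

def missed_requirement_pattern(rows):
    """Count discoveries per (domain, technology) pair, most frequent first."""
    return Counter((r.domain, r.technology) for r in rows).most_common()

# Usage (with a list of DiscoveryRecord objects):
# for (domain, tech), count in missed_requirement_pattern(all_rows):
#     print(f"{count} discoveries in {domain} / {tech}")
```

A pair that keeps showing up across projects is exactly the kind of recurring pattern worth feeding back into KT, the BAs, and the design / development teams.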
In my opinion, the management view of these discoveries (# of requirements, cost, impact, etc.) should be kept separate from the team; it needs to be maintained and tracked by the managers themselves.
So the team should not relate the # of new requirements to KT effectiveness; rather, the team should derive new patterns, solutions, and testing techniques suited to these "discovery" requirements. It is only then that your core testing knowledge would go up.