Building a Test Case Management solution
I've recently been looking at how to build a reasonable test case management solution (good != Word documents) for our company. I quickly learned this is not a very well-developed field. Mercury TestDirector seems to dominate the commercial field, with the other QA product companies (Compuware, IBM Rational) following suit, but not quite there yet.
Test Case Management as I understand it deals with the following objects:
* Test Plans
* Test Cases
* Test Labs
* Test Schedule
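To make the vocabulary concrete, here is a rough sketch of how I currently picture these objects hanging together. This is just my own mental model expressed in Python - the class and field names are my invention, not taken from any particular tool:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TestCase:
    name: str
    steps: list          # manual steps, or a pointer to an automated test script
    requirements: list   # requirement / change-request IDs this case covers

@dataclass
class TestLab:
    name: str            # a concrete test bed: OS, browser, DB version, hardware...
    configuration: dict

@dataclass
class TestPlan:
    version: str         # the product version under test
    cases: list = field(default_factory=list)

@dataclass
class TestSchedule:
    plan: TestPlan       # what we intend to test
    lab: TestLab         # where it will run
    start: date
    end: date
```

In this picture the schedule is what binds a plan to a concrete lab for a window of time; most of the deliverables listed below fall out of how these four objects relate.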
The main deliverables expected from a Test Case Management solution are:
* It should provide visibility into the testing process - both the plan and the actual execution.
* It should be risk-oriented, focusing on the riskier aspects of the system first.
* It should facilitate day-to-day management of the testing team.
* It should present a coherent picture connected to the Issue Tracking tool (what bugs are blocking tests, what tests need to be ready for a certain release, etc.)
* It should present a coherent picture connected to the automated testing frameworks (if the nightly smoke/sanity run failed, the Test Case Management tool should reflect that without requiring a QA engineer to manually copy and paste the information - see the reporting sketch after this list).
* Tracking pass/fail status for each test
* Tracking pass percentage per module, per version, against milestone requirements
* Dynamic priority management which affects the testers' to-do lists
* Coverage of requirements by test cases (should each change request be linked to at least one test case? should closing a change request require a successful test run?)
* Management of test beds relevant for each test case
* Manage test cases that are blocked by other change requests (bugs/enhancements)
* Accessibility to test case information from each bug / change request.
* Accessibility to test logs from the relevant test case instance
* Manage the relationships between different test cases - it's quite useful to create dependencies, e.g. run the login test case, then run the change password test case.
* MANY test case instances for ONE test case in ONE version (see the data model sketch after this list)
* ONE requirement can be tested by MANY test cases
* ONE test case may test MANY requirements?
* ONE test script may be used in MANY test cases
* ONE test case may run in MANY configurations
* Time estimate for coverage of a version
* Last time a specific test was run, by whom, and with what results
* Specific test case results across builds
* Ability to share the test cases with an OEM or remote team
* Version control for test scripts (link to the SCM)
* What version of the test script was used for each test case instance?
* What version of the test case was used for each test case instance? (if we added a sequence and tests suddenly started to fail, it doesn't necessarily mean a regression in the software)
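The cardinality items above (many instances per case, many cases per requirement, script and case revisions per instance) are really the heart of the data model, whichever tool ends up holding it. A minimal sketch of what I mean, again with names I made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class TestCaseInstance:
    """One run of one test case, in one configuration, against one build."""
    case_id: str
    version: str              # product version / build under test
    module: str               # area of the product being covered
    configuration: str        # which test bed it ran on
    script_revision: str      # SCM revision of the test script that was used
    case_revision: str        # revision of the test case definition itself
    status: str = "not run"   # "pass", "fail", "blocked", "not run"
    executed_by: str = ""
    log_url: str = ""         # link back to the raw test log

# A requirement is covered by many test cases and a test case may cover many
# requirements, so coverage is simply a many-to-many mapping:
coverage = {
    "REQ-101": ["TC-LOGIN", "TC-CHANGE-PASSWORD"],
    "REQ-102": ["TC-LOGIN"],
}

def pass_percentage(instances, module=None, version=None):
    """Pass rate over a slice of instances, e.g. per module per version."""
    relevant = [i for i in instances
                if (module is None or i.module == module)
                and (version is None or i.version == version)]
    if not relevant:
        return 0.0
    passed = sum(1 for i in relevant if i.status == "pass")
    return 100.0 * passed / len(relevant)
```

Blocked-by links to bugs and dependencies between test cases would hang off the test case itself in the same way.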
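For the automated testing hookup, the essential piece is that the nightly smoke/sanity run reports its own results against a test case instance like the one above, instead of a QA engineer copying them over in the morning. A sketch of that reporting step, assuming the test management tool exposes some HTTP endpoint (the URL, payload shape and token here are entirely hypothetical):

```python
import json
import urllib.request

def report_result(base_url, instance_id, status, log_url, api_token):
    """Push one automated run's outcome into the test management tool."""
    payload = json.dumps({
        "instance": instance_id,
        "status": status,        # "pass" or "fail"
        "log_url": log_url,      # so the test case instance links to its log
    }).encode("utf-8")
    request = urllib.request.Request(
        "%s/api/test-instances/%s/result" % (base_url, instance_id),
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + api_token},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status == 200

# Called at the end of the nightly smoke test wrapper, for example:
# report_result("https://testmgmt.example.com", "TC-LOGIN-build-317",
#               "fail", "https://ci.example.com/logs/317/smoke.txt", token)
```

Whether the receiving end is a dedicated test management tool or a customized issue tracker matters less than the fact that nobody has to copy and paste.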
My understanding of this is based on some resources I've been monitoring (see the full list on ).
Some of the noteworthy ones are:
* StickyMinds.com: Reengineering Test Management
* StickyMinds.com: Bringing Your Test Data to Life
* Rhonabwy writes about his own experience with the open source test case management tools from time to time
* OpenSourceTesting - Test Management Tools is the list of test case management tools everyone refers to.
Based on the information I found I've been looking at TestMaster, TestLink and QATraq, but haven't installed any of them yet. The others really seem either dead or not ready yet.
I'm still trying to understand whether the correct approach is to get a test management tool and try to connect it to your issue tracker, or to get a really customizable issue tracker (e.g. JIRA) and build the parts of a test management tool you need on top of it. I'm still weighing the pros and cons, trying to understand how much of test case management is actually an issue-tracking type of activity and which parts are not. This is quite uncharted ground from what I've found so far, and I understand that part of having a "reasonable" solution is to skip some of the requirements and opt for simplicity.
A good friend who knows what he's doing when it comes to managing QA efforts keeps telling me to avoid the bells and whistles and the complex reports, metrics and processes, and instead to go for simple, worthwhile metrics, the reports and flows necessary to support them, and to focus on the substance. That's a big part of what I consider to be a "reasonable" solution.