Thursday, September 28, 2006

Custom Fields - Regression

Can you tell off the top of your head (or via a simple query in your issue tracker) what the regression ratio in your product is for a specific version, and where the regression areas are?
Chances are, the answer is no. The reason is that out of the box, most issue trackers don't indicate whether an issue is a regression, and don't provide causality links.
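Once the data is tracked, the ratio itself is easy to compute. Here is a minimal sketch, assuming your tracker can export issues with hypothetical `version`, `is_regression` and `component` fields (adapt the names to whatever your tracker actually exports):

```python
# Sketch: compute the regression ratio and regression areas for a version
# from exported issue data. All field names here are hypothetical.
from collections import defaultdict

def regression_ratio(issues, version):
    """Fraction of a version's issues that are flagged as regressions."""
    in_version = [i for i in issues if i["version"] == version]
    if not in_version:
        return 0.0
    regressions = [i for i in in_version if i["is_regression"]]
    return len(regressions) / len(in_version)

def regression_areas(issues, version):
    """Count regressions per component to highlight problem areas."""
    counts = defaultdict(int)
    for issue in issues:
        if issue["version"] == version and issue["is_regression"]:
            counts[issue["component"]] += 1
    return dict(counts)
```

The same two questions (how many, and where) could of course be answered with a saved filter or report inside the tracker itself, once the field exists.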

This means that when you look at a new bug, you must rely on the description/comments if you want to note the regression source. This, of course, limits your ability to manipulate the data.

What can you do? Well, there are two things.
First, you can provide a simple "Regression" custom field which is true when the issue is understood to be a regression, or more accurately, a new issue caused by another change in the system (and not an issue which was there all along and was only detected through extended QA coverage).
This lets you know which issues are regressions, which usually points to issues you really want to deal with before releasing.

What it doesn't do is provide information about the regression source. The only accurate way to track the regression source is to provide links from the regression back to its cause. This can be done via a "Caused by/Causes" link. Hopefully your issue tracker allows custom links (JIRA does...). If you know which specific issue caused it, fill that in. If you don't know, or it's due to a large feature, just add a placeholder issue and link to that, even if it's just a build number (e.g. FeatureA, build 1.2.1).

Let's assume this information is actually filled in correctly most of the time (not a trivial assumption, actually - those experienced in trying to convince all stakeholders to fill in data they don't think is useful to THEM will probably nod in agreement here). Now you can look at the SOURCES of regression and see whether any intelligent conclusions can be drawn. Is it the small stupid stuff that you assumed would be trivial? Is it the hard fixes where you don't do enough code review and integration testing? Are the regressions local, or can an issue in one area cause a chain effect in different modules altogether? Are certain teams producing more or fewer regressions? Are certain modules Pandora's boxes that spawn new bugs/regressions whenever they are touched?
Leverage these insights to decide where you want to improve your development process.
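The questions above boil down to aggregating the "caused by" links in different ways. A small sketch of that aggregation, assuming links were exported as (regression, cause) pairs and that you can map a causing issue to its owning team (both shapes are hypothetical):

```python
# Sketch: summarize regression sources from exported "Caused by" links.
# The data shapes (pairs of issue keys, an owner lookup) are hypothetical;
# map them to whatever your tracker's export actually looks like.
from collections import Counter

def regressions_by_cause(links):
    """links: iterable of (regression_key, cause_key) pairs.
    Returns how many regressions each cause is responsible for."""
    return Counter(cause for _regression, cause in links)

def regressions_by_team(links, owner_of):
    """Attribute each regression to the team that owned the causing change."""
    return Counter(owner_of[cause] for _regression, cause in links)
```

A cause that shows up repeatedly in `regressions_by_cause` is exactly the kind of "Pandora's box" module worth a closer look.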

NOTE: Even if you are afraid the data won't be collected, try to think along these lines via a less formal review of the regressions in your last big version. Hopefully you can draw some conclusions from what you have at the moment.


Custom Fields - Detected In Field

This is the first in a series of short suggestions on things you might want to track in your issue tracker.

One of the important ways to measure the effectiveness of your quality effort is to understand the ratio of issues detected in the field (versus the whole issue count).

To track this, add a custom field that is True/Checked whenever an issue ORIGINATED in the field. Note that you should NOT include issues which were detected internally, waved through by a PM decision, and later detected/experienced by someone in the field. This is a different type of issue which doesn't reflect on the quality of your "detection" effort, but rather on the quality of your decision-making process.
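With such a field in place, the metric is one line of arithmetic. A minimal sketch, where `detected_in_field` stands in for whatever your custom field is actually called:

```python
# Sketch: ratio of issues that originated in the field vs. total issue count.
# "detected_in_field" is the hypothetical custom field discussed above.
def field_detection_ratio(issues):
    """Fraction of all issues that customers found before you did."""
    if not issues:
        return 0.0
    field_issues = sum(1 for i in issues if i["detected_in_field"])
    return field_issues / len(issues)
```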

An alternative to this simple field is to provide a link to a trouble ticket in some CRM system, and to create the link only when the issue originated in the field. Of course, a reverse link from the CRM to the issue tracker is always recommended, both for issues that originated in the field and for those detected internally.


Tuesday, September 12, 2006

Edible versions - Tips for implementation

So you read my Edible versions post and want the good stuff on how to make it happen in your organization. Well, to be honest, it's not that difficult once all the parties sit together, talk about their expectations and design the protocols between the groups. See my earlier post for some general pointers.

Having said that - maybe I CAN provide some tips that I have seen work in the past:

  • Ensure all content in a delivery is tracked as a change request (bug/feature/other) in an issue tracker.
  • Provide an "Impact Level" for each change, so QA can easily focus on the high impact changes first.
  • For complex changes or large builds, get used to holding delivery meetings where DEV and QA discuss the changes and exchange ideas on how to cover the build. Be effective - know when the changes are small and the process can be lighter.
  • Try to establish an environment which automatically generates release notes for a version. At a minimum, as a report based on whatever the issue tracker says. If possible, it should be based on actual deliveries to the SCM system. Use something like the integration between JIRA and QuickBuild/LuntBuild.
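The release-notes tip in the last bullet can start very small. A sketch of the "report on whatever the issue tracker says" variant, assuming issues were exported as records with hypothetical `fix_version`, `type`, `key`, `summary` and `impact` fields (the Impact Level from the second tip):

```python
# Sketch: generate internal release notes from exported issue-tracker data.
# All field names are hypothetical; a real setup would pull this from the
# tracker's query/report facility instead of a manual export.
def release_notes(issues, version):
    lines = [f"Internal release notes for {version}", "=" * 40]
    for kind in ("feature", "bug", "other"):
        in_build = [i for i in issues
                    if i["fix_version"] == version and i["type"] == kind]
        if in_build:
            lines.append(f"\n{kind.capitalize()}s:")
            for issue in in_build:
                lines.append(
                    f"  {issue['key']}: {issue['summary']}"
                    f" (impact: {issue['impact']})")
    return "\n".join(lines)
```

Grouping by type and showing the impact level up front gives QA exactly what the earlier bullets ask for: what changed, and where to focus first.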

Edible versions

All you QA people out there -

  • How often does your QA group "choke" on versions delivered by the development group?
  • Are you used to "inedible" versions which just don't taste right?
  • How about versions which simply come as a blackbox where you have no idea what changed, therefore no idea what to do with the version, what to expect of it?
Now all of you DEV people - think about the times where you installed 3rd party products/updates which caused you the same digestion problems...

Those inedible deliveries cause a variety of problems. Let's start with the fact that whoever gets the delivery wastes a lot of time chewing it up, meanwhile not only delaying coverage of the new delivery but also NOT making progress on previous deliveries (the classic question of when to commit your QA organization to a new build delivered by R&D and risk coverage progress on the earlier but known build. Especially tasty when delivered to your plate a few hours before the weekend).

When the contents are unclear, QA people can only do general coverage, and the time it takes to verify regression concerns and make sure whatever was intended to be fixed was indeed fixed grows longer.

What is the point here? Just as a sane person would refuse to swallow unmarked pills from unmarked bottles, refuse to take a version/build/delivery that is not sufficiently documented. I'm not aware of many good reasons not to mandate internal release notes.

And DEV guys - consider some dogfooding on each delivery, even if it's "just" to the friendly QA people next door. Lots of work, you say? Well then, time to introduce Continuous Integration and Smoke Testing...

Thursday, September 07, 2006

Orcanos Product Life-cycle Management

A friend referred me to Orcanos QPack. This appears to be another candidate in the Product Life-cycle Management segment. The company is based in Israel; so far I've only briefly glanced at their documentation, and it seems interesting. Not many references on Google though...

If anyone has looked into this tool and can compare it to the other tools I mentioned here, please come forward...