Thursday, September 28, 2006
Can you tell off the top of your head (or via a simple query in your issue tracker) what is the regression ratio in your product for a specific version, and where are the regression areas?
Chances are, the answer is no. The reason is that out of the box, most issue trackers don't indicate whether an issue is a regression, and don't provide causality links.
This means that when you look at a new bug, you must rely on the description/comments if you want to note the regression source. This of course limits your ability to manipulate the data.
What can you do? Well, there are two things.
First, you can provide a simple "Regression" custom field which will be true when the issue is understood to be a regression, or more accurately, a new issue caused by another change in the system (and not an issue which was there all along and was merely detected through extended QA coverage).
This lets you know which issues are regressions, which usually points to issues you really want to deal with before releasing.
What it doesn't do is provide info as to the regression source. The only accurate way to track the regression source is to provide links from the regression back to its cause. This can be done via a "Caused by/Caused" link. Hopefully your issue tracker allows custom links (JIRA does...). If you know which specific issue caused it, fill that in. If you don't know, or it's due to a large feature, just add a placeholder issue and link to that, even if it's just a build number (e.g. FeatureA, build1.2.1).
Let's assume this information is actually filled in correctly most of the time (not a trivial assumption, actually; those experienced in trying to convince all stakeholders to fill in data they don't really think is useful to THEM will probably nod in agreement here). Now you can look at the SOURCES of regression and try to see if there is any intelligent conclusion to be drawn. Is it the small stupid stuff that you assumed would be trivial? Is it the hard fixes where you don't do enough code review and integration testing? Are the regressions local, or can an issue in one area cause a chain effect in different modules altogether? Are certain teams introducing fewer/more regressions? Are certain modules Pandora's boxes that spawn new bugs/regressions whenever they are touched?
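Once the "Regression" field and "Caused by" links are filled in, the ratio and source breakdown fall out of a simple script. Here is a minimal sketch over exported issue data; the field names ("is_regression", "caused_by") are assumptions for illustration, not any specific tracker's schema, so map them to whatever your tool exports.

```python
from collections import Counter

def regression_stats(issues):
    """issues: list of dicts with 'is_regression' and an optional 'caused_by'."""
    regressions = [i for i in issues if i.get("is_regression")]
    ratio = len(regressions) / len(issues) if issues else 0.0
    # Tally regression sources; issues lacking a causality link fall
    # into an explicit "unknown" bucket so the gap stays visible.
    sources = Counter(i.get("caused_by", "unknown") for i in regressions)
    return ratio, sources.most_common()

issues = [
    {"key": "BUG-1", "is_regression": True,  "caused_by": "FeatureA"},
    {"key": "BUG-2", "is_regression": True,  "caused_by": "FeatureA"},
    {"key": "BUG-3", "is_regression": False},
    {"key": "BUG-4", "is_regression": True},  # cause never filled in
]
ratio, sources = regression_stats(issues)
print(f"regression ratio: {ratio:.0%}")  # 75%
print(sources)                           # [('FeatureA', 2), ('unknown', 1)]
```

A large "unknown" bucket is itself a finding: it means the causality links aren't being filled in, which is the first thing to fix.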
These insights should be leveraged to decide where you want to improve your development process.
NOTE: Even if you are afraid the data won't be collected, try to think along these lines via a less formal review of the regressions in your last big version. Hopefully you can draw some conclusions with what you have at the moment.
Posted by Yuval at 2:35 PM
This is the first in a series of short suggestions on things you might want to track in your issue tracker.
One of the important ways to measure effectiveness of your quality effort is to understand the ratio of issues detected in the field (versus the whole issue count).
To track this, add a custom field that will be True/Checked whenever an issue ORIGINATED in the field. Note you should NOT include issues which were detected internally, waved through by a PM decision, and later detected/experienced by someone in the field. That is a different type of issue, which doesn't reflect on the quality of your "detection" effort, but rather on the quality of your decision-making process.
An alternative to this simple field is to provide a link to a trouble ticket in some CRM system, and to decide only to do the link when the issue originated in the field. Of course a reverse link from the CRM to the issue is always recommended both for those that originated in the field and those that were detected internally.
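With the custom field in place, the field-detection ratio reduces to a one-pass count. A minimal sketch, assuming each exported issue carries an "origin" value (a hypothetical field name for this illustration):

```python
# Share of issues that ORIGINATED in the field, out of all issues.
def field_escape_ratio(issues):
    field = sum(1 for i in issues if i.get("origin") == "field")
    return field / len(issues) if issues else 0.0

issues = [
    {"key": "BUG-10", "origin": "field"},
    {"key": "BUG-11", "origin": "internal"},
    {"key": "BUG-12", "origin": "internal"},
    # Detected internally, waived by PM, later hit by a customer:
    # still counted as internal, since detection worked and the miss
    # was a decision-making issue, not a coverage issue.
    {"key": "BUG-13", "origin": "internal"},
]
print(f"{field_escape_ratio(issues):.0%}")  # 25%
```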
Posted by Yuval at 2:18 PM
Tuesday, September 12, 2006
So you read my Edible versions post and want to get the good stuff on how to make it happen in your organization. Well, to be honest, it's not that difficult once all the parties sit together, talk about their expectations and design the protocols between the groups. See my earlier post for some general pointers.
Having said that - Maybe I CAN provide some tips that I saw working in the past:
- Ensure all content in a delivery is tracked as a change request (bug/feature/other) in an issue tracker.
- Provide an "Impact Level" for each change, so QA can easily focus on the high impact changes first.
- For complex changes or large builds, get used to holding delivery meetings where DEV and QA discuss the changes and exchange ideas on how to proceed with covering the build. Be effective: know when the changes are small and the process can be lighter.
- Try to establish an environment which automatically generates release notes for a version. At a minimum, as a report of whatever the issue tracker says. If possible, it should be based on actual deliveries to the SCM system. Use something like the integration between JIRA and QuickBuild/LuntBuild.
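Even before wiring up tracker/SCM integration, the last tip can be approximated with a small script over tracker data. A sketch, assuming each change carries an "Impact Level" field (the data shape here is invented for illustration, not any tool's real API):

```python
from itertools import groupby

def release_notes(changes, version):
    """Render internal release notes grouped by impact level,
    high-impact changes first so QA knows where to start."""
    lines = [f"Internal release notes: build {version}"]
    order = {"High": 0, "Medium": 1, "Low": 2}
    # groupby needs its input sorted by the same key it groups on.
    changes = sorted(changes, key=lambda c: order.get(c["impact"], 9))
    for impact, group in groupby(changes, key=lambda c: c["impact"]):
        lines.append(f"\n{impact} impact:")
        lines += [f"  {c['key']}: {c['summary']}" for c in group]
    return "\n".join(lines)

changes = [
    {"key": "BUG-42", "impact": "Low",  "summary": "Fix tooltip typo"},
    {"key": "FEAT-7", "impact": "High", "summary": "New auth module"},
]
print(release_notes(changes, "1.2.1"))
```

The useful part is not the formatting but the habit: if every delivery's contents live in the tracker, the notes generate themselves.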
All you QA people out there -
- How often does your QA group "choke" on versions delivered by the development group?
- Are you used to "inedible" versions which just don't taste right?
- How about versions which simply come as a blackbox where you have no idea what changed, therefore no idea what to do with the version, what to expect of it?
Those inedible deliveries cause a variety of problems. Let's start with the fact that whoever gets the delivery wastes a lot of time chewing it up, in the meantime not only delaying coverage of the new delivery but also NOT making progress on previous deliveries (the classic question of when to commit your QA organization to a new build delivered by R&D and risk coverage progress on the earlier but known build. Especially tasty when delivered to your plate a few hours before the weekend).
When the contents are unclear, QA people can only do general coverage, and the time it takes to verify regression concerns and make sure whatever we intended to fix was indeed fixed grows longer.
What is the point here? Just as a sane person would refuse to swallow unmarked pills from unmarked bottles, refuse to take a version/build/delivery that is not sufficiently documented. I'm not aware of many good reasons not to mandate internal release notes.
And DEV guys: consider some dogfooding on each delivery, even if it's "just" to the friendly QA people next door. Lots of work, you say? Well then, time to introduce Continuous Integration and Smoke Testing...
Thursday, September 07, 2006
A friend referred me to Orcanos QPack. This appears to be another candidate in the product lifecycle management segment. The company is based in Israel, and so far I've only briefly glanced at their documentation, but it seems interesting. Not many references on Google, though...
If anyone has looked into this tool and can compare it to the other tools I mentioned here, please come forward...
Posted by Yuval at 4:10 PM
Thursday, August 31, 2006
Rosie Sherry writes a very interesting blog focused on software testing.
One of these days I'll point to some of the interesting blogs I'm reading regularly.
In any case, one of her recent posts was "Hunt for Test Case Management System", where she discusses the lack of a killer test management solution, but tries to outline some alternatives.
Those interested should go over there and take a look...
Tuesday, August 29, 2006
How do you know your QA effort is being effective?
Based on what the different stakeholders require from QA, a typical answer might be that product quality should be high when the product is released to customers.
Assuming that is indeed more or less what someone expects (I'd say effective QA needs to meet some other requirements as well), how does one go about checking whether the product quality is indeed high?
Those who have reached a fairly intermediate level of QA understanding would easily point out that the percentage of "QA misses" (namely, issues missed by QA and detected in the field) should be below a certain threshold. A high number here simply means that too many issues/bugs were not detected during the entire QA coverage, only to be embarrassingly detected by a customer.
If one naively optimizes just for this variable, the obvious result is a prolonged QA effort, aiming to cover everything and minimize the risk. If no reasonable threshold is set, there is a danger of procrastinating and avoiding the release.
See The Mismeasure of Man for a cool article on abusing measurements in the software world...
Of course, a slightly more "advanced" optimization is to open many, many bugs/issues so the miss ratio becomes smaller due to the larger number of bugs found in QA, not due to missing fewer bugs. This can result in a lot of overhead for the QA/PM/DEV departments as they work on analyzing, prioritizing and processing all those bugs.
Did I forget to factor in the work to "resolve/close" those issues? NO! Several of those issues might indeed be resolved and verified/closed, but those are probably issues that were not part of the optimization but part of a good QA process (assuming your PM process manages the product contents effectively and knows how to enforce a code-freeze...).
My point is that there are a lot of issues that are simply left there to rot as open issues, as their business priority is not high enough to warrant time for fixing them or risking the implications of introducing them to the version.
A good friend pointed this phenomenon out to me a couple of years ago, naming it "The Defect Junk Factory" (translated from Hebrew). He meant that bugs which are not fixed for the version in which they were opened indicate that the QA effort was not focused on the business priorities. The dangers of this factory are the time wasted processing these bugs, and the direct implication that either the QA effort took longer because it spent time on them, or that it missed higher business priority bugs while focusing on these easy ones.
Kind of like the argument about speed cameras being placed "under the streetlight" to easily catch speed offenders (with doubtful effect on overall safety), all the while missing the really dangerous offenders.
So what can be done? My friend suggested measuring the rate of defects that are NOT fixed for the version in which they were found. The higher this number, the more your QA effort is focusing on the wrong issues.
Just remember that this is a statistical measure. Examining a specific defect might show that it was a good idea for QA to focus there, and the fix was avoided for other reasons. But when looking across a wide sample, it's unreasonable that a high number of defects are simply not relevant. If it's not a QA focus issue, something else is stinking, and it is worth looking at in any case.
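The junk-factory rate is easy to compute once defects record the version they were found in and (if fixed) the version they were fixed for. A minimal sketch; the "found_in"/"fixed_in" field names are assumptions for illustration:

```python
# Share of defects opened against a version that were NOT fixed for
# that version. A rising trend across versions suggests QA is spending
# effort on issues below the business-priority bar.
def junk_factory_rate(defects, version):
    opened = [d for d in defects if d["found_in"] == version]
    unfixed = [d for d in opened if d.get("fixed_in") != version]
    return len(unfixed) / len(opened) if opened else 0.0

defects = [
    {"key": "BUG-1", "found_in": "v1", "fixed_in": "v1"},
    {"key": "BUG-2", "found_in": "v1"},               # left open to rot
    {"key": "BUG-3", "found_in": "v1", "fixed_in": "v2"},  # deferred
    {"key": "BUG-4", "found_in": "v2", "fixed_in": "v2"},
]
print(f"{junk_factory_rate(defects, 'v1'):.0%}")  # 67%
```

As the post says, judge the trend across versions, not any individual defect.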
Another factor of effective QA is fast coverage. What is fast? I don't have a ratio of QA time to development time. It's probably a factor of the types of changes (infrastructure, new features, integration work) in the new version, as each type has a different ratio of QA to DEV effort (e.g. a kernel upgrade usually requires much more QA compared to DEV effort).
Maybe one of the readers has a number he's comfortable with - I'd love to hear.
What I do know is that version-to-version the coverage time should become shorter, and that the QA group should always aim to shorten this time further without significantly sacrificing overall quality. I expect QA groups to do risk-based coverage, automation for regression testing, and whatever else assists them in reducing the repeatable cost of QA coverage at the end of each version. The price/performance return on reducing the QA cycle is usually worth it to some extent.
To sum up, a good QA effort should:
- Minimize QA misses
- Minimize the defect junk factory
- Minimize QA cycle time without compromising quality
There are a couple of alternatives for managing severity and priority in the Issue Tracker.
Although there are many resources out there on this subject (see http://del.icio.us/yyeret/priority_severity), I'll try to consolidate them and provide my 2c on the matter, as I think it's an important subject.
The first, seemingly simpler, alternative is a single priority field representing customer impact.
The idea here is to have only a single priority/severity field. The reporter assigns it according to his understanding of the customer impact (severity, likelihood, scenario relevance, etc.). Product Management or any other business stakeholder can shift the priority according to the current release state and his understanding of the customer impact from the described scenario. The developers prioritize work accordingly.
The strength of this approach is in its simplicity, and the fact that several issue trackers adopt this methodology and therefore support it better “out of the box”.
The weakness is that once the issue is in the workflow, the original reasoning for the priority can get lost, and there is no distinction between the customer impact and other considerations such as version stability, R&D preferences, etc.
An example of why this is bad? Let's say Keith opened bug #1031 with a Major priority. Julie the PM later decided that since there is a workaround and it affects only specific uses of a rarely used feature, the business priority is only Normal or Minor. Version1 is released with this bug unresolved. When planning Version2, Julie misses this bug since its priority is lower, even though the feature it relates to is now the main focus of the version. Even if not missed, finding this bug and understanding its roots and history is very hard, especially considering the database structure of issue trackers. History is available, but it's not as accessible as fields on the main issues table…
Brian Beaver provides a clear description of this approach at Categorizing Defects by Eliminating "Severity" and "Priority":
I recommend eliminating the Severity and Priority fields and replacing them with a single field that can encapsulate both types of information: call it the Customer Impact field. Every piece of software developed for sale by any company will have some sort of customer. Issues found when testing the software should be categorized based on the impact to the customer or the customer's view of the producer of the software. In fact, the testing team is a customer of the software as well. Having a Customer Impact field allows the testing team to combine documentation of outside-customer impact and testing-team impact. There would no longer be the need for Severity and Priority fields at all. The perceived impact and urgency given by both of those fields would be encapsulated in the Customer Impact field.
Johanna Rothman in Clarify Your Ranking for System Problem Reports talks about single-field risk/priority:
Instead of priority and severity, I use risk as a way to deal with problem reports, and how to know how to fix them. Here are the levels I choose:
o Critical: We have to fix this before we release. We will lose substantial customers or money if we don't.
o Important: We'd like to fix this before we release. It might perturb some customers, but we don't think they'll throw out the product or move to our competitors. If we don't fix it before we release, we either have to do something to that module or fix it in the next release.
o Minor: We'd like to fix this before we throw out this product.
The second alternative keeps two fields: a severity field for the technical risk of the issue, and a priority field for the business impact. The reporter assigns severity according to the technical description of the issue, and also provides all other relevant information: frequency, reproducibility, likelihood, and whether it's an important use-case/test-case or not. Optionally, the reporter can suggest a priority based on the business impact of the issue on testing progress, e.g. if it blocks significant coverage, suggest a high priority; if it's an isolated use case, suggest a lower one. A business stakeholder, be it PM, R&D management, etc., assigns priority based on all technical and business factors, including the version/release plan. Developers work by priority. Severity can be used as a secondary index/sort only.
Developers/testers/everyone working on issues should avoid working on high-severity issues with unset or low priority. This is core to the effectiveness of the Triage mechanism and the Issue LifeCycle process.
Customers see descriptions in release notes, without priority or severity. The roadmap communicated to customers reflects the priority, though not in so many words.
Strengths of this approach are:
* Clear documentation of the business and technical risks, especially in face of changing priorities.
* Better reporting on product health when technical risk is available and not hidden by business impact glasses.
* Less drive for reporters to push for high priority to signify they found a critical issue. It’s legitimate to find a critical issue and still understand that due to business reasons it won't be high priority.
* Better accommodation of issues that transcend releases - where the priority might change significantly once in a new release.
The weaknesses are that it's a bit more complex, especially for newbies, and might require some customization of your issue tracker; although if your tool cannot do this quite easily, maybe you have the wrong tool…
In addition, customers have trouble understanding the difference between their priority for the issue and the priority assigned within the product organization. The root cause here is probably the lack of transparency regarding the reasoning behind the business priority. I'd guess that if a significant part of the picture is shared, most customers would understand (if not agree with) the priorities assigned to their issues. It's up to each organization to decide where it stands on the transparency issue (see Tenets of Transparency for a very interesting discussion on the matter in the wonderful weblog of Eric Sink).
To see how our example works here: Keith will open the bug, assign a Major severity, and a low priority, since the bug blocks just one low-priority test case. Julie the PM sees the bug and confirms the low priority, so the bug is left for future versions for all practical purposes. When planning V2, Julie goes over high-severity issues related to the features under focus for the version, and of course finds this issue, as it has Major severity.
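The two-field workflow in the example above can be sketched as two small queries: a work queue sorted by priority with severity only as a secondary index, and a planning query that pulls high-severity issues regardless of their old priority. The level names and orderings below are illustrative assumptions, not a prescribed scheme.

```python
# Lower rank sorts first. "None" covers issues with unset priority.
PRIORITY = {"High": 0, "Normal": 1, "Low": 2, None: 3}
SEVERITY = {"Critical": 0, "Major": 1, "Minor": 2}

def work_queue(issues):
    """Developers work by priority; severity is only a tiebreaker."""
    return sorted(issues, key=lambda i: (PRIORITY[i.get("priority")],
                                         SEVERITY[i["severity"]]))

def planning_candidates(issues, min_severity="Major"):
    """Version planning: revisit high-severity issues even if their
    priority was set low in an earlier release."""
    cutoff = SEVERITY[min_severity]
    return [i for i in issues if SEVERITY[i["severity"]] <= cutoff]

issues = [
    {"key": "BUG-1031", "severity": "Major", "priority": "Low"},
    {"key": "BUG-1044", "severity": "Minor", "priority": "High"},
]
print([i["key"] for i in work_queue(issues)])          # ['BUG-1044', 'BUG-1031']
print([i["key"] for i in planning_candidates(issues)])  # ['BUG-1031']
```

Note how BUG-1031 (Keith's bug) sorts last in the daily queue but is the one surfaced for V2 planning, which is exactly what the single-field approach loses.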
See the following resources for this approach:
* Priority is Business; Severity is Technical:
business priority: "How important is it to the business that we fix the bug?" technical severity: "How nasty is the bug from a technical perspective?" These two questions sometimes arrive at the same answer: a high severity bug is often also high priority, but not always. Allow me to suggest some definitions.
Severity levels:
o Critical: the software will not run
o High: unexpected fatal errors (includes crashes and data corruption)
o Medium: a feature is malfunctioning
o Low: a cosmetic issue
Priority levels:
o Now: drop everything and take care of it as soon as you see this (usually for blocking bugs)
o P1: fix before next build to test
o P2: fix before final release
o P3: we probably won't get to these, but we want to track them anyway
* Corey Snow commented on Clarify Your Ranking for System Problem Reports:
Comment: Great subject. This is a perennial topic of debate in the profession. The question at hand is: can a defect attribute that is ultimately irrelevant still serve an important function? Having implemented and/or managed perhaps a dozen different defect tracking systems over the years, I actually prefer having both Priority and Severity fields available for some (perhaps) unexpected reasons.
Priority should be used as the 'risk scale' that the author describes. 3 levels, 5 levels, or whatever. Priority is used as a measure of risk. How important is it to fix this problem? Label the field 'Risk' if that makes it more clear. Not so complicated, right?
So what good is Severity? Psychology! Its very existence makes the submitter pause to consider and differentiate between the Priority and Severity of the defect. In other words, without Severity, the submitter might be inclined to allow severity attributes to influence the relative Priority value.
Example 1: Defect causes total system meltdown. Only users in Time Zone GMT +5.45 (Kathmandu) are affected on leap years. There is one user in that time zone, but there is a manual workaround, and a year to fix it besides. Priority=Super Low, Severity=Ultra High. Severity gives a place for the tester to 'vent' about their spectacular meltdown, without influencing the relative Priority rating.
Example 2: Defect is a minor typo. The typo is on the 'Welcome to Our Product' screen, which is the first thing every user will see. Priority=Ultra High, Severity=Super Low. Again, Severity gives a place for the tester to express how unimportant the defect is from a functional perspective, without clouding their Priority assessment.
I once managed a defect tracking system with only a Priority field. This frequently led to a great deal of wasted time in defect discussion meetings, as one side would argue about severity attributes while another would argue about priority attributes, but the parties were not even aware of the distinction that was actually dividing them. Having both fields serves to head off this communication problem, even if Severity is completely irrelevant when fix/no-fix decisions are actually made. ~ Corey Snow (03/11/03)
Author's Response: Corey, Great counterpoint to my argument. ~ Johanna Rothman (03/12/03)
As can probably be understood by now, my personal favorite is the Severity+Priority approach. I confess I don't have much experience with the single-priority approach, but I really feel the Severity+Priority way is very effective, with no significant costs once every stakeholder understands it.
What is your favorite here?
Here is a first round of my favourite resources. As those who read my posts have probably noticed already, I'm quite a heavy user of del.icio.us. I won't go into what it is; I'm sure those interested can go there or google it to see whether they like it or not.
I'm playing around with Google Notebook as an alternative, with better google integration obviously, albeit less taxonomy/tagging capabilities.
In any case, I highlighted some of my favourite resources under the rndblog_resources tag, and provided some notes to accompany the links and explain why I find them essential for the favorites list of anyone interested in the contents of this blog (and probably for some people who are NOT that interested in this blog, but then again, they won't find me here...)
There are more gems in my account, so probably expect future rounds based on existing and new resources I find. I'd love to hear about more resources along those lines - either comment or suggest them to me via the del.icio.us network.
Anyone interested to track my favorites is welcome to join my network
Now for the resources themselves (copied from a del.icio.us linkroll page):
- » Articles - James Bach - Satisfice, Inc.
Articles and other resources from one of the thought leaders in the software testing/QA world.
- » Bret Pettichord's Software Testing Hotlist
A compilation of many interesting testing resources
- » Brian Marick - Writings
Articles and other resources from one of the thought leaders in the software testing/QA world.
- » Can we ship yet
Very interesting view on the multiple streams/versions problem, counting on SCM capabilities to ease the problem.
- » Cem Kaner - Publications
Articles and other resources from one of the thought leaders in the software testing/QA world.
- » CM Crossroads - Home
The ultimate independent resource for CM material
- » Cultural Fits and Starts
From the hiring expert, a focus on the cultural aspects (too often ignored)
- » Data Driven Test Automation Frameworks
Thought-inducing paper on what testing harnesses should look like. Very relevant when thinking about automation.
- » DefectTrackingPatterns
Initial work on defect tracking patterns
- » Eric Sink's Weblog
A remarkable blog mixing development and business/marketing, especially applicable to all of us in small companies - both ISVs and startups.
- » Grove Consultants Software Testing
Great material, the highlight being an interview test for testers I've been using successfully.
- » Hacknot
An anonymous writer provides a cynical, humorous view on issues which will sound right at home for any manager of developers/testers.
- » Hacknot - Interview With The Sociopath
As usual, a cynical but beneficial view, this time a spin on interviewing.
- » High-level Best Practices in SCM
Interesting practices/patterns provided by Perforce but with quite a generic appeal
- » Interviews
Highlights of Interviews with leading people in the testing world. Use as pointers - if someone is interesting go read their works...
- » Joel on Software
- » Quality Tree Software, Inc. - Publications
A single location for the works of one of my favorite content-providers on stickyminds.com
- » Rothman Consulting Group, Inc. - Writings & Presentations
Articles and other resources from one of the thought leaders in the Product Management / software development / testing / Agile world.
- » Slashdot | Bug Tracking Across Multiple Code Streams?
Interesting view on how to handle the multiple code streams issue tracking problem. Not groundbreaking, but a good summary on what approaches are being used out there
- » Software Development, Computer Programming, Software Design - developer.* - DeveloperDotStar.com
A repository for lots of development-related essays/articles, focusing on a pragmatic view of the complex world we work in.
- » Software Reconstruction: Patterns for Reproducing Software Builds
Very interesting pragmatic work on the relatively uncharted domain of software builds.
- » Software Testing and Quality Assurance Online Forums
Used mainly for finding tool-related answers; quite legacy-focused in my opinion.
- » Source Control HOWTO
Basic concepts of SCM from the developer of SourceGear Vault (a VSS replacement system).
- » StickyMinds Home Page
Stickyminds is a vast repository of essays from various thought-leaders in the development/QA world. It’s the web presence of “Better Software Magazine” which is an interesting magazine (but probably not worth the subscription these days)
- » Streamed Lines: Branching Patterns for Parallel Software Development
The groundbreaking pattern work for the SCM world by Brad Appleton.
- » The Guerrilla Guide to Interviewing
a classic from Joel on interviewing
Thursday, August 24, 2006
David V. Lorenzo posts favorite interviewing questions of people on his Career Intensity Blog
Here is his post about mine...
At the risk of tipping off the people I interview in the future, also check out my interviewing tag on del.icio.us for a lot of resources on the matter.
Why am I open about this?
One of my main beliefs in interviewing, btw, is to try to understand behavioural aspects in addition to skills. Someone might get a head start on the skills questions if he prepares, but I think in that area, if someone is diligent enough to research his interviewer, go and read multiple resources, and learn enough to know the subject, he's getting extra credit right out of the gate...
For the behavioural aspects the discussion flows more freely, and no preparation can really help you there.
I cannot finish a post about interviewing without mentioning Johanna Rothman. She writes the Hiring Technical People blog, and wrote the book Hiring the Best Knowledge Workers, Techies & Nerds: The Secrets & Science of Hiring Technical People. Check it out.