tag:blogger.com,1999:blog-154672942024-03-09T01:33:21.497+02:00Stories from the RND Management trenchesYuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.comBlogger26125tag:blogger.com,1999:blog-15467294.post-90037436759353484802016-02-01T16:45:00.002+02:002016-02-01T16:45:48.125+02:00Find me at YuvalYeret.com<div dir="rtl" style="text-align: right;" trbidi="on">
<div style="text-align: left;">
Hi,</div>
<div style="text-align: left;">
While I'm still spending time helping people who are in the R&D management trenches, this blog is now defunct. I've been blogging at <a href="http://yuvalyeret.com/">YuvalYeret.com</a> for a while now. </div>
<div style="text-align: left;">
Looking forward to seeing you there!</div>
<div style="text-align: left;">
<br /></div>
</div>
Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-53279879856100996492006-09-28T14:35:00.001+03:002006-09-28T14:35:41.311+03:00Custom Fields - RegressionCan you tell off the top of your head (or via a simple query in your issue tracker) what the regression ratio in your product is for a specific version, and where the regression areas are?<br>Chances are, the answer is no. The reason is that out of the box, most issue trackers don't indicate whether an issue is a regression, and don't provide causality links. <br><br>This means that when you look at a new bug, you must rely on the description/comments if you want to note the regression source. This of course limits your ability to manipulate the data.<br><br>What can you do? Well, there are two things. <br>First, you can provide a simple "<span style="font-weight: bold;">Regression</span>" custom field which will be true when the issue is understood to be a regression, or more accurately, a new issue caused by another change in the system (and not an issue which was there all along, just detected through extended QA coverage). <br>This lets you know which issues are regressions, which usually points to issues you really want to deal with before releasing. <br><br>What it doesn't do is provide info as to the <span style="font-weight: bold;">regression source</span>. The only accurate way to track the regression source is to provide links from the regression back to its cause. This can be done via a "Caused by/Caused" link. Hopefully your issue tracker allows custom links (JIRA does...). In case you know which specific issue caused it, fill that in. If you don't know, or it's due to a large feature, just add a placeholder issue and link to that, even if it's just a build number (e.g. 
FeatureA, build1.2.1).<br><br>Let's assume this information is actually filled in correctly most of the time (not a trivial assumption, actually - those experienced in trying to convince all stakeholders to fill in data they don't really think is useful to THEM will probably nod in agreement here). Now you can look at the SOURCES of regression and try to see if any intelligent conclusions can be drawn. Is it the small, stupid stuff that you feel will be trivial? Is it the hard fixes where you don't do enough code review and integration testing? Are the regressions local, or can an issue in one area cause a chain effect in different modules altogether? Are certain teams introducing fewer/more regressions? Are certain modules Pandora's boxes for new bugs/regressions whenever they are touched? <br>These insights should be leveraged when deciding where you want to improve your development process. <br><br>NOTE: Even if you are afraid the data won't be collected, try to think along these lines via a less formal review of the regressions in your last big version. Hopefully you can draw some conclusions with what you have at the moment. <br><br><br> Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com1tag:blogger.com,1999:blog-15467294.post-84506283880454292962006-09-28T14:18:00.001+03:002006-09-28T14:18:35.753+03:00Custom Fields - Detected In FieldThis is the first in a series of short suggestions on things you might want to track in your issue tracker.<br><br>One of the important ways to measure the effectiveness of your quality effort is to understand the ratio of issues detected in the field (versus the whole issue count). <br><br>To track this, add a custom field that will be True/Checked whenever an issue ORIGINATED in the field. Note you should NOT include issues which were detected internally, waved through by PM decision, and later detected/experienced by someone in the field. 
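Once such a flag exists, the field-detection ratio falls out of a simple query. Here is a minimal sketch in Python, assuming issues are exported from the tracker as plain records; the `affects_version` and `detected_in_field` field names are illustrative placeholders, not any tracker's real schema:

```python
# Sketch: per-version share of issues that originated in the field.
# Assumes each issue is a plain record exported from the tracker;
# "affects_version" and "detected_in_field" are illustrative field names.
from collections import defaultdict

def field_detection_ratio(issues):
    """Return {version: field-originated issues / all issues}."""
    totals = defaultdict(int)
    from_field = defaultdict(int)
    for issue in issues:
        version = issue["affects_version"]
        totals[version] += 1
        if issue.get("detected_in_field", False):
            from_field[version] += 1
    return {v: from_field[v] / totals[v] for v in totals}

issues = [
    {"affects_version": "1.0", "detected_in_field": True},
    {"affects_version": "1.0", "detected_in_field": False},
    {"affects_version": "1.0", "detected_in_field": False},
    {"affects_version": "1.1", "detected_in_field": True},
]
print(field_detection_ratio(issues))  # one of three 1.0 issues came from the field
```

The same grouping works for any tracker that can export issues with custom fields to CSV or via an API.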
Such issues are a different type, reflecting not on the quality of your "detection" effort but on the quality of your decision-making process. <br><br>An alternative to this simple field is to provide a link to a trouble ticket in some CRM system, and to create the link only when the issue originated in the field. Of course, a reverse link from the CRM to the issue is always recommended, both for issues that originated in the field and for those that were detected internally. <br><br><br> Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-50672297508150924452006-09-12T13:31:00.000+03:002006-09-12T13:43:18.851+03:00Edible versions - Tips for implementationSo you read my <a href="http://rndmgmttrenches.blogspot.com/2006/09/edible-versions.html">Edible versions</a> post and want to get the good stuff on how to make it happen in your organization. Well, to be honest, it's not that difficult once all the parties sit together, talk about their expectations and design the protocols between the groups. See my earlier <a href="http://rndmgmttrenches.blogspot.com/2006/08/qadev-protocols-calling-developers-to.html">post</a> for some general pointers.<br /><br />Having said that - maybe I CAN provide some tips that I've seen work in the past:<br /><ul><li>Ensure all content in a delivery is tracked as a change request (bug/feature/other) in an issue tracker.<br /></li><li>Provide an "Impact Level" for each change, so QA can easily focus on the high-impact changes first.</li><li>For complex changes or large builds, hold delivery meetings where DEV and QA discuss the changes and exchange ideas on how to proceed with covering the build. Be effective - know when the changes are small and the process can be lighter.</li><li>Try to establish an environment which automatically generates release notes for a version. 
At a minimum, base it on a <a href="http://www.atlassian.com/software/jira/tour/step5.jsp">report</a> of whatever the issue tracker says. If possible, it should be based on actual deliveries to the SCM system. Use something like the <a href="http://confluence.atlassian.com/display/JIRAEXT/JIRA+QuickBuild+Plugin">integration</a> between <a href="http://www.atlassian.com/jira">JIRA</a> and <a href="http://www.pmease.com/">QuickBuild</a>/<a href="http://luntbuild.javaforge.com/">LuntBuild</a><br /></li></ul>Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-62457544537424963862006-09-12T13:16:00.000+03:002006-09-12T13:29:51.287+03:00Edible versionsAll you QA people out there -<br /><ul><li>How often does your QA group "choke" on versions delivered by the development group?</li><li>Are you used to "inedible" versions which just don't taste right?</li><li>How about versions which simply come as a black box, where you have no idea what changed, therefore no idea what to do with the version or what to expect of it?</li></ul>Now all of you DEV people - think about the times you installed third-party products/updates which caused you the same digestion problems...<br /><br />Those inedible deliveries cause a variety of problems. Let's start with the fact that whoever gets the delivery wastes a lot of time chewing it up, in the meantime not only delaying coverage of the new delivery but also NOT making progress on previous deliveries (the classic question of when to commit your QA organization to a new build delivered by R&D and risk coverage progress on the earlier but known build. Especially tasty when delivered to your plate a few hours before the weekend.)<br /><br />When the contents are unclear, QA people can only do general coverage, and the time it takes to verify regression concerns and make sure whatever we intended to fix was indeed fixed grows longer.<br /><br />What is the point here? 
Just as a sane person would refuse to swallow unmarked pills coming from unmarked bottles, refuse to accept a version/build/delivery that is not sufficiently documented. I'm not aware of many good reasons not to mandate internal release notes.<br /><br />And DEV guys - consider some dogfooding on each delivery, even if it's "just" to the friendly QA people next door. Lots of work, you say? Well then, time to introduce Continuous Integration and Smoke Testing...Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com9tag:blogger.com,1999:blog-15467294.post-86496763230015166022006-09-07T16:10:00.000+03:002006-09-07T16:14:08.945+03:00Orcanos Product Life-cycle ManagementA friend referred me to <a href="http://www.orcanos.com/">Orcanos QPack</a>. This appears to be another candidate in the Product Life-cycle Management segment. The company is based in Israel; so far I've only briefly glanced at their documentation, and it seems interesting. Not many references on Google, though...<br /><br />If anyone has looked into this tool and can compare it to the other tools I mentioned here, please come forward...Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com9tag:blogger.com,1999:blog-15467294.post-9910789052718952502006-08-31T10:04:00.000+03:002006-08-31T10:08:02.725+03:00Rosie has the test management blues as well...Rosie Sherry writes a very interesting <a href="http://rosiesherry.blogspot.com/">blog</a> focused on software testing.<br />One of these days I'll point to some of the interesting blogs I'm reading regularly.<br /><br />In any case, one of her recent posts was "<a href="http://rosiesherry.blogspot.com/2006/08/hunt-for-test-case-management-system.html">Hunt for Test Case Management System</a>", where she discusses the lack of a killer test management solution, but tries to outline some alternatives.<br /><br />Those interested should go over there and take a 
look...Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-44990182381646457462006-08-29T12:46:00.000+03:002006-08-29T13:20:49.903+03:00QA Effort EffectivenessHow do you know your QA effort is effective?<br /><br />Based on the different stakeholders who require input from QA, a typical answer might be that product quality is high when released to customers.<br /><br />Assuming that is indeed more or less what someone expects (I'd say effective QA needs to meet some other requirements as well), how does one go about checking whether the product quality is indeed high?<br /><br />Those who have reached a fairly intermediate level of QA understanding would easily point out that the percentage of "QA misses" (namely, the number of issues missed in QA and detected in the field) should be below a certain threshold. 
A high number here simply means that too many issues/bugs are not detected during the entire QA coverage, only to be embarrassingly detected by a customer.<br /><br />If one naively optimizes just for this variable, the obvious result is a prolonged QA effort, aiming to cover everything and minimize the risk. If no reasonable threshold is set, there is a danger of procrastinating and avoiding the release.<br />See <a href="http://www.hacknot.info/hacknot/action/showEntry?eid=88">The Mismeasure of Man</a> for a cool article on abusing measurements in the software world...<br /><br />Of course, a slightly more "advanced" optimization is to open many, many bugs/issues so the miss ratio becomes smaller due to the larger number of bugs found in QA, not due to missing fewer bugs. This can result in a lot of overhead for the QA/PM/DEV departments as they work on analyzing, prioritizing and processing all those bugs.<br />Did I forget to factor in the work to "resolve/close" those issues? NO! 
Several of those issues might indeed be resolved and verified/closed, but those are probably issues that were not part of the optimization but part of a good QA process (assuming your PM process manages the product contents effectively and knows how to enforce a code-freeze...).<br /><br />My point is that there are a lot of issues that are simply left there to rot as open issues, as their business priority is not high enough to warrant time for fixing them or risking the implications of introducing them to the version.<br /><br />A good friend pointed this phenomenon out to me a couple of years ago, naming it "The Defect Junk Factory" (translated from Hebrew). He meant that bugs which are not fixed for the version in which they were opened indicate that the QA effort was not focusing on the business priorities. The dangers of this factory are the time wasted processing them, and the direct implication that either the QA effort took longer because it spent time on these bugs, or that it missed higher-business-priority bugs while focusing on these easy ones.<br />Kind of like the argument regarding speed cameras being placed "under the streetlight" to easily catch speed offenders (with doubtful effect on overall safety), all the while missing the really dangerous offenders.<br /><br />So what can be done? My friend suggested measuring the rate of defects that are NOT fixed for that version. The higher this number, the more your QA effort is focusing on the wrong issues.<br />Just remember that this is a statistical measure. Examining a specific defect might show that it was a good idea for QA to focus there, and the fix was avoided for other reasons. But when looking across a wide sample, it's unreasonable that a high number of defects are simply not relevant. If it's not a QA focus issue, something else stinks, and is worth looking at in any case.<br /><br />Another factor of an effective QA is fast coverage. What is fast? 
I don't have a ratio of QA time relative to development time. It's probably a factor of the type of changes (infrastructure, new features, integration work) done in the new version, as each type has a different ratio of QA to DEV effort (e.g. a kernel upgrade usually requires much more QA effort relative to DEV effort).<br />Maybe one of the readers has a number he's comfortable with - I'd love to hear.<br />What I do know is that version-to-version the coverage time should become shorter, and that the QA group should always aim to shorten this time further without significantly sacrificing overall quality. I expect QA groups to do risk-based coverage, automation for regression testing, and whatever other measures assist them in reducing the repeatable cost of QA coverage at the end of each version. The price/performance return on reducing the QA cycle is usually worth it, up to a point.<br /><br />To sum up, a good QA effort should:<br /><ul><li>Minimize QA misses</li><li>Minimize the defect junk factory</li><li>Minimize QA cycle time without compromising quality</li></ul>What do you think is a good QA effort? 
How are you measuring it?<br /><br />Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-61623628502640098962006-08-29T11:33:00.000+03:002006-08-29T11:38:33.125+03:00Severity and Priority - The DebateThere are a couple of alternatives for managing severity and priority in the Issue Tracker.<br />Although there are many resources out there on this subject (see <a href="http://del.icio.us/yyeret/priority_severity">http://del.icio.us/yyeret/priority_severity</a>) I’ll try to consolidate them and provide my 2c on the matter, as I think it’s an important subject.<br /><br /><span style="font-weight: bold;font-size:130%;" >Single-field Priority</span><br />The first, seemingly simpler, alternative is single-field priority, representing customer impact.<br />The idea here is to have only a single priority/severity field. The reporter assigns it according to his understanding of the customer impact (severity, likelihood, scenario relevance, etc.). Product Management or any other business stakeholder can then shift the priority according to the current release state and their understanding of the customer impact in the described scenario. The developers prioritize work accordingly.<br />The strength of this approach is in its simplicity, and the fact that several issue trackers adopt this methodology and therefore support it better “out of the box”.<br />The weakness is that once in the workflow the original reasoning for the priority can get lost, and there is no distinction between the customer impact and other considerations such as version stability, R&D preferences, etc.<br /><br />An example of why this is bad? Let's say Keith opened bug #1031 with a Major priority. 
Julie the PM later decided that since there is some workaround and we are talking about specific uses of a rarely used feature, the business priority is only Normal or Minor. Version1 is released with this bug unresolved. When planning Version2, Julie misses this bug since its priority is low, even though the feature it's related to is now the main focus of the version. Even if it isn't missed, finding this bug and understanding its roots and history is very hard, especially considering the database structure of issue trackers. History is available, but it's not as accessible as fields on the main table of issues…<br /><br />Brian Beaver provides a clear description of this approach in <a href="http://www.stickyminds.com/s.asp?F=S3224_ART_2" name="coltop">Categorizing Defects by Eliminating "Severity" and "Priority":</a><br /><span style="font-style: italic;">I recommend eliminating the Severity and Priority fields and replacing them with a single field that can encapsulate both types of information: call it the Customer Impact field. Every piece of software developed for sale by any company will have some sort of customer. Issues found when testing the software should be categorized based on the impact to the customer or the customer's view of the producer of the software. In fact, the testing team is a customer of the software as well. Having a Customer Impact field allows the testing team to combine documentation of outside-customer impact and testing-team impact. There would no longer be the need for Severity and Priority fields at all. 
The perceived impact and urgency given by both of those fields would be encapsulated in the Customer Impact field.</span><br /><br />Johanna Rothman in <a href="http://www.stickyminds.com/s.asp?F=S6288_COL_2">Clarify Your Ranking for System Problem Reports</a> talks about single-field risk/priority:<br /><br /><span style="font-style: italic;"> Instead of priority and severity, I use risk as a way to deal with problem reports, and how to know how to fix them. Here are the levels I choose:</span><br /><span style="font-style: italic;"> o Critical: We have to fix this before we release. We will lose substantial customers or money if we don't.</span><br /><span style="font-style: italic;"> o Important: We'd like to fix this before we release. It might perturb some customers, but we don't think they'll throw out the product or move to our competitors. If we don't fix it before we release, we either have to do something to that module or fix it in the next release.</span><br /><span style="font-style: italic;"> o Minor: We'd like to fix this before we throw out this product.</span><br /><br /><span style="font-weight: bold;font-size:130%;" >Bugzilla-style Severity+Priority</span><br /><br />Here, the idea is to use a severity field for the technical risk of the issue, and a priority field for the business impact. The Reporter assigns severity according to the technical description of the issue, and also provides all other relevant information - frequency, reproducibility, likelihood, and whether it's an important use-case/test-case or not. Optionally, the Reporter can suggest a priority based on the business impact of the issue on the testing progress, e.g. if it's a blocker to significant coverage, suggest a high priority. If he thinks this is an isolated use case, suggest a lower priority. A business stakeholder, be it PM, R&D Management, etc., assigns priority based on all technical and business factors, including the version/release plan. Developers work by Priority. 
Severity can be used as a secondary index/sort only.<br />Developers/Testers/Everyone working on issues should avoid working on high-severity issues with unset or low priority. This is core to the effectiveness of the Triage mechanism and the Issue Lifecycle Process.<br />Customers see descriptions in release notes, without priority or severity. The roadmap communicated to customers reflects the priority, but not in so many words.<br /><br />Strengths of this approach are:<br />* Clear documentation of the business and technical risks, especially in the face of changing priorities.<br />* Better reporting on product health when technical risk is available and not hidden behind business-impact glasses.<br />* Less drive for reporters to push for high priority to signify they found a critical issue. It’s legitimate to find a critical issue and still understand that due to business reasons it won't be high priority.<br />* Better accommodation of issues that transcend releases - where the priority might change significantly once in a new release.<br /><br />The weaknesses are that it’s a bit more complex, especially for newbies, and might require some customization of your issue tracker, although if your tool cannot do this quite easily, maybe you have the wrong tool…<br />In addition, customers have trouble understanding the difference between their priority for the issue and the priority assigned within the product organization. The root cause here is probably the lack of transparency regarding the reasoning behind the business priority. I’d guess that if a significant part of the picture were shared, most customers would probably understand (if not agree with) the priorities assigned to their issues. It's up to each organization to decide where it stands on the transparency issue. 
(see <a href="http://software.ericsink.com/bos/Transparency.html">Tenets of Transparency</a> for a very interesting discussion on the matter in the wonderful weblog of <a href="http://software.ericsink.com/">Eric Sink</a>)<br /><br />To see how our example works here: Keith opens the bug, assigns Major severity, and suggests a low priority, since the bug blocks just one low-priority test case. Julie the PM sees the bug and decides to assign a low priority value, so the bug is left for future versions for all practical purposes. When planning V2, Julie goes over high-severity issues related to the features under focus for the version, and of course finds this issue, as it has Major severity.<br /><br />See the following resources for this approach:<br />* <a href="http://c2.com/cgi/wiki?DifferentiatePriorityAndSeverity">http://c2.com/cgi/wiki?DifferentiatePriorityAndSeverity</a><br />* <a href="http://www.stickyminds.com/sitewide.asp?Function=edetail&ObjectType=COL&ObjectId=6323">Priority is Business; Severity is Technical</a>:<br /><br /><span style="font-style: italic;"> business priority: "How important is it to the business that we fix the bug?" technical severity: "How nasty is the bug from a technical perspective?" These two questions sometimes arrive at the same answer: a high severity bug is often also high priority, but not always. 
Allow me to suggest some definitions.</span><br /><br /><span style="font-style: italic;"> Severity levels:</span><br /><span style="font-style: italic;"> o Critical: the software will not run</span><br /><span style="font-style: italic;"> o High: unexpected fatal errors (includes crashes and data corruption)</span><br /><span style="font-style: italic;"> o Medium: a feature is malfunctioning</span><br /><span style="font-style: italic;"> o Low: a cosmetic issue</span><br /><br /><span style="font-style: italic;"> Priority levels:</span><br /><span style="font-style: italic;"> o Now: drop everything and take care of it as soon as you see this (usually for blocking bugs)</span><br /><span style="font-style: italic;"> o P1: fix before next build to test</span><br /><span style="font-style: italic;"> o P2: fix before final release</span><br /><span style="font-style: italic;"> o P3: we probably won't get to these, but we want to track them anyway</span><br /><br />* Corey Snow commented on <a href="http://www.stickyminds.com/s.asp?F=S6288_COL_2">Clarify Your Ranking for System Problem Reports</a>:<br /><span style="font-style: italic;"> Comment: Great subject. This is a perennial topic of debate in the profession. The question at hand is: Can a defect attribute that is ultimately irrelevant still serve an important function? Having implemented and/or managed perhaps a dozen different defect tracking systems over the years, I actually prefer having both Priority and Severity fields available for some (perhaps) unexpected reasons. Priority should be used as the 'risk scale' that the author describes. 3 levels, 5 levels, or whatever. Priority is used as a measure of risk. How important is it to fix this problem? Label the field 'Risk' if that makes it more clear. Not so complicated, right? So what good is Severity? Psychology! Its very existence makes the submitter pause to consider and differentiate between the Priority and Severity of the defect. 
In other words, without Severity, the submitter might be inclined to allow Severity attributes to influence the relative Priority value. Example 1: Defect causes total system meltdown. Only users in Time Zone GMT +5.45 (Kathmandu) are affected on leap years. There is one user in that time zone, but there is a manual workaround, and a year to fix it besides. Priority=Super Low, Severity=Ultra High Severity gives a place for the tester to 'vent' about their spectacular meltdown, without influencing the relative Priority rating. Example 2: Defect is a minor typo. Typo is on the 'Welcome to Our Product' screen, which is the first thing every user will see. Priority=Ultra High, Severity=Super Low Again, Severity gives a place for the tester to express how unimportant the defect is from a functional perspective, without clouding their Priority assessment. I once managed a defect tracking system with only a Priority field. This frequently led to a great deal of wasted time in defect discussion meetings as one side would argue about Severity attributes while another would argue about Priority attributes, but the parties were not even aware of the distinction that was actually dividing them. Having both fields serves to head off this communication problem, even if Severity is completely irrelevant when fix/no fix decisions are actually made. ~ Corey Snow (03/11/03)</span><br /><br /><span style="font-style: italic;"> Author's Response: Corey, Great counterpoint to my argument. ~ Johanna Rothman (03/12/03)</span><br /><br /><span style="font-size:130%;"><span style="font-weight: bold;">Personal Favorite</span></span><br />As can probably be understood by now, my personal favorite is the Severity+Priority approach. 
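To make the two-field workflow concrete, here is a minimal sketch of the triage ordering described above. The field names and rank scales are illustrative, not any particular tracker's schema: developers pull work by priority, severity only breaks ties, and high-severity issues with no priority set are routed to triage instead of being worked on.

```python
# Sketch of a Severity+Priority triage queue.
# Field names and rank scales are illustrative, not any tracker's schema.
SEVERITY_RANK = {"critical": 1, "high": 2, "medium": 3, "low": 4}
PRIORITY_RANK = {"P1": 1, "P2": 2, "P3": 3}

def work_queue(issues):
    """Order triaged issues by business priority; severity only breaks ties."""
    triaged = [i for i in issues if i["priority"] is not None]
    return sorted(triaged,
                  key=lambda i: (PRIORITY_RANK[i["priority"]],
                                 SEVERITY_RANK[i["severity"]]))

def needs_triage(issues):
    """High-severity issues with no priority set go to triage, not to work."""
    return [i for i in issues
            if i["priority"] is None and SEVERITY_RANK[i["severity"]] <= 2]

issues = [
    {"key": "BUG-1031", "severity": "high", "priority": None},
    {"key": "BUG-1040", "severity": "low", "priority": "P1"},
    {"key": "BUG-1044", "severity": "critical", "priority": "P2"},
]
print([i["key"] for i in work_queue(issues)])    # BUG-1040 before BUG-1044
print([i["key"] for i in needs_triage(issues)])  # BUG-1031 awaits triage
```

Note how the low-severity P1 bug outranks the critical-severity P2 bug in the work queue, while the high-severity bug with unset priority is flagged for triage rather than silently picked up.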
I confess I don’t have much experience with the single-priority approach, but I really feel the Severity+Priority way is very effective, without significant costs, once every stakeholder understands it.<br /><br />What is your favorite here?Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-63424989151081447892006-08-29T10:43:00.000+03:002006-08-29T10:53:48.297+03:00Favorite resources - round I<div class="delicious-posts" id="delicious-posts-yyeret"><h2 class="delicious-banner sidebar-title">Here is a first round of my favourite resources.<br /></h2>As those who read my posts have probably noticed already, I'm quite a heavy user of <a href="http://del.icio.us/">del.icio.us</a>. I won't go into what it is; I'm sure those interested can go there or Google it to see whether they like it or not.<br />I'm playing around with <a href="http://www.google.com/notebook">Google Notebook</a> as an alternative, with better Google integration obviously, albeit with fewer taxonomy/tagging capabilities.<br /><br />In any case, I highlighted some of my favourite resources under the <a href="http://del.icio.us/yyeret/rndblog_resources">rndblog_resources</a> tag, and provided some notes to accompany the links and explain why I find them essential in the favorites list of anyone interested in the contents of this blog (and probably for some people who are NOT that interested in this blog, but then again, they won't be here...)<br /><br />There are more gems in my account, so expect future rounds based on existing and new resources I find. 
I'd love to hear about more resources along those lines - either comment or suggest them to me via the del.icio.us network.<br />Anyone interested in tracking my favorites is welcome to <a href="http://del.icio.us/network?add=yyeret">join my network</a><br /><br />Now for the resources themselves (copy-paste from a <a href="http://del.icio.us/help/linkrolls">del.icio.us linkroll</a> page)<br /><h2 class="delicious-banner sidebar-title"><a href="http://del.icio.us/"><img src="http://del.icio.us/static/img/delicious.small.gif" alt="del.icio.us" height="10" width="10" /></a> <a href="http://del.icio.us/yyeret/rndblog_resources">Resources for Blog</a></h2><ul><li class="delicious-post delicious-odd">» <a class="delicious-link" title="Articles and other resources from one of the thought leaders in the software testing/QA world." href="http://www.satisfice.com/articles.shtml">Articles - James Bach - Satisfice, Inc.</a> <p class="delicious-extended">Articles and other resources from one of the thought leaders in the software testing/QA world.</p></li><li class="delicious-post delicious-even">» <a class="delicious-link" title="compilation of many interesting testing resources" href="http://www.io.com/%7Ewazmo/qa/">Bret Pettichord's Software Testing Hotlist</a> <p class="delicious-extended">compilation of many interesting testing resources</p></li><li class="delicious-post delicious-odd">» <a class="delicious-link" title="Articles and other resources from one of the thought leaders in the software testing/QA world." href="http://www.testing.com/writings.html">Brian Marick - Writings</a> <p class="delicious-extended">Articles and other resources from one of the thought leaders in the software testing/QA world.</p></li><li class="delicious-post delicious-even">» <a class="delicious-link" title="Very interesting view on the multiple streams/versions problem, counting on SCM capabilities to ease the problem." 
href="http://www.perforce.com/perforce/conf2001/rees/WPRees.html">Can we ship yet</a> <p class="delicious-extended">Very interesting view on the multiple streams/versions problem, counting on SCM capabilities to ease the problem.</p></li><li class="delicious-post delicious-odd">» <a class="delicious-link" title="Articles and other resources from one of the thought leaders in the software testing/QA world." href="http://kaner.com/articles.html">Cem Kaner - Publications</a> <p class="delicious-extended">Articles and other resources from one of the thought leaders in the software testing/QA world.</p></li><li class="delicious-post delicious-even">» <a class="delicious-link" title="The ultimate independent resource for CM material" href="http://www.cmcrossroads.com/">CM Crossroads - Home</a> <p class="delicious-extended">The ultimate independent resource for CM material</p></li><li class="delicious-post delicious-odd">» <a class="delicious-link" title="From the hiring expert, a focus on the cultural aspects (too often ignored)" href="http://hiring.inc.com/columns/jrothman/20050204.html">Cultural Fits and Starts</a> <p class="delicious-extended">From the hiring expert, a focus on the cultural aspects (too often ignored)</p></li><li class="delicious-post delicious-even">» <a class="delicious-link" title="Thought-inducing paper on what testing harnesses should look like. Very relevant when thinking about Automation" href="http://safsdev.sourceforge.net/FRAMESDataDrivenTestAutomationFrameworks.htm">Data Driven Test Automation Frameworks</a> <p class="delicious-extended">Thought-inducing paper on what testing harnesses should look like. 
Very relevant when thinking about Automation</p></li><li class="delicious-post delicious-odd">» <a class="delicious-link" title="Initial work on defect tracking patterns" href="http://c2.com/cgi/wiki?DefectTrackingPatterns">DefectTrackingPatterns</a> <p class="delicious-extended">Initial work on defect tracking patterns</p></li><li class="delicious-post delicious-even">» <a class="delicious-link" title="A remarkable blog mixing development and business/marketing, especially applicable to all of us in small companies - both ISVs and startups." href="http://software.ericsink.com/">Eric Sink's Weblog</a> <p class="delicious-extended">A remarkable blog mixing development and business/marketing, especially applicable to all of us in small companies - both ISVs and startups.</p></li><li class="delicious-post delicious-odd">» <a class="delicious-link" title="Great material, the highlight being an interview test for testers I've been using successfully." href="http://www.grove.co.uk/">Grove Consultants Software Testing</a> <p class="delicious-extended">Great material, the highlight being an interview test for testers I've been using successfully.</p></li><li class="delicious-post delicious-even">» <a class="delicious-link" title="Anonymous writer provides a cynical, humorous view on issues which will sound right at home for any manager of developers/testers" href="http://www.hacknot.info/hacknot/action/home">Hacknot</a> <p class="delicious-extended">Anonymous writer provides a cynical, humorous view on issues which will sound right at home for any manager of developers/testers</p></li><li class="delicious-post delicious-odd">» <a class="delicious-link" title="As usual, a cynical but beneficial view, this time a spin on interviewing." 
href="http://www.hacknot.info/hacknot/action/showEntry?eid=70">Hacknot - Interview With The Sociopath</a> <p class="delicious-extended">As usual, a cynical but beneficial view, this time a spin on interviewing.</p></li><li class="delicious-post delicious-even">» <a class="delicious-link" title="Interesting practices/patterns provided by Perforce but with quite a generic appeal" href="http://www.perforce.com/perforce/bestpractices.html">High-level Best Practices in SCM</a> <p class="delicious-extended">Interesting practices/patterns provided by Perforce but with quite a generic appeal</p></li><li class="delicious-post delicious-odd">» <a class="delicious-link" title="Highlights of interviews with leading people in the testing world. Use as pointers - if someone is interesting, go read their works..." href="http://www.whatistesting.com/interviews.htm">Interviews</a> <p class="delicious-extended">Highlights of interviews with leading people in the testing world. Use as pointers - if someone is interesting, go read their works...</p></li><li class="delicious-post delicious-even">» <a class="delicious-link" href="http://www.joelonsoftware.com/index.html">Joel on Software</a> </li><li class="delicious-post delicious-odd">» <a class="delicious-link" title="A single location for the works of one of my favorite content-providers on stickyminds.com" href="http://www.qualitytree.com/feature/index.htm">Quality Tree Software, Inc. - Publications</a> <p class="delicious-extended">A single location for the works of one of my favorite content-providers on stickyminds.com</p></li><li class="delicious-post delicious-even">» <a class="delicious-link" title="Articles and other resources from one of the thought leaders in the Product Management / Software development/Testing / Agile world" href="http://www.jrothman.com/papers.html">Rothman Consulting Group, Inc. 
- Writings & Presentations</a> <p class="delicious-extended">Articles and other resources from one of the thought leaders in the Product Management / Software development/Testing / Agile world</p></li><li class="delicious-post delicious-odd">» <a class="delicious-link" title="Interesting view on how to handle the multiple code streams issue tracking problem. Not groundbreaking, but a good summary of what approaches are being used out there" href="http://ask.slashdot.org/article.pl?sid=05/10/06/2248259">Slashdot | Bug Tracking Across Multiple Code Streams?</a> <p class="delicious-extended">Interesting view on how to handle the multiple code streams issue tracking problem. Not groundbreaking, but a good summary of what approaches are being used out there</p></li><li class="delicious-post delicious-even">» <a class="delicious-link" title="a repository for lots of development-related essays/articles, focusing on a pragmatic view of the complex world we work in" href="http://www.developerdotstar.com/index.html">Software Development, Computer Programming, Software Design - developer.* - DeveloperDotStar.com</a> <p class="delicious-extended">a repository for lots of development-related essays/articles, focusing on a pragmatic view of the complex world we work in</p></li><li class="delicious-post delicious-odd">» <a class="delicious-link" title="very interesting pragmatic work on the relatively uncharted domain of software builds" href="http://www.cmcrossroads.com/bradapp/acme/repro/SoftwareReconstruction.html">Software Reconstruction: Patterns for Reproducing Software Builds</a> <p class="delicious-extended">very interesting pragmatic work on the relatively uncharted domain of software builds</p></li><li class="delicious-post delicious-even">» <a class="delicious-link" title="Used mainly for looking for tool-related answers, quite legacy-focused in my opinion." 
href="http://www.qaforums.com/">Software Testing and Quality Assurance Online Forums</a> <p class="delicious-extended">Used mainly for looking for tool-related answers, quite legacy-focused in my opinion.</p></li><li class="delicious-post delicious-odd">» <a class="delicious-link" title="basic concepts of SCM from the developer of SourceGear Vault (a VSS replacement system)" href="http://software.ericsink.com/scm/source_control.html">Source Control HOWTO</a> <p class="delicious-extended">basic concepts of SCM from the developer of SourceGear Vault (a VSS replacement system)</p></li><li class="delicious-post delicious-even">» <a class="delicious-link" title="Stickyminds is a vast repository of essays from various thought-leaders in the development/QA world. It’s the web presence of “Better Software Magazine” which is an interesting magazine (but probably not worth the subscription these days)" href="http://www.stickyminds.com/">StickyMinds Home Page</a> <p class="delicious-extended">Stickyminds is a vast repository of essays from various thought-leaders in the development/QA world. 
It’s the web presence of “Better Software Magazine” which is an interesting magazine (but probably not worth the subscription these days)</p></li><li class="delicious-post delicious-odd">» <a class="delicious-link" title="the groundbreaking pattern work for the SCM world by Brad Appleton" href="http://www.cmcrossroads.com/bradapp/acme/branching/patterns.html">Streamed Lines: Branching Patterns for Parallel Software Development</a> <p class="delicious-extended">the groundbreaking pattern work for the SCM world by Brad Appleton</p></li><li class="delicious-post delicious-even">» <a class="delicious-link" title="a classic from Joel on interviewing" href="http://www.joelonsoftware.com/articles/fog0000000073.html">The Guerrilla Guide to Interviewing</a> <p class="delicious-extended">a classic from Joel on interviewing</p></li></ul></div>Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-41349662990104486682006-08-24T12:48:00.000+03:002006-08-24T13:04:50.913+03:00David V. Lorenzo posts favorite interviewing questions of people on his <a href="http://careerintensity.com/blog">Career Intensity Blog</a><br /><br />Here is his <a href="http://careerintensity.com/blog/2006/07/18/five-favorite-interview-questions-yuval-yeret/trackback/">post</a> about mine...<br /><br />At the risk of tipping off the people I interview in the future, also check out my<a href="http://del.icio.us/yyeret/interviewing"> interviewing</a> tag on <a href="http://del.icio.us/">del.icio.us</a> for a lot of resources on the matter.<br /><br />Why am I open about this?<br />One of my main beliefs in interviewing, btw, is to try and understand behavioural aspects in addition to skills. 
Someone might get a head start on the skills questions if he prepares, but I think that in that area, if someone is diligent enough to research his interviewer, go and read multiple resources, and learn enough to know the subject, he's earned extra credit right out of the gate...<br />For the behavioural aspects the discussion is more free-flowing, and no preparation can really help you there.<br /><br />I cannot finish a post about interviewing without mentioning <a href="http://www.jrothman.com/">Johanna Rothman</a>. She's writing the <a href="http://www.jrothman.com/weblog/htpblogger.html">Hiring Technical People</a> blog, and wrote the <a href="http://www.amazon.com/exec/obidos/redirect?link_code=as2&path=ASIN/0932633595&amp;tag=rndmgmttrblog-20&camp=1789&creative=9325">Hiring The Best Knowledge Workers, Techies & Nerds: The Secrets & Science Of Hiring Technical People</a><img src="http://www.assoc-amazon.com/e/ir?t=rndmgmttrblog-20&l=as2&o=1&a=0932633595" alt="" style="border: medium none ! important; margin: 0px ! important;" border="0" height="1" width="1" /> book. Check it out.<br /><br /><iframe src="http://rcm.amazon.com/e/cm?t=rndmgmttrblog-20&o=1&p=8&l=as1&asins=0932633595&fc1=000000&IS2=1<1=_blank&amp;amp;lc1=0000ff&bc1=000000&bg1=ffffff&f=ifr" style="width: 120px; height: 240px;" marginwidth="0" marginheight="0" frameborder="0" scrolling="no"></iframe>Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-57675450395598125042006-08-21T14:20:00.000+03:002006-08-21T14:46:48.999+03:00Fogbugz best practices and other resourcesI intend to post some references to resources I'm fond of in the area of R&D, QA, methodology and the like.<br /><br />In the meantime, anyone who's interested in what I have to say will probably see some value in looking at <a href="http://www.fogcreek.com/FogBugz/docs/50/index.html">FogBugz Online Documentation. 
</a>I was referred there by <a href="http://www.testingreflections.com/node/view/4043">Zeljko Filipin's blog </a>which is aggregated by <a href="http://www.testingreflections.com/">http://www.testingreflections.com.</a><br /><br />In addition, my <a href="http://del.icio.us/yyeret">del.icio.us</a> links might provide good resources. see<br /><a href="http://del.icio.us/yyeret/DefectTracking">DefectTracking</a> <a href="http://del.icio.us/yyeret/QA">QA</a> <a href="http://del.icio.us/yyeret/SCM">SCM</a> <a href="http://del.icio.us/yyeret/accurev">accurev</a> <a href="http://del.icio.us/yyeret/bugtracking">bugtracking</a> <a href="http://del.icio.us/yyeret/bugzilla">bugzilla</a> <a href="http://del.icio.us/yyeret/build">build</a> <a href="http://del.icio.us/yyeret/ccb">ccb</a> <a href="http://del.icio.us/yyeret/continuous_integration">continuous_integration</a> <a href="http://del.icio.us/yyeret/continuousintegration">continuousintegration</a> <a href="http://del.icio.us/yyeret/defect_management">defect_management</a> <a href="http://del.icio.us/yyeret/defect_tracking">defect_tracking</a> <a href="http://del.icio.us/yyeret/defecttrackingmultiplebranches">defecttrackingmultiplebranches</a> <a href="http://del.icio.us/yyeret/developement">developement</a> <a href="http://del.icio.us/yyeret/development">development</a> <a href="http://del.icio.us/yyeret/eclipse">eclipse</a> <a href="http://del.icio.us/yyeret/issue_tracking">issue_tracking</a> <a href="http://del.icio.us/yyeret/jira">jira</a> <a href="http://del.icio.us/yyeret/keyword_driven_automation">keyword_driven_automation</a> <a href="http://del.icio.us/yyeret/methodologies">methodologies</a> <a href="http://del.icio.us/yyeret/methodology">methodology</a> <a href="http://del.icio.us/yyeret/multiple_versions">multiple_versions</a> <a href="http://del.icio.us/yyeret/patterns">patterns</a> <a href="http://del.icio.us/yyeret/perforce">perforce</a> <a 
href="http://del.icio.us/yyeret/priority_severity">priority_severity</a> <a href="http://del.icio.us/yyeret/product_management">product_management</a> <a href="http://del.icio.us/yyeret/software">software</a> <a href="http://del.icio.us/yyeret/subversion">subversion</a> <a href="http://del.icio.us/yyeret/test_automation">test_automation</a> <a href="http://del.icio.us/yyeret/test_labs">test_labs</a> <a href="http://del.icio.us/yyeret/test_management">test_management</a> <a href="http://del.icio.us/yyeret/testing">testing</a> <a href="http://del.icio.us/yyeret/testing_labs">testing_labs</a> <a href="http://del.icio.us/yyeret/vendorbranches">vendorbranches</a>Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-43330320442685430882006-08-20T15:36:00.000+03:002006-08-20T16:09:19.239+03:00QA/DEV Protocols - Opening high quality bugsIn another post in the series about QA/DEV protocols, I'll talk about opening high quality bugs, why it's important, what forces are operating on each side of the trench here, and try to describe an approach that might improve the state of affairs a bit.<br /><br />First - a definition. What is a <span style="font-weight: bold;">high quality bug</span>? To be clear, we are talking about a <span style="font-weight: bold;">bug report</span>, of course. The quality here refers to the accuracy of the scenario, describing <span style="font-weight: bold;">exactly</span> what is necessary to reproduce, not more, not less. It refers to providing all the auxiliary information required to analyze the bug and start working toward a resolution. 
It also aims to report <span style="font-weight: bold;">a single</span> bug, not several issues.<br /><br />It might be easier to convey the point by showcasing some examples of <span style="font-weight: bold;">low quality bug reports</span>:<br /><ul><li>Missing logs</li><li>Logs of different components are not time-synched, with no way to understand the time-space relationship. (This is relevant mainly for distributed systems.)</li><li>Errors happened, but are not mentioned explicitly in the bug report<br /></li><li>Bug report focuses on analysis, not on reporting the facts. Analysis is a bonus for QA engineers, only relevant AFTER reporting the full details.</li><li>Much happened on the system - a couple of different scenarios - and the bug is hidden somewhere in piles of logs/information.<br /></li><li>An unclear bug report, which business people (PM) and DEVs find difficult to prioritize and understand.</li><li>A long, complex scenario is reported while the bug is reproducible via a short, simple one.</li><li>The reported severity doesn't match what really happened, leading to "cry wolf" or serious issues masked as trivialities.</li><li>Multiple bugs in the same report<br /></li><li>Numbers - Avoid using statements like "very large" or "a lot of time". Always include the numbers you are talking about. What seems large to you may seem small to someone else, or vice versa.</li></ul>Also check out <a href="http://www.fogcreek.com/FogBugz/docs/50/Articles/TheBasicsofBugTracking.html">FogBugz - The Basics of Bug Tracking</a><br /><br />Now that we have deduced what a<span style="font-weight: bold;"> high quality bug report </span>is, we can try to understand the forces influencing the people opening bugs and why sometimes low quality bug reports do happen:<br /><ul><li>When QA people find a bug, they want to report it and move on. 
Sometimes they feel they are measured by quantity, not quality; sometimes they actually are...</li><li>Especially for hard cases, the scenario is not that clear, and indeed there is some mix of events (including a full moon on a Friday the 13th for the real nut cases) that cannot be easily reduced to a simple scenario. Trying to do this without the internal understanding of a DEV guy might take a very long time without being very effective.<br /></li><li>QA engineers are human. When the test setup/teardown is complex and requires attention to many small details (clear logs, sync time, grep for patterns in logs, etc.), things will get lost from time to time.<br /></li><li>In some cases, the QA group or a specific engineer is not aware of the price of <span style="font-weight: bold;">low quality bug reports</span>. (point him here...). DEV guys might not be able to put a finger on it either, or are just entrenched and prefer to point fingers and exchange emails instead of working to establish a protocol.</li></ul>So what can be done?<br /><ul><li>Discuss and educate - like I hinted, sometimes the most important step is to talk, map the expectations and root causes, and agree on a protocol, with the relevant SLAs.<br /></li><li>Assist QA by providing small automated snippets that handle test setup/teardown/analysis, guide them through the steps to a high quality report, and really leave them with the important step of reducing the scenario to the minimum. (btw, it's possible to do the scenario reduction in automated testing harnesses as well, by retracting steps and verifying health and expected results very frequently)</li><li>Work with very granular test cases - minimizing the scenario length. Still, combining different test cases in parallel will add complexity, but when the building blocks are small, it's better than nothing. </li><li>The issue tracking system should guide the reporter through the important information/steps to a high quality report. 
</li><li>DEVs should provide constructive feedback - when bug reports are below par, and when they are above. Do it privately when below, and publicly when above.<br /></li><li>Do "peer review" of bug reports when relevant - for rookie QA engineers, for difficult bugs, etc.<br /></li><li>In hard cases, call in a DEV and get his advice on what needs to be done to make sure the report has its best chance to become a high quality one.<br /></li></ul>Any other ideas?Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com1tag:blogger.com,1999:blog-15467294.post-74189549364552799262006-08-20T14:39:00.000+03:002006-08-20T15:28:24.236+03:00QA/DEV Protocols - Calling developers to the lab<h1 style="font-weight: normal;"><span style="font-size:100%;">I'm going to dedicate a couple of posts to the relationships between QA and Development (DEV) organizations.</span></h1>Anyone who's ever been in either of those organizations knows that sometimes there seems to be a conflict of interest between QA and DEV, which can lead to friction between the groups and the people. Obviously when both organizations are running under the same roof, there must be some joint interest/goal, but the challenge is to identify the expectations of each group in order to work toward their goal and accomplish their mission effectively.<br /><br />The difficult cases are those that put more strain on one party, in order to optimize the effectiveness of another. Example - developers are asked to unit/integrate/system test their software before handing over a build to QA. Some developers might say that this is work that can be done by QA, and their time is better spent developing software. 
The QA engineers will say that they need to receive stable input from the DEVs in order to streamline the coverage progression, and that the sooner issues are found, the lower the cost to fix them.<br /><br />One way to look at these "protocols" between the groups is through the lens of <a href="http://en.wikipedia.org/wiki/Theory_of_constraints">TOC (Theory of Constraints)</a>: identify the bottlenecks of the overall system/process, and fine-tune the protocol to relieve the bottleneck. People in those groups, and especially the leaders, should be mature enough to know that sometimes doing the "right thing" might be to take on more work, sometimes even work that is not native to their group.<br /><br />One example is the issue of when to ask DEV guys to see problems the QA engineers have discovered.<br />Reasons for calling DEV might be:<br /><ul><li>Wish to reopen a bug</li><li>A bug was reproduced and a developer is interested in seeing the reproduction.</li><li>New severe bug<br /></li></ul>There are a couple of forces affecting this issue:<br /><ul><li>QA wishes to finish the context of the specific problem/defect, open the bug, and get on with their work.<br /></li><li>DEV wishes to finish the context of their specific task, and wishes to avoid the "context switch" of looking at the QA issue.<br /></li><li>In general, both QA and DEV have learned to wave the "Context Switching Overhead" flag quite effectively. (A more pragmatic conclusion is that some context switching overhead is unavoidable, and sometimes the alternative is more expensive...)</li><li>In some cases, "saving" the state of the problem for asynchronous later processing by DEV is difficult or takes too many resources to be a practical alternative.</li></ul>A possible compromise between all those forces is to define some sort of SLA between the groups, stating the expected service provided by DEV to QA according to the specific situation (Reopen, Reproduced, New Severe, etc.). 
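For concreteness, here is a minimal sketch of what such an SLA might look like when encoded as data. The situation names come from this post (Reopen, Reproduced, New Severe); the response times are invented placeholders that each organization would negotiate for itself, not recommendations.

```python
from datetime import datetime, timedelta

# Hypothetical response-time SLA per situation; the hour values are
# placeholders each QA/DEV pair would agree on, not recommendations.
DEV_RESPONSE_SLA = {
    "new_severe": timedelta(hours=2),   # new severe bug: come look soon
    "reproduced": timedelta(hours=8),   # a live reproduction is waiting
    "reopen": timedelta(hours=24),      # reopened bug: next reasonable slot
}

def respond_by(situation: str, reported_at: datetime) -> datetime:
    """Latest time by which a DEV should come to the lab for this issue."""
    return reported_at + DEV_RESPONSE_SLA[situation]
```

Writing the expectations down like this is the point: QA knows when it may escalate without feeling it is asking for favors, and DEV knows how long it may stay in its current context.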
This SLA can give QA a time frame in which they can expect answers, without feeling they are asking for personal favours or "bothering" the DEVs. The developers get some reasonable time to finish up the context they were in without feeling they are "avoiding" QA. The SLA can also cover the expected actions to be taken by QA before calling in the DEV, or in parallel to waiting for them. This maximizes the effectiveness of the DEV person when he does free up to look at the issue, while better utilizing the time of the QA while waiting. (for example - fill in the bug description, look for existing similar bugs, provide connectivity information for the test environment, log excerpts/screenshots, etc.)<br /><br />Another question is who to call on when QA needs help. The options here depend on the way the DEV group/teams share responsibility for the different modules of the system.<br /><ul><li>In case there are strict "owners" for each module, and they are the only ones capable of effectively assisting QA, the only reasonable choice is to call on them... this requires everyone to always be available at some level.<br />I have to say though that I strongly advise against such an ownership mode. Look at <a href="http://bradapp.blogspot.com/2005/03/individual-vs-collective-code.html">code stewardship</a> for a better alternative (in my opinion) and see below how it looks better for this use case and in general...<br /></li><li>In case there is a group of people that can look at each issue, one alternative is to have an "on call" cycle where people know they have QA duty for a day/week. In this case there will be issues which will require some learning on their part, and perhaps assistance from the expert on a specific area. 
This incurs overhead, but is worth its weight in gold when the time comes and you need to support that area in real time, need to send the DEVs to the field to survive on their own, or when the owner/expert moves on...</li></ul><br />To sum up, like in many h2h (human-to-human) protocols, understanding the forces affecting both sides of the transaction is key to creating a win-win solution. A pragmatic view that tries to minimize the price paid and shows the advantages of the solution to both sides and to the overall organization can solve some hard problems, as long as people are willing to openly discuss their issues and differences. I've seen this work in my organization; hopefully it helps others as well.Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com1tag:blogger.com,1999:blog-15467294.post-55204522049822859472006-08-20T12:07:00.000+03:002006-08-20T12:14:26.863+03:00Some greenpepperAs I previously hinted in <a href="http://rndmgmttrenches.blogspot.com/2005/08/building-test-case-management-solution.html">"Building a test case management solution"</a>, I'm personally of the opinion that the holy grail in test case management is in finding a way to manage tests via an issue tracker database.<br /><br />In the time since that post I haven't found much information about this, and haven't seen tools take this approach.<br /><br />Therefore, it was great to stumble upon <a class="postTitle" href="http://agiletoolkit.libsyn.com/index.php?post_id=116677">Agile06 - François Beauregard - GreenPepper Software</a> - a podcast discussion where I learned about <a href="http://www.huisken.com/site/website/home.page">greenpepper</a>, a test automation and management system developed by <a href="http://confluence.atlassian.com/display/APW/Pyxis+Technologies">Pyxis Technologies </a>which closely integrates with <a href="http://atlassian.com/software/jira">JIRA</a> to create an issue tracker solution for managing test cases, and adopts a <a 
href="http://fitnesse.org/">FitNesse</a>-like approach (on drugs) to table-driven testing, over the <a href="http://www.atlassian.com/software/confluence/">Confluence </a>wiki. I liked the choice of tools to integrate with as well as the simple, pragmatic ideas.<br />Last week Frank and Christian demonstrated the system to me and a colleague. I was impressed, and would recommend that anyone with an interest in the test management or issue tracking domains track those guys. I know I will.<br /><br />One issue I've been thinking over regarding both FitNesse and GreenPepper is how to take those tools that are focused on one-shot automated testing, and adapt them to track manual testing documentation, results, etc. Finding a good way to solve this problem might assist with adoption of those tools/frameworks in environments other than the classic agile web development shops that I suspect make up the majority of adopters at this point.Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-1155555315063006232006-08-14T14:35:00.000+03:002006-08-14T14:35:15.120+03:00Tracking Issues for Multiple Releases<strong><u>Pattern: TrackingIssuesForMultipleReleases</u></strong><br/><strong>Context: </strong>Multiple versions are being actively developed/maintained. New issues are discovered on one version, and their status and progress need to be tracked on multiple versions, with minimal overhead but maximal accuracy.<br/>Active versions – active branches where a build was already issued and documented, and new builds are planned.<br/><strong>Problem: </strong>When a new issue is discovered, need to understand start of applicability (when it was introduced into the product) and end of applicability on each version branch (when it was solved). Due to branching, the state might be complex. Tracking the workflow for solving the issue on each branch/version is also complex when working with a naïve model. 
In addition, managing this can add considerable overhead – unchecked, this can lead to an explosion of bureaucracy and tracking overhead, leading to a lack of faithful representation as a backlash. <br/><strong>Forces: </strong><br/><ul><li>Want to accurately visualize the status of each issue on each active version, and whenever new versions are created.</li><br/><li>Want to preserve a relationship between the same issue in the various versions, so progress/understanding from previous work can be reused (reproduction efforts/success, solution, workarounds, etc.)</li><br/><li>Assist project/product management in tracking issues that are active in each version. </li><br/><li>Allow workflow to proceed for each version on a standalone basis (e.g. QA wants to verify an issue is closed on all active versions)</li></ul><strong>Solution: </strong><br/>When applicability is determined (usually when R&D analyzes the issue), use the cloning capability in the issue tracker to create a new issue for each active version. The clone has all of the data of the original issue, including a clone link to the original issue. The “applicable in version” for the clone should be the active version that the clone was created for. The “Fix for version” should be a milestone/planned version on the same version branch.<br/><br/><strong>Resulting Context: </strong><br/><ul><li>The list of open issues for each version lists all issues. 
No need to calculate the list of issues based on input from other versions.</li><br/><li>Workflow can proceed on each version in parallel.</li><br/><li>Naively, even if an issue is about to be solved, clones are still created, and will be resolved independently even if based on the same promoted changeset.</li></ul><br/><strong>Variants: (see below)</strong><br/><strong>Related Patterns:</strong><br/><br/><br/><strong><u>Pattern: JustInTimeCloning</u></strong><br/><strong>Context: </strong>See <strong><u>TrackingIssuesForMultipleReleases</u></strong><br/><strong>Problem: </strong>When working with <strong><u>TrackingIssuesForMultipleReleases</u></strong>, cloning overhead is substantial, and not always necessary. Unchecked, this can lead to an explosion of bureaucracy and tracking overhead, leading to a lack of faithful representation as a backlash. <br/><strong>Forces: </strong><br/><ul><br/><li>Allow <strong><u>TrackingIssuesForMultipleReleases</u></strong></li><br/><li>Want to minimize overhead for reporters, developers, QA verification. Aim to avoid O(N) processing overhead in the number of active versions, whenever possible.</li><br/><li>Separate workflow is needed only for versions which were already delivered to QA. </li><br/><li>Visualizing version contents at this level is needed only when delivering to QA and beyond.</li><br/><li>Need to track which versions contain a fix and which don’t. </li><br/><li>Motivation to merge the original fix to all applicable version branches as soon as possible while still in context</li><br/><li>Motivation to focus on the version branch you are working on, and avoid the overhead of merging/integrating/testing to other versions. 
</li><br/><li>For each version, the following might be the case regarding original fix applicability:</li><br/><li>It might apply cleanly or with minor modifications, in which case the motivation is to apply it as soon as possible while still in context, and in which case the need for QA verification is lower (while still required depending on the version state)</li><br/><li>It might not be applicable, and require a whole new solution. In this case the motivation is usually to track the issue as open for the version, and leave it to the appropriate time.</li><br/><li>It might not be applicable, due to irrelevance of the issue on the version (e.g. feature cancelled, whole new behaviour). </li></ul><strong>Solution: </strong><br/>Add a “Next version state” field in the issue tracker, with the following options:<br/><ul><li>OPEN – issue is open for the next version</li><br/><li>INTEGRATED – a fix for the issue was applied in the next version</li><br/><li>CLONED – the issue was already cloned to the next version, so no need to track it here</li><br/><li>UNKNOWN – state in the next version is unknown</li><br/><li>N/A – issue is not applicable in the next version due to irrelevance (see forces above)</li><br/><li>CLOSED – optional. In case QA/others want to signify that the solution was not only integrated but already verified/closed, so no need to do verification once it's cloned to the new version.</li></ul><br/><br/>Let's assume two version branches – V1 and V2, with V1 currently at V1.10. 
V2's first build will be V2.1.<br/>When applicability is determined (usually when R&D analyzes the issue), decide how to proceed with marking/cloning based on the following criteria:<br/><ul><li>If the issue was detected on the V1 branch BEFORE V2.1 was created (meaning the version is still only being developed, QA hasn’t seen it yet, no release notes, etc.), mark the issue as OPEN in the next version, but don’t clone yet.</li><br/><li>If the issue was detected on the V1 branch AFTER V2.1 was created (meaning the version is being actively tested, the contents of each build are being tracked, regression is monitored, etc.), clone the issue to V2 and mark it as CLONED.</li><br/><li>If the issue was detected on the V2 branch, clone it to V1, since V1 is already being tracked closely. </li><br/><li>Whenever an issue is not applicable to the next version, mark it as N/A in the next version.</li></ul><br/>NOTE: Most issues on the V1 branch will be detected BEFORE V2.1 is created, but several will indeed be detected while both versions are being actively maintained. (Hopefully the 80/20 rule applies here.) <br/>NOTE: Issues detected on the newer V2 branch before they were seen on V1 are usually a result of additional QA coverage, or a stroke of luck (another type of QA coverage). This is the minority case here.<br/><br/>When a solution is being integrated to V1, aim to promote it to the V2 branch as well. If the issue was cloned, perform the relevant workflow for the clone as well. If it was only marked as OPEN, mark the issue as INTEGRATED in the next version. <br/>If a solution was found for V1 but its integration is delayed due to CCB approval or any other process which is heavier for a frozen branch, integrate it to V2 and mark it as INTEGRATED. 
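The marking/cloning rules above can be sketched in code. This is a minimal illustration under stated assumptions — the names (`NextVersionState`, `triage`) are hypothetical and not tied to any particular issue tracker's API — for an older branch V1 and a newer branch V2:

```python
from enum import Enum

class NextVersionState(Enum):
    """Values for the suggested "Next version state" custom field."""
    OPEN = "open"              # issue is open for the next version
    INTEGRATED = "integrated"  # a fix was applied in the next version
    CLONED = "cloned"          # already cloned; tracked there, not here
    UNKNOWN = "unknown"        # state in the next version is unknown
    NOT_APPLICABLE = "n/a"     # irrelevant in the next version
    CLOSED = "closed"          # optional: integrated and already verified

def triage(detected_on: str, next_delivered_to_qa: bool, applicable: bool):
    """Decide (state to record on the original issue, whether to clone now).

    detected_on: "V1" (older branch) or "V2" (newer branch).
    next_delivered_to_qa: True once V2.1 exists and QA actively tracks V2.
    """
    if not applicable:
        return NextVersionState.NOT_APPLICABLE, False
    if detected_on == "V2":
        # V1 is already being tracked closely, so always clone back to it
        return NextVersionState.CLONED, True
    if next_delivered_to_qa:
        # V2 is actively tested/tracked: clone immediately
        return NextVersionState.CLONED, True
    # V2 is still only in development: just mark it, defer cloning
    return NextVersionState.OPEN, False
```

The deferred clones would then be created in bulk when the first V2.1 build is delivered to QA, by going over everything still marked OPEN.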
<br/><br/>When delivering the first V2.1 build to QA, go over all issues marked as OPEN for the next version (those for which a fix wasn’t already integrated on both V1 and V2) and clone them to V2.<br/>When QA wants to reverify all issues that were integrated, clone all INTEGRATED issues as well, but avoid cloning CLOSED issues (optional).<br/><br/><strong>Resulting Context: </strong><br/><ul><li>Versions already being tracked show the full list of applicable issues and their state.</li><br/><li>Versions yet to be tracked will show the full list of issues once tracking is started.</li><br/><li>Overhead of cloning is minimized to the periods of time when two or more versions are being tested/tracked concurrently. </li></ul><br/><br/><br/>Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com1tag:blogger.com,1999:blog-15467294.post-1155551058195903382006-08-14T13:24:00.000+03:002006-08-14T13:24:43.310+03:00Process Flow PatternsFollowing up on my "patterns for issue tracking" post, here is deeper documentation of some of the Process Flow patterns. I will try to follow up from time to time with documentation of more patterns. Once the knowledge base is more or less complete I will probably consolidate it into an article/whitepaper/wiki format.<br /><br /><strong><u>Pattern: Configuration Control Board</u></strong><br /><strong>Aliases</strong>: Change Control Board, Configuration Management Board, CCB<br /><strong>Context</strong>: A development group is trying to control the content/configuration of its product.<br /><strong>Problem: </strong><br />Conflicts between different stakeholders (PM, QA, DEV, etc.) and their motives can make the answer to “what is best” for the product version a complex one, and the group needs to provide the best business answer considering all aspects.<br /><strong>Forces</strong>:<br />• Every Change Request (CR) has a price – some sort of regression risk, depending on the scope and delicacy of the change. 
The risk is accompanied by the testing effort needed to verify/close the CR.<br />• Some CRs are required to meet release criteria.<br />• CRs for bug fixes potentially improve stability.<br />• CRs for enhancements/features potentially increase user satisfaction or open new markets.<br />• Time-to-market pressure pushes for the fastest possible implementation of each CR.<br />• Developers want to implement CRs according to “good architecture practices”.<br /><strong>Solution</strong>: <br />Classical CCB (Need to find the authoritative definition…)<br />But in general:<br />Define a board comprised of stakeholders for the product, including engineering, business/user (PM), QA, and management. Stakeholders should be knowledgeable and have enough authority in their domain. This is the CCB.<br />CRs will be submitted to the CCB by engineering. They will be discussed by the CCB, in either periodic or ad-hoc meetings, and a decision will be made and communicated to the relevant parties.<br />Decisions take into consideration the pros and cons of each CR and the context of the product/version, and represent a business decision.<br /><strong>Resulting Context: </strong><br />• CRs cannot be committed/completed immediately, but need to be queued.<br />• Once approved, CRs should be completed/committed by either the original engineer or an RE (Release Engineer).<br />• Rejected CRs will be completed/committed for future versions or dropped altogether.<br />An issue tracking system enables streamlined CCB operation and tracking of its decisions.<br /><br /><strong><u>Pattern - Distributed Configuration Control Board</u></strong><br /><strong>Aliases </strong>- CCB Proxy<br /><strong>Context </strong>- <br />A development group is trying to control the content/configuration of its product, without slowing down or losing too much context.<br /><strong>Problem </strong>- <br />In a classic CCB, the latency between submitting an issue to the CCB and its approval/rejection is significant (there is a limit 
to the feasible frequency of CCB meetings, even when willing to be ad-hoc).<br />During this time the CR is not integrated, losing ContinuousIntegration time and conflicting with “Merge Early and Often”.<br />In addition, the engineer gets farther and farther away from the context of the CR as he takes on other work.<br />Also, the time necessary for discussing all CRs in CCB meetings is expensive, considering the number of members and the depth required to make intelligent decisions.<br /><strong>Forces</strong>:<br />• Wish to minimize the time between CR readiness and commit time:<br />o Meet other possibly conflicting CRs as soon as possible (Merge Early and Often)<br />o Deal with issues as close to context as possible (minimize context-switch cost)<br />o Raise engineers’ satisfaction from “completed” work. Minimize “friction”.<br />• Many issues are “no-brainer” decisions that don’t require a full CCB.<br />• Wish to minimize time spent in CCB meetings.<br />• Wish to minimize mistaken judgment calls due to lack of the full picture or mature consideration.<br /><strong>Solution: </strong><br />Train/assign CCB Proxies, who should be aware of the CCB criteria for decisions and should be able to either reach a decision or know when to wait for the full CCB.<br />These CCB Proxies should monitor the queue of CRs submitted to the CCB and dispatch CRs according to the CCB criteria, or converse with the CR owner, or other stakeholders if necessary, to get more information.<br />CCB Proxy effectiveness should be reviewed periodically according to the following criteria:<br />• Adherence to CCB criteria<br />• Results – how many regressions, and whether the CCB would have made a different decision<br />• Intimate review of random interesting decisions<br /><br /><strong>Variants</strong>:<br />• Dispatch the CR queue according to engineering domain – a proxy for each domain, usually a manager in that domain.<br />• Dispatch the CR queue using a peer system – a peer proxy for each 
domain, to avoid the situation where a manager approves his own group’s work (a sort of “peer review” system)<br />• PM is the CCB Proxy<br />• The lead QA stakeholder is the CCB Proxy<br /><strong>Resulting Context</strong><br /> • 80% of CRs should be dispatched/approved very quickly (decide on an SLA). 20% will follow the classic CCB frequency.<br />• CCB meetings will be shorter and more focused (to the relief of the attendees…), and potentially their frequency can be increased.<br /><strong>Related Patterns - </strong>CCB, Merge Early and Often (SCM)<br /><br /><strong><u>Pattern: Hierarchical Triage for Incoming Issues</u></strong><br /><strong>Context</strong>: New issues (bugs/feature requests) are opened by interested stakeholders (QA, Customer Support, DEV, PM). Since resources are limited, some business judgment should be applied to decide which issues should be accepted into the work queue of which version (if any), and with what priority compared to other issues.<br /><strong>Problem</strong>: Cannot rely on engineering alone to come up with the business decision; OTOH, waiting for PM or some sort of CCB committee introduces much latency/bureaucracy into the process.<br /><strong>Forces:</strong><br />• Wish to start working on high-priority issues soon, and avoid working on lower-priority issues while waiting for processing.<br />• Wish to have correct priorities and control the version contents (see CCB).<br />• Wish to minimize time in the decision queue.<br /><strong>Solution: </strong><br />Priority decisions should be assigned to the CCB process, using the same CCB Proxies described in “Distributed CCB” to dispatch the incoming issues queue.<br />Criteria for priority and version contents should again be decided and documented beforehand. 
They form part of the “values” for decisions made by the proxies.<br />Issues which require more elaborate discussion shall be discussed in a periodic “Triage” meeting (can be part of the CCB meeting, or a separate meeting).<br /><strong>Resulting Context: </strong><br />• 80% of issues should be prioritized very quickly (decide on an SLA). 20% will follow the “triage” meeting frequency.<br />• A minimal number of issues enters the work queue by mistake.<br />• Minimized feeling of bureaucracy among issue reporters and assignees.<br /><strong>Related Patterns</strong>: Distributed CCB, CCB<br /><br /><strong><u>Pattern: UnderstandBeforeSchedule</u></strong><br /><strong>Context: </strong>In classic issue tracking environments, issues are reported and then scheduled for work (in a version). Some of the aspects of an issue include scope of change, estimated effort, and impact on stability. This pattern deals with having sufficient input for the scheduling decision.<br /><strong>Problem: </strong>When scheduling is done without sufficient information regarding scope/estimated effort/impact, time will be spent on handling issues, only to understand later that they cannot be committed to the version (mainly due to CCB criteria). This is a waste of resources, and a source of frustration among the staff.<br /><strong>Forces: </strong><br />• Scheduling effectively requires considerable input, which might require actual investigation/analysis by an engineer/developer.<br />• Investigation/analysis by engineers/developers is usually part of the work done AFTER scheduling the issue for one of the versions.<br />• Engineers/developers apply pressure to commit issues they have already solved, even to the detriment of the project's health. Part of human nature.<br />• Tracking issues which require analysis is difficult when they are all in the same “unscheduled/new” state/queue.<br /><strong>Solution: </strong><br />Add an “investigating” state/queue to the workflow. 
Issues should be in this state when they are pending an investigation by their owner. The exit criterion from this state is having the required input for the scheduling process.<br />New issues can go to this state when insufficient scheduling input is available. When the scheduling input is available (whether when reported, after analysis, etc.), the next step is to schedule. Who schedules, and according to what flow, is out of the scope of this pattern.<br />“Investigation” work stops when scheduling input is available, unless the work necessary to solve the issue is only another minimal step, in which case the work can be done all the way up to “resolve” (committing depends on the codeline policy and whether CCB approval is required).<br /><strong>Resulting Context: </strong><br />• Added “investigating” state/phase/queue in the issue workflow.<br />• Use either custom fields or comments to track the relevant scheduling input, according to the level of formality/tracking required.<br />• Engineers/developers are comfortable with providing the analysis/investigation data without going all the way to resolving the issue, knowing that the aim of the process is to utilize their time effectively.<br />• Shortcuts can be made whenever investigation is redundant.Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-1155030205554562702006-08-08T12:43:00.000+03:002006-08-09T12:17:11.800+03:00Patterns for issue trackingI recently spent some time devising methodologies for the software development lifecycle at my company, dealing with SCM (Version Control) and Issue Tracking.<br /><br />I'm a big fan of patterns. 
My first encounter with them was with the POSA series (<a href="http://www.amazon.com/exec/obidos/redirect?link_code=as2&path=ASIN/0471958697&amp;tag=rndmgmttrblog-20&camp=1789&creative=9325">Pattern-Oriented Software Architecture, Volume 1: A System of Patterns</a> / <a href="http://www.cs.wustl.edu/~schmidt/patterns-ace.html">http://www.cs.wustl.edu/~schmidt/patterns-ace.html</a>) when working on distributed systems.<br /><br />As a fan of reuse, this was quite an important finding.<br /><br />Later I encountered the SCM patterns. I read <a href="http://www.cmcrossroads.com/bradapp/acme/branching/">http://www.cmcrossroads.com/bradapp/acme/branching/</a> by Brad Appleton and understood, yet again, that much of what we were doing well was a pattern, and what we were doing wrong and looking to improve was an anti-pattern. I also read his book <a href="http://www.amazon.com/exec/obidos/redirect?link_code=as2&path=ASIN/0201741172&amp;tag=rndmgmttrblog-20&camp=1789&creative=9325">Software Configuration Management Patterns: Effective Teamwork, Practical Integration</a>.<br /><br />Software Reconstruction Patterns (<a href="http://www.cmcrossroads.com/bradapp/acme/repro/SoftwareReconstruction.html">http://www.cmcrossroads.com/bradapp/acme/repro/SoftwareReconstruction.html</a>) are a related, useful family of patterns.<br /><br />I also encountered organizational/process patterns, but I admit to not grokking the concept fully so far (in the todo list...). 
See <a href="http://www.ambysoft.com/processPatternsPage.html#FAQ%20What%20are%20Process%20Patterns">http://www.ambysoft.com/processPatternsPage.html#FAQ%20What%20are%20Process%20Patterns</a>.<br /><br />Now, while trying to devise the Issue Tracking methodology, starting with baseline documentation of how each group (recall we are one R&D group acquired by another) does its work, I felt the need for patterns in this domain, and so far haven't been able to find any.<br />So, I decided that while keeping on the lookout for a pattern repository for this domain, I would start documenting patterns on my own, and try to come up with a draft of the issue tracking pattern family. I'm sure it will be useful to me in the future. Hopefully, via discussion in the right community, it can evolve into a public body of knowledge.<br /><br />Anyhow - the patterns I've thought of so far are below. I now see that one of the greatest challenges is naming them right - so they are generic enough, yet still specific to the context you are talking about. 
I'm trying to take some guidelines from the Gang of Four definition (see <a href="http://en.wikipedia.org/wiki/Design_pattern_%28computer_science%29">http://en.wikipedia.org/wiki/Design_pattern_%28computer_science%29</a>).<br /><br /><h1>Taxonomy - Categories</h1> <h2>Deliverables Generation</h2> <h3>AutoInternalReleaseNotes</h3> <h3>AutoApplicableWorkaroundsList</h3> <h2>Process Flow</h2> <h3>Hierarchical Triage for Incoming Issues</h3> <p>Aim to make a distributed decision on 80% of the issues according to pre-discussed policies, but have a streamlined process for tracking and reaching a wise decision on the remaining 20%.</p> <h3>Resolve->Integrate->Release completed issues</h3> <h3>Understand scope and impact before committing to schedule</h3> <p>Be able to track issues which need work in order to be scheduled, but are NOT to be solved unless really trivial; instead, raise them for a schedule decision/discussion.</p> <h3>Close everything</h3> <p>Have a closure phase for completed (fixed) issues as well as duplicates, invalids, wontfixes, etc.</p> <h3>Match and document the actual workflow between people</h3> <ul><li>Give leads/managers the ability to review work by their people and sign off on it (or reject it)</li> <li>QA Lead confirms new bugs from QA</li> <li>DEV Lead integrates fixes resolved by his people</li></ul> <h3>Commit approval / CCB activity / code review processes should be enabled by the issue tracker workflow</h3> <h3>Ownership is NOT a state. Current action phase IS.</h3> <p>Waiting for QA Reproduction - state or ownership?</p> <h2>Relationships between issues</h2> <h3>Track symptoms separately from change tasks?</h3> <h3>How/when to divide issues</h3> <h3>Issue equivalent of "Release branch"</h3> <p>How to deal with issues that are relevant for multiple versions, where their state might be different for each version, but most of the data is shared?</p> <h2>Issue Meta-Data</h2> <h3>Track "resolved in" version automatically</h3> <h3>Establishing priority based (among other things) on Severity</h3> <h3>Track the stage at which the bug was opened</h3> <p>Allows understanding of QA/DEV effectiveness at developing/releasing quality software.</p> <h3>Track reproducibility and reproductions of the issue</h3> <p>Reproduce cases - via a link to test management? Via sub-issues linked to the parent?</p> <h3>Keywords might be better than Custom Fields</h3> <h3>Discern "introduced in" from "detected in"</h3> <h2>Interface to other processes</h2> <h3>Interface to SCM</h3> <ul><li>Integration with Task-Level Commit</li></ul> <h3>Interface to Test case management</h3> <ul><li>Track a test case for each issue<ul><li>if a test case opened the issue - to know what to run to test/verify/close</li> <li>if from the field or exploratory testing - to track the process of adding it to the regression suite</li></ul></li></ul> <h3>Interface to Project Management / Use as project management</h3> <h3>Interface to CRM</h3> <h2>Usability</h2> <h2>Useful Metrics</h2> <h3>AutoCustomerReleaseNotes</h3> <h3><a href="http://www.ayeconference.com/wiki/scribble.cgi?read=FaultFeedbackRatio">FaultFeedbackRatio</a> (Regression rate)</h3> <h3>Rate of bugs fixed in the version they were opened for (&gt;70%)</h3> <h3>Rate of bugs detected in the field (&lt;5%?)</h3> <h1>Anti-patterns</h1> <h2>Metering people via bug counts</h2> <h2>Overloading states/fields for multiple purposes</h2> <h2>Over-centralization of decision making</h2> <p>Don't let a workflow with many steps fool you into thinking that it requires many people. Use steps to track where you are. Use assignment to track who holds the issue. Don't assign upwards unless necessary.</p> <h2>Tracking and metering by Components?</h2> <p>See <a href="http://www.anyware.co.uk/2005/2006/07/27/jira-issue-tracking-meets-tagging/">http://www.anyware.co.uk/2005/2006/07/27/jira-issue-tracking-meets-tagging/</a></p> <p>This is still very much a work in progress, but any comment or help is very much welcome.</p>Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com1tag:blogger.com,1999:blog-15467294.post-1155008435477681852006-08-08T06:40:00.000+03:002006-08-08T07:14:53.240+03:00What the other guys brought into the party...As I mentioned earlier, before we were able to finalize our new development environment we were gobbled up (acquired) by another company, about 4 times the size of our group.<br /><br />In the area of issue tracking, the bigger company was using TestDirector with some customizations, but their processes were actually quite simplistic, and weren't enabling a truly effective R&D process.<br />For test case management, they are using plain old Word/Excel but are now open to other options.<br /><br />In the area of SCM, by the way, both companies were using good ol' CVS and were quite sick of it. 
More on that later...<br /><br /><br />With this as the baseline, upcoming posts will try to describe the process of integrating SCM and issue tracking, and of choosing a test case management solution that is agreeable and effective for everyone.Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-1155008359020499902006-08-08T06:39:00.000+03:002006-08-08T07:11:48.546+03:00Test Automation!At some point we understood we must have a robust test automation harness that can at least cover our smoke and regression tests. This will help us feel more confident in our releases in less time, and allow us to meet the business needs.<br /><br />As ours is an appliance-based file-system product, in essence an IT infrastructure product, all of the commercially available harnesses from CA, Mercury and the like are useless: they focus on GUI/Web automation, while we need API automation and the ability to run and control file-system operations and file-system testing tools. 
<br />We considered home-grown approaches, but decided that the time-to-market was too long for our needs.<br />We considered adopting STAF/STAX (<a href="http://staf.sourceforge.net/index.php">http://staf.sourceforge.net/index.php</a>), but again the custom work needed around it was estimated to take too long, and required human resources and expertise we didn't have and couldn't find in the neighbourhood.<br /><br />What we eventually chose was a testing <a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://www.aquasw.com/Images/RunnerReporterSmall.gif"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 320px;" src="http://www.aquasw.com/Images/RunnerReporterSmall.gif" alt="" border="0" /></a><br />automation harness called Aqua<br />(<a href="http://www.aquasw.com/">http://www.aquasw.com/</a>), and we are very satisfied with it.<br /><br />It still requires significant customization/development in project mode, rather than being an off-the-shelf product, but for some situations it's the best and only available approach today, and it's much better than developing your own from scratch or testing manually.Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com1tag:blogger.com,1999:blog-15467294.post-1155007961371117312006-08-08T06:32:00.000+03:002006-08-08T06:54:03.013+03:00Test Case Management progress - TD?About 8 months ago we chose TD (TestDirector) for test case management: we needed to deploy something fast, the Director of QA I brought in had experience with it, and the open-source tools we looked at didn't convince us at that point; they seemed a risk we didn't want to take, considering the other work necessary in other QA areas at the time. <br><br>NOTE: I wonder how much money Mercury made just from people in this situation... <br>Anyhow, we are now documenting test cases and managing testing progress with the tool, with moderate satisfaction. 
The lack of integration with our issue tracker (both Bugzilla and JIRA) is a concern, as are the high per-user admission price and the feeling that we are working with a glorified MS Access inside a web browser (I wonder why...). <br><br>The lack of support for Linux machines and Firefox, and in general the fact that Mercury (sorry, HP now... <a href="http://www.mercury.com/us/company/pr/press-releases/072506-hp-acquires-mercury.html">http://www.mercury.com/us/company/pr/press-releases/072506-hp-acquires-mercury.html</a>) evidently considers itself more of an IT Governance / BTO company and has outgrown its QA roots, don't make this move seem very strategic. <br><br>I'm not saying TD is bad; it's just not the right solution for small, dynamic groups that want to integrate new solutions when they solve business problems, and whose environment is largely open source or small vendors, not the gorillas of the enterprise-tools segment. <br><br>We made a tactical move knowingly, but there might be room for change, as we are not fully invested in the tool at this point.<br><br>Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-1155007486317070922006-08-08T06:24:00.000+03:002006-08-08T07:15:18.126+03:00JIRA Progress - an easy decision...Sometime before the acquisition, we decided on JIRA as the issue tracker and started evaluating it, including history migration, a bit of customization, and trying to understand what processes need to be in place for us to be effective and deal with the complexities we were seeing in real-world software development and maintenance.<br />This was an easy decision: the more we look at JIRA, the more we like it. 
The main advance in the JIRA space this year was, in my view, the plethora of plugins that became available; they provide functionality we needed and convinced us of the strategic value of the JIRA/Atlassian ecosystem.<br />Another easy decision was migrating to Confluence, the Atlassian enterprise wiki, as our knowledge base and documentation platform.<br /><br />Another thing we are seeing is commercial vendors in the development-tools space (SCM, build management, etc.) integrating with Atlassian. This is very positive and not surprising, considering the openness of the platform, its reach into many customers, and, I guess, the BizDev efforts of the Atlassian guys, who I think aim to be best-of-breed for issue tracking and wiki while relying on very strong partnerships with every sexy major player/community in the space (not necessarily the heavyweights like Mercury/CA) to provide the other functionality people expect.Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-1155006922357402972006-08-08T06:15:00.000+03:002006-08-08T06:53:19.693+03:00Update on many fronts... JIRA, Aqua, org changes and the like...Long time no post. <br><br>I'll try to recap where I stand regarding the issues I started talking about a year ago, at least for a bit of closure on the issue tracking and test case management areas.<br>Let's see if I can keep it up this time... <br><br>In any case, my company was gobbled up by another software company, so our plans were disrupted somewhere in the middle of upgrading our development environment infrastructure, but one cannot complain... Now the new company's R&D department needs issue tracking and test case management, with the added challenge of integrating two methodologies, histories, and views of the world. I was tasked with this project. In the following posts I will try to address several aspects of it. 
<br><br>Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0tag:blogger.com,1999:blog-15467294.post-1124179148606139572005-08-16T10:59:00.000+03:002005-08-16T10:59:08.610+03:00Building a Test Case Management solutionI've recently been looking at how to build a reasonable test case management solution (good != Word documents) for our company. I quickly learned this is not a very developed field. <a href="http://www.mercury.com/us/products/quality-center/testdirector/">Mercury TestDirector</a> seems to dominate the commercial field, with the other QA product companies (CompuWare, IBM-Rational) following suit, but not there yet.
<br />
<br />Test Case Management as I understand it deals with the following objects:
<br /> * Test Plans
<br /> * Test Cases
<br /> * Test Labs
<br /> * Test Schedule
<br />
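<br />To make these objects concrete, here is a minimal sketch in Python of how they might relate. The class and field names are my own illustration, not the schema of any particular tool:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    """A single test: what to exercise, and which requirements it covers."""
    case_id: str
    title: str
    requirement_ids: List[str] = field(default_factory=list)

@dataclass
class TestLab:
    """A test bed: the environment (machines, configuration) cases run on."""
    lab_id: str
    description: str

@dataclass
class TestPlan:
    """A named grouping of test cases for a version or milestone."""
    plan_id: str
    name: str
    cases: List[TestCase] = field(default_factory=list)

@dataclass
class TestSchedule:
    """When a plan is to be executed, and on which lab."""
    plan: TestPlan
    lab: TestLab
    start_date: str

# Example: a smoke-test plan for a hypothetical version 2.0
smoke = TestPlan("TP-1", "v2.0 smoke test")
smoke.cases.append(TestCase("TC-1", "Mount filesystem", ["REQ-7"]))
schedule = TestSchedule(smoke, TestLab("LAB-1", "Linux appliance"), "2005-08-20")
```

<br />Most of the interesting design decisions live in the links between these four objects, which is exactly what the deliverables list tries to pin down.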
<br />The main deliverables expected from a Test Case Management solution are:
<br /> * It should provide visibility into the testing process - both the plan and the actual execution.
<br /> * It should be risk-oriented, focusing on the riskier aspects of the system first.
<br /> * It should facilitate day-to-day management of the testing team.
<br /> * It should present a coherent picture connected to the Issue Tracking tool (what bugs are blocking tests, what tests need to be ready for a certain release, etc.)
<br /> * It should present a coherent picture connected to the automated testing frameworks (if the smoke/sanity test failed during the night, the Test Case Management solution should reflect that without requiring the QA engineer to manually copy+paste the information).
<br /> * Tracking pass/fail status for each test
<br /> * Tracking pass percentage per module, per version, against milestone requirements
<br /> * Dynamic priority management that affects the testers' to-do lists
<br /> * Coverage of requirements by test cases (should each change request be linked to at least one test case? should closing a change request require a successful test run?)
<br /> * Management of test beds relevant for each test case
<br /> * Manage test cases that are blocked by other change requests (bugs/enhancements)
<br /> * Accessibility to test case information from each bug / change request.
<br /> * Accessibility to test logs from the relevant test case instance
<br /> * Manage the relationships between different test cases - it's quite useful to create dependencies, e.g. run the login test case, then run the change-password test case.
<br /> * MANY test case instances for ONE test case in ONE version
<br /> * ONE requirement can be tested by MANY test cases
<br /> * ONE test case may test MANY requirements?
<br /> * ONE test script may be used in MANY test cases
<br /> * ONE test case may run in MANY configurations
<br /> * Time estimate for coverage of a version
<br /> * Last time a specific test was run, by whom, and with what results
<br /> * specific test case results across builds
<br /> * Ability to share the test cases with an OEM or remote team
<br /> * Version control for test scripts (link to the SCM)
<br /> * What version of the test script was used for each test case instance?
<br /> * What version of the test case was used for each test case instance? (if we added a sequence and tests suddenly started to fail, it doesn't necessarily mean a regression in the software)
<br />
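<br />The cardinality requirements above (one test case, many instances; requirements and test cases many-to-many) can be sketched as a small in-memory link model. All names and data here are illustrative and assume nothing about any specific tool:

```python
# ONE test case -> MANY test case instances (one per build/configuration run)
instances = [
    {"case": "TC-1", "version": "2.0", "build": 113, "config": "linux", "result": "pass"},
    {"case": "TC-1", "version": "2.0", "build": 114, "config": "linux", "result": "fail"},
    {"case": "TC-2", "version": "2.0", "build": 114, "config": "osx", "result": "pass"},
]

# MANY-to-MANY link table: a requirement is tested by many cases,
# and a case may test many requirements.
covers = [("REQ-1", "TC-1"), ("REQ-1", "TC-2"), ("REQ-2", "TC-1")]

def uncovered(requirements):
    """Requirements with no linked test case -- a coverage gap to close."""
    covered = {req for req, _ in covers}
    return sorted(set(requirements) - covered)

def latest_result(case_id):
    """A specific test case's most recent result across builds."""
    runs = [i for i in instances if i["case"] == case_id]
    return max(runs, key=lambda i: i["build"])["result"]
```

<br />With just these two link tables you can already answer several of the questions above: coverage gaps (uncovered requirements), a test case's results across builds, and which configurations a case ran in.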
<br />My understanding of this is based on some resources I've been monitoring (see the full list at <a href="http://del.icio.us/yyeret/test_management">http://del.icio.us/yyeret/test_management</a>).
<br />Some of the noteworthy ones are:
<br /> * <a href="http://www.stickyminds.com/s.asp?F=S6268_ART_2">StickyMinds.com : Article info : Reengineering Test Management</a>
<br /> * <a href="http://www.stickyminds.com/s.asp?F=S5071_MAGAZINE_2">StickyMinds.com: Bringing Your Test Data to Life</a>
<br /> * <a href="http://www.rhonabwy.com/mt/archives/2005_03.html">Rhonabwy</a> writes about his own experience with the open source test case management tools from time to time
<br /> * <a href="http://opensourcetesting.org/testmgt.php">OpenSourceTesting - Test Management Tools</a> is the list of test case management tools everyone refers to.
<br />
<br />Based on the information I found I've been looking at <a href="http://testmaster.sourceforge.net/">TestMaster</a>, <a href="http://testlink.sourceforge.net/docs/docs/features.php">TestLink</a> and <a href="http://sourceforge.net/projects/qatraq/">QATraq</a>, but didn't install any of them yet. The other ones really seem either dead or not ready yet.
<br />
<br />I'm still trying to understand whether the correct approach is to get a test management tool and try to connect it to your issue tracker, or to get a really customizable issue tracker (e.g. <a href="http://www.atlassian.com/software/jira/">JIRA</a>) and build what you need of a test management tool there. I'm still weighing the pros and cons, and trying to understand how much of test case management is actually an issue-tracking type of activity, and which parts are not. This is quite uncharted ground from what I've found so far, and I understand that part of having a "reasonable" solution is to skip some of the requirements and vote for simplicity.
<br />
<br />A good friend who knows what he's doing when it comes to managing QA efforts repeatedly tells me to avoid the bells and whistles and the complex reports, metrics and processes, and to go for simple, worthwhile metrics, the reports and flows necessary to support them, and a focus on substance. That's a big part of what I consider a "reasonable" solution.
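<br />As an example of such simple, worthwhile metrics, here is how the regression rate (FaultFeedbackRatio), the rate of bugs fixed in the version they were opened for, and the rate of bugs detected in the field could be computed from exported issue records. The field names are my own assumption, not any tracker's actual schema:

```python
# Hypothetical issue records exported from a tracker; field names are illustrative.
issues = [
    {"id": 1, "regression": True,  "opened_for": "2.0", "fixed_in": "2.0", "found_in_field": False},
    {"id": 2, "regression": False, "opened_for": "2.0", "fixed_in": "2.1", "found_in_field": False},
    {"id": 3, "regression": False, "opened_for": "2.0", "fixed_in": "2.0", "found_in_field": True},
    {"id": 4, "regression": True,  "opened_for": "2.0", "fixed_in": "2.0", "found_in_field": False},
]

def regression_rate(bugs):
    """FaultFeedbackRatio: share of bugs that are regressions."""
    return sum(b["regression"] for b in bugs) / len(bugs)

def fixed_in_opened_version_rate(bugs):
    """Share of bugs fixed in the version they were opened for (target: over 70%)."""
    return sum(b["fixed_in"] == b["opened_for"] for b in bugs) / len(bugs)

def field_detection_rate(bugs):
    """Share of bugs detected in the field (target: under 5%)."""
    return sum(b["found_in_field"] for b in bugs) / len(bugs)
```

<br />Tracked weekly against targets like &gt;70% fixed-in-version and &lt;5% field detection, these three numbers alone already give a useful health check without any complex reporting machinery.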
<br />
<br />
<br />Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com2tag:blogger.com,1999:blog-15467294.post-1124176482486714062005-08-16T10:14:00.000+03:002005-08-16T10:23:05.590+03:00Customer Relationship Management for a small growing startupEvery startup that reaches the stage where it has customers realizes at some point that managing the customer relationship throughout the sales life cycle (not just pre-sale but also post-sale) is a process that requires attention.<br />I've personally seen a few cases where this realization came as a reflection on dropped balls and hurt feelings on all sides.<br /><br />Anyhow, we are now looking for a solution that will allow tracking customer issues and known product issues, and will also interface with the internal issue tracker (we use Bugzilla but are considering JIRA for that). We are looking at <a href="http://supportforce.com">SupportForce</a> since we already use <a href="http://www.salesforce.com/">SalesForce</a> for the sales tracking aspects.<br />While reading <a href="http://www.infoworld.com/article/05/08/08/32FEosscrm_1.html">an InfoWorld article</a> I found <a href="http://www.sugarcrm.com/crm/">SugarCRM</a>, which seems to be an open-source alternative. No idea how it compares to the big-league players yet, and no idea what we really need.<br />I admit this CRM area is kind of new to me, and I think I need to learn some more about it in order to make sure we build the right foundation here so everyone is satisfied.<br /><br />In another company I worked for, the financial/IT guys chose <a href="http://www.eshbel.com/crm.htm">PriorityCRM</a>, which all of us engineering guys thought was quite pathetic and hopeless. Sort of like Magic on bad drugs. 
I guess choices in the CRM space are affected by the other adjacent modules, and all too often the choice is made according to the convenience of the financial/operations people, who need to track bills, inventory, etc., and not sufficiently according to the requirements of the Professional Services and Engineering departments. I honestly cannot tell what is more important from my current perspective; I need to take a more complete look at the picture to really say.<br /><br />I decided I will try to learn a bit about <a href="http://www.salesforce.com/">SalesForce</a> from one of our sales guys, to see what he gains from using such a product and what his expectations are. That will give me some perspective.<br /><br />I'll probably continue this thread as we progress.Yuvalhttp://www.blogger.com/profile/02726357777842527103noreply@blogger.com0