Custom Fields - Regression
Can you tell, off the top of your head (or via a simple query in your issue tracker), what the regression ratio is in your product for a specific version, and where the regression areas are?
Chances are, the answer is no. The reason is that, out of the box, most issue trackers don't indicate that an issue is a regression, and don't provide causality links.
This means that when you look at a new bug, you must rely on the description/comments to find the regression source, which of course limits your ability to query and manipulate the data.
What can you do? Well, there are two things.
First, you can provide a simple "Regression" custom field, set to true when the issue is understood to be a regression, or more accurately, a new issue caused by another change in the system (and not an issue that was there all along and was just detected through extended QA coverage).
This tells you which issues are regressions, and those are usually the issues you really want to deal with before releasing.
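To make the opening question concrete: once such a field exists, the regression ratio for a version is one query away. Here is a minimal sketch using the Python jira client; the server URL, credentials, project key, and the exact JQL value of the custom field ("True" vs. "Yes" depends on the field type you chose) are all assumptions to adapt to your own instance:

    from jira import JIRA

    # Hypothetical server and credentials - adjust for your instance.
    jira = JIRA(server="https://jira.example.com", basic_auth=("user", "api-token"))

    version = "1.2"
    base_jql = f'project = PROD AND issuetype = Bug AND fixVersion = "{version}"'

    # All bugs fixed in this version, and the subset flagged as regressions.
    all_bugs = jira.search_issues(base_jql, maxResults=False)
    regressions = jira.search_issues(base_jql + ' AND "Regression" = "True"', maxResults=False)

    print(f"Regression ratio for {version}: {len(regressions)}/{len(all_bugs)}")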
What it doesn't do is provide info as to the regression source. The only accurate way to track the regression source is to provide links from the regression back to its cause. This can be done via a "Caused by/Caused" link. Hopefully your issue tracker allows custom links (JIRA does...). If you know which specific issue caused it, fill that in. If you don't know, or it's due to a large feature, just add a placeholder issue and link to that, even if it's just a build number (e.g. FeatureA, build 1.2.1).
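Continuing the sketch above, creating such a link via the Python client could look roughly like this. The "Caused" link type name and the issue keys are assumptions, and the inward/outward direction depends on how the link type's descriptions are defined, so verify it in your instance:

    # Assumes a custom link type named "Caused" with descriptions
    # outward = "caused" and inward = "is caused by".
    # PROD-123 is the change that caused the regression PROD-456.
    jira.create_issue_link(
        type="Caused",
        inwardIssue="PROD-123",   # the cause
        outwardIssue="PROD-456",  # the regression
    )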
Let's assume this information is actually filled in correctly most of the time (not a trivial assumption, actually; those experienced in trying to convince all stakeholders to fill in data they don't really think is useful to THEM will probably nod in agreement here). Now you can look at the SOURCES of regression and try to see whether there is any intelligent conclusion to be drawn. Is it the small, stupid stuff that you expect to be trivial? Is it the hard fixes where you don't do enough code review and integration testing? Are the regressions local, or can an issue in one area cause a chain effect in different modules altogether? Are certain teams producing fewer/more regressions? Are certain modules Pandora's boxes that spawn new bugs/regressions whenever they are touched?
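As a starting point for that analysis, and still assuming the hypothetical setup from the sketches above, you could walk the causality links and tally causes by component (swap in assignee, team, or whatever dimension you want to slice by):

    from collections import Counter

    # Tally regression causes by component, following the "Caused" links.
    causes_by_component = Counter()
    for regression in regressions:
        for link in regression.fields.issuelinks:
            if link.type.name != "Caused":
                continue
            # On the regression, the cause sits on the other end of the link;
            # with the link direction assumed above, that is the inward issue.
            stub = getattr(link, "inwardIssue", None)
            if stub is None:
                continue
            cause = jira.issue(stub.key)  # linked stubs carry only a few fields
            for component in cause.fields.components:
                causes_by_component[component.name] += 1

    for name, count in causes_by_component.most_common():
        print(f"{name}: caused {count} regressions")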
These insights should be leveraged to decide where you want to improve your development process.
NOTE: Even if you are afraid the data won't be collected, try to think along these lines via a less formal review of the regressions in your last big version. Hopefully you can draw some conclusions from what you have at the moment.