The idea of biomarking, a technique used to follow individual molecules through a biological system, can also be applied to survey design. By creating one item that captures the overall meaning, or bottom line, of a survey, we can statistically examine how its variance interacts with that of the other items, and set the stage for leverage analysis and resource allocation via multiple regression.

Marking a molecule of interest with a radioactive isotope, and then determining where it goes and what happens to it, is a widely used technique in the biological sciences. For example, the cornea of the eye is constantly undergoing new growth and regeneration. This intracellular process requires water. If tritiated water (water labeled with tritium, a weakly radioactive isotope of hydrogen) is consumed, the amount of radiation emanating from the cornea can be measured, and a baseline cellular proliferation rate can be determined. A next step might be to introduce experimental drugs and measure their impact on cellular proliferation and regeneration in the cornea. A drug that inhibits growth in the cornea may have value in cancer treatment, as it may inhibit growth in other tissues as well.

Survey Design

In an analogous fashion, this procedure can be applied to survey design. Good surveys are created from a pool of items that are related in some way to the research question at hand. The research question(s) will in turn have come from a hypothesis, a strategic business plan, a departmental objective, a process requirement, or a similar source addressing an area that requires data. Assuming that concerns with validity and reliability have been addressed, the final draft will be composed of the items and elements that represent the best thinking about the components required to answer the question.

For example, if I am concerned with customer satisfaction regarding one of my services, I need to consider quality, quantity, timing, availability and the transaction environment, just to name a few. My survey will be composed of items measuring each of these things. Similarly, I may want to measure a theoretical construct, such as problem solving: I would select a problem-solving model, create an item pool measuring each of the steps involved in solving problems according to that model, and build the survey from the best items in the pool. A typical analysis of either survey would include item ranges, frequency distributions, central tendency, variances, standard errors and correlations among the items.
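To make this concrete, here is a minimal sketch of such an item analysis in Python with pandas; the item names, the 1-to-5 rating scale and the simulated responses are hypothetical stand-ins for real survey data.

```python
# A minimal item analysis, assuming hypothetical items and simulated
# 1-5 ratings from 200 respondents in place of real survey data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = ["quality", "quantity", "timing", "availability", "environment"]
data = pd.DataFrame(rng.integers(1, 6, size=(200, len(items))), columns=items)

print(data.describe())        # ranges, central tendency, spread per item
print(data.var())             # item variances
print(data.sem())             # standard error of each item mean
print(data.corr().round(2))   # correlations among the items
```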

Variance Marking

If we take the survey design process one step further, a final item can be generated that captures the overall purpose of the survey. In a customer satisfaction survey it might be “I am satisfied with my experience” or “I will do business here again”; in an employee satisfaction survey it could be “I like my job” or “I like working here.” This kind of item serves as the biomarker, or variance marker in this case, because we are going to follow it around statistically and see how its variance interacts with the variance of the other items.

One intuitive way to do this is to generate a correlation matrix of our variance marker item (VMI) with each of the other items in the survey. A simple form of leverage analysis, this procedure indicates which items are most important to the VMI by the strength of their correlations with it. Squaring each correlation yields the amount of variance the two items share; hence we are looking at the shared variance of each item with the VMI. From a business leverage perspective, the item with the greatest amount of shared variance with the VMI should be exploited if it has a high mean rating, or targeted for scrutiny if its mean rating is low.
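A minimal sketch of this bivariate leverage analysis might look like the following; the item names, the simulated responses and the way the VMI is constructed here are all assumptions made for illustration.

```python
# Bivariate leverage analysis: correlate each item with the VMI, then
# square the correlations to get shared variance. All data are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = ["responsiveness", "competency", "ease_of_use", "efficiency"]
data = pd.DataFrame(rng.integers(1, 6, size=(200, len(items))), columns=items)
# Hypothetical VMI, loosely driven by two of the items plus noise.
data["vmi"] = (0.6 * data["efficiency"] + 0.4 * data["responsiveness"]
               + rng.normal(0, 0.5, len(data))).round().clip(1, 5)

r = data[items].corrwith(data["vmi"])           # item-VMI correlations
shared = (r ** 2).sort_values(ascending=False)  # shared variance with the VMI
print(shared)                                   # items ranked by leverage
```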

Multiple Regression

We shouldn’t stop here, however, if at all possible. The foregoing is a bivariate analysis, meaning that only one variable at a time was compared to the VMI. We have a multivariate data set, however, with the possibility of examining all variables at once. An extension of the bivariate correlation is multiple regression, which, in general terms, examines all sources of variation simultaneously, determines the variance overlap among the items and the VMI, and cancels out duplicate information. The end result is an equation indicating which items are most important, in order and degree of priority, when all items are considered as a whole. The VMI is the marker item, and the regression equation describes in detail how the variance of each survey item interacts with it.
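As an illustration, the sketch below fits such a regression with the statsmodels library on simulated data; the item names, the coefficients used to generate the VMI and the noise level are all hypothetical. Standardizing the variables first makes the resulting weights directly comparable across items.

```python
# Multiple regression of the VMI on all items at once, using statsmodels.
# Item names and the synthetic VMI are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
items = ["efficiency", "responsibility", "ease_of_use", "tracking"]
data = pd.DataFrame(rng.normal(size=(200, len(items))), columns=items)
data["vmi"] = (0.6 * data["efficiency"] + 0.5 * data["responsibility"]
               + 0.2 * data["ease_of_use"] + rng.normal(0, 0.5, 200))

z = (data - data.mean()) / data.std()   # standardize for comparable weights
model = sm.OLS(z["vmi"], sm.add_constant(z[items])).fit()

print(model.params.drop("const").sort_values(ascending=False))  # item weights
print(model.rsquared)   # variance in the VMI explained by all items together
```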

An additional output of a multiple regression analysis is the amount of variance that all items taken together have in common with the VMI. In examining the VMI variance, we see how much of it is shared, or explained, by all of the other items. This statistic is called the multiple R square. If we run a survey and the explained variance of the VMI is 87 percent, the results are, loosely speaking, 87 percent accurate; the multiple R square can be read as an indicator of the precision or accuracy of the results.
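As a worked example: if the standardized VMI has a total variance of 1.00 and the regression leaves a residual variance of 0.13, then the multiple R square is 1 - 0.13/1.00 = 0.87, the 87 percent figure above. (In the regression sketch earlier, model.rsquared reports this value directly.)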

Actionable Information

As a pragmatic exercise in leverage and resource allocation, consider an internal customer satisfaction survey in which managers were asked to evaluate internal processes such as recruitment, purchasing and personnel actions. Process components consisted of items such as staff responsiveness, competency and responsibility, and process ease of use, efficiency and tracking. The VMI was “Are you satisfied with this process?” (This is a particularly difficult case, in that processes often cross functional lines, and the customer, the manager, is required to know more about the process than the service provider does, and is usually held responsible for the outcome.)

The mean rating of the VMI in this survey was low for all processes, suggesting improvement was necessary, and the multiple R square was 75 percent. If the company has budgeted resources to improve these processes, the multiple regression results suggest how the resources should be allocated. First, 25 percent of the budget should be held in abeyance for later use since 25 percent of the variance in the VMI was not explained. If the regression equation yields something like process efficiency: 43 percent, staff responsibility: 39 percent, and process ease of use: 15 percent, with other items contributing negligibly to the VMI, we have a specific formula for action. If the staff responsibility rating were high, with the other important items being low, we know exactly what to exploit, what to fix, and how to allocate the remainder of the budget for process improvement. This formula tells us how much time, money and effort we should be spending on each process component, and implicitly suggests where not to allocate any resources.
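The sketch below shows how those numbers might translate into a budget split; the $100,000 budget is a hypothetical figure, and the three weights are renormalized over the named components, since they sum to 97 percent with the remainder contributed negligibly by other items.

```python
# Hypothetical budget allocation from the regression results above.
budget = 100_000                     # assumed total improvement budget
r_square = 0.75                      # explained variance of the VMI
weights = {"process efficiency": 0.43,
           "staff responsibility": 0.39,
           "process ease of use": 0.15}

reserve = budget * (1 - r_square)    # hold back the unexplained 25 percent
actionable = budget - reserve
total_w = sum(weights.values())      # weights sum to 0.97; renormalize
for component, w in weights.items():
    print(f"{component}: ${actionable * w / total_w:,.0f}")
print(f"held in abeyance: ${reserve:,.0f}")
```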

Results

Assuming we have a good survey and that bias plays no role, the results of the variance marker method will prioritize and weight every item on the survey in order of importance to the VMI. If one item is weighted twice as much as the next item, it is indeed twice as important to the VMI. (We can’t say this from a bivariate analysis, since variance overlap, or duplicate information, is not removed.) If the VMI is created as a bottom-line measure, or as a leading or lagging indicator, the multiple regression results will yield strategic information with a specified precision, indicate what to leverage and what to improve, and provide a good idea of how to allocate resources.

Caution

We know that if 87 percent of the variance is accounted for, then, by subtraction, 13 percent of the variance in the VMI was not explained by the survey items. In other words, we don’t have 100 percent of the information we need to determine why customer satisfaction or employee satisfaction was rated the way it was; we’re missing 13 percent. This missing variance is due to measurement error: we may not have asked all the right questions, the survey may have created a response bias (e.g., older customers or employees answered differently than younger ones), there may be sampling error, and so on. At a minimum, error variance reduces the accuracy of our results. At worst, it can destroy our results altogether.

There are two kinds of error variance: random error and bias, or unsystematic error and systematic error, respectively. Random error is assumed to be distributed normally across all survey respondents and survey items, and it is quantifiable. Its impact is a reduction in accuracy without loss of information integrity. Systematic error variance, on the other hand, cannot be quantified, has an unknown distribution, and wreaks havoc on any results to the point of misinformation and misguidance. Response bias is a typical culprit (leading questions, poorly worded items, improper selection of respondents, etc.), and it compromises information integrity to an unknown degree. This applies to any measurement device, not just surveys. A knowledgeable survey designer will be aware of these pitfalls and take all necessary steps to reduce systematic error variance as far as possible.
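The distinction can be demonstrated with a small simulation: random noise added to the VMI lowers the explained variance but leaves the estimated item weights roughly intact, while a bias correlated with one item distorts the weights themselves. Everything below, including the true weights of 0.7 and 0.3, is simulated for illustration.

```python
# Random vs. systematic error: both fits use the same two items, but only
# the systematic case distorts the recovered weights. Data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x1 = rng.normal(size=n)                  # e.g., process efficiency
x2 = rng.normal(size=n)                  # e.g., staff responsibility
vmi_true = 0.7 * x1 + 0.3 * x2           # assumed "true" item weights

def fit(y):
    X = np.column_stack([x1, x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef.round(2)

print(fit(vmi_true + rng.normal(0, 0.8, n)))  # random error: ~[0.7, 0.3]
print(fit(vmi_true + x2))                     # bias tied to x2: ~[0.7, 1.3]
```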
