Surveys surround us. The questions arrive by mail, face to face in shopping malls, over the telephone, and increasingly over the Internet – a multitude of surveys, yet many fail to achieve their goals. The fault often lies with well-intentioned people who design and administer surveys without the necessary professional background or survey experience.

A few tips may provide some help. And the real-life examples of survey mistakes cited here are caution lights for those involved in gathering information by survey for Six Sigma projects.

Surveys Need Clear and Measurable Goals

Surveys are a powerful tool that can aid effective decision making. They are not an end in themselves. A good survey must have a clear-cut and measurable goal, directly linked to the organization's policy. Here are examples of an effective and a non-effective survey goal:

  • Effective An organization that operates primary health clinics needs to decide on the best allocation of limited funds for improvement. Specifically, the organization wants to decide between investing in waiting room facilities or in training and educating the clerical staff.
  • Non-effective The “how-are-we-doing?” surveys rampant in government and some businesses. These surveys give a “satisfaction grade” to various departments or units. The results of such surveys are often presented and then shelved, with little or no impact on the organization’s future policies.

Be Sure You Can Deliver, Before You Ask

Surveys are often used by the service sector to ascertain customers’ needs and wants. Asking customers what they would like naturally raises customer expectations. Thus you should not ask about services unless you are willing to listen and provide what customers say they want. A company unable or unwilling to eventually provide those services faces a significant drop in customer satisfaction.

Also, this type of survey should be avoided when the service the company already provides is seriously flawed. Asking about new, state-of-the-art services when the basics are not adequately taken care of will only damage the customers' faith.

All Too Easy to Get a Non-Representative Sample

Countless volumes have been written about the need for sample representativeness, but non-representative samples can occur in devious ways. One woman who tends to respond to telephone surveys has recently been phoned daily by survey-takers. A check among (a non-random, non-representative sample of) her colleagues gives credence to the suspicion that pollsters may be making note of those who are willing to respond, in order to poll them again and again. This type of behavior creates biased samples, as people who respond to polls are likely to differ from those who don't in other key aspects.
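A toy simulation makes the danger concrete. The numbers below are entirely hypothetical – they simply assume that frequent, willing responders are more satisfied than the population as a whole – but they show how re-polling the same willing panel skews the estimate:

```python
import random

random.seed(42)

# Hypothetical model: 30% of the full population is satisfied,
# but among people willing to answer repeated polls, 60% are.
population = [1] * 30 + [0] * 70   # 1 = satisfied, 0 = not
frequent   = [1] * 60 + [0] * 40   # the oft-polled, willing responders

def poll(group, n):
    """Estimate satisfaction from n draws (with replacement) from a group."""
    return sum(random.choices(group, k=n)) / n

unbiased = poll(population, 10_000)  # sampling the whole population
biased   = poll(frequent, 10_000)    # re-polling only willing responders

print(round(unbiased, 2), round(biased, 2))
```

However large the sample, polling only the willing panel converges on the panel's answer, not the population's – a bigger biased sample is just a more precise wrong answer.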

Make Sure Respondents Can Really Answer

Sometimes the survey questions cannot be answered by the public. A market research firm asked cell phone owners which carrier had the best customer service. Thirty-nine percent of those asked responded "I don't know." The reason is simple: most people use only one cell phone and one carrier, and therefore cannot compare carriers. Information gleaned from these types of questions is likely to be worthless.

It is important to provide the respondent with the entire range of possible responses. One service group asked, "How much did we improve relative to last year?" The available responses ranged from "somewhat" to "very much." There was no way for anyone to respond that the service quality had not improved or had worsened. Such optimistic questioning not only leads to incomplete information but also damages customer faith.

Construct Those Questionnaires with Care

Sometimes the best planned survey can fail due to a poorly constructed questionnaire form. The vast majority of customer satisfaction surveys use the "Likert scale" (1=very low, 2=low, 3=medium, 4=high, 5=very high, 9=non-relevant). This practice leads some customers to expect this scale on all survey forms, so any deviation from it can seriously damage the validity of the answers. Here are two actual examples of this type of mistake:

  • One researcher used this scale: 1=non-relevant, 2=very low, 3=low, 4=medium, 5=high, 6=very high. The relatively high number of the “1=non-relevant” responses signaled to this author the problem with the scale, and the survey was redone.
  • Another researcher used the standard Likert scale for part of the survey form, but reversed the scale for another part of the form. Because a large number of respondents didn't notice this reversal, the data gathered from this survey had to be discarded.
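Problems like these can often be caught before analysis with a simple sanity check on the coded answers. The sketch below (with made-up data and an arbitrary threshold) flags any response code that dominates a batch – for instance, the suspicious pile-up of "1" answers in the first example above:

```python
from collections import Counter

# Codes for the standard scale described above: 1-5 plus 9 = non-relevant.
VALID_CODES = {1, 2, 3, 4, 5, 9}

def scale_warnings(responses, threshold=0.25):
    """Return {code: share} for codes whose share of answers exceeds threshold."""
    invalid = sorted({r for r in responses if r not in VALID_CODES})
    if invalid:
        raise ValueError(f"out-of-scale codes found: {invalid}")
    n = len(responses)
    counts = Counter(responses)
    return {code: c / n for code, c in counts.items() if c / n > threshold}

# Hypothetical batch where code 1 dominates, as in the first mistake above.
answers = [1] * 40 + [3] * 20 + [4] * 25 + [5] * 15
print(scale_warnings(answers))   # flags code 1 at a 40% share
```

A flagged code does not prove the scale is wrong, but – as in the "1=non-relevant" example – it is a signal worth investigating before the data are trusted.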

Clarity Can Avoid Compounding Troubles

People dislike spending time answering surveys, so the questionnaire must be as simple, clear and attention-grabbing as possible. One survey was designed to assess patient satisfaction in secondary clinics. The first question asked for the patient's overall satisfaction with the clinic and appeared in plain, unmarked text; it was followed by a highlighted table of questions covering the various aspects of the patient's experience in the clinic. Roughly half of the respondents didn't notice the first question and answered only the highlighted questions in the table.

The first question in this survey is of the type termed a "variance marker" by Dr. George Chynoweth, a consultant and teacher of research methodology. Its purpose was to identify which service aspects in the clinic mattered most to the customers. Because of the poorly planned form, not only was the effective sample size for analysis cut in half, but the meticulous group that did answer the first question may well differ significantly from the rest in its satisfaction profile. The result: an effective sample that is both too small and non-representative.
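The cost of a halved sample is easy to quantify. Using hypothetical numbers (400 returned forms, a satisfaction proportion near 50%), the standard margin-of-error formula for a proportion shows the precision lost – and this calculation cannot capture the added, unmeasurable bias from the non-representative remainder:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: 400 forms returned, but only ~200 answered the first question.
full = margin_of_error(0.5, 400)   # about +/- 4.9 percentage points
half = margin_of_error(0.5, 200)   # about +/- 6.9 percentage points

print(round(full, 3), round(half, 3))
```

Halving the sample widens the interval by a factor of the square root of 2 – roughly 41% less precision, before the representativeness problem is even considered.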

Surveying Not a Job for the Inexperienced

In summary, survey planning and administration are not easy tasks. To avoid serious errors in any phase and ensure that valuable data are collected, surveying should be done by people who are well studied, well trained and experienced. If a survey must be done by those without such qualifications, help should be sought from survey professionals, whether inside or outside the company.

About the Author