NDUS-IR Supports High-Quality Survey Designs & Analyses


The NDUS-IR team takes pride in assisting our collaborators with the creation and analysis of high-quality surveys. To provide the best assistance possible, we ask collaborators to involve us early in the development of any survey for which they will request our help with analysis; our extensive experience with the strengths and challenges of survey research is most valuable at the design stage.

Through our diverse experiences, we have seen poorly executed surveys whose data could not be used for meaningful analysis because of low quality. This is troubling for obvious reasons, but for a non-obvious one as well: poorly designed questions, or simply too many questions, cause fatigue in survey respondents. Fatigued respondents are more likely to skip, or answer imprecisely, even the survey's well-written questions.

Ultimately, survey design matters more than many realize. It is both an art and a science, with several non-obvious best practices. Here are some examples:

  • Rather than using a checkbox that a respondent checks to answer “yes” and leaves unchecked to answer “no,” your data will be more accurate if you create a multiple-choice question in which the respondent must choose “yes” or “no” explicitly. An unchecked box cannot distinguish a “no” from a skipped question.
  • “Double-barreled questions” are those that ask for someone’s position on more than one issue while allowing only one answer. These should be avoided because it is impossible to know which topic a respondent’s answer reflects. For example, “Should the federal government reduce military and foreign aid spending?” is not a good question because it is not possible to know whether respondents are reporting their attitude toward reducing spending on the military, on foreign aid, or on both.
  • It might seem that giving respondents many answer choices would help them provide the most accurate self-report of their attitudes. However, researchers have found that the most reliable data come from limiting respondents’ options in specific ways. For example, if you want to ask respondents a “unipolar” question (such as the extent to which they believe climate change is human-made), it is ideal to give them five options: “not at all,” “a little,” “somewhat,” “quite a bit,” and “entirely.” You could then analyze the data as a continuous variable ranging in value from 1 to 5, or as a variable with five ordered categories.
  • If you wish to ask respondents a “bipolar” question – where the range is not from “not at all” to “entirely,” but rather between two opposing ideas on a spectrum – your best bet is to use seven options. For instance, you might ask respondents how they are voting on a ballot measure. The answers might be, “definitely vote against,” “very likely to vote against,” “slightly likely to vote against,” “equally likely to vote against or in favor,” “slightly likely to vote in favor,” “very likely to vote in favor,” and “definitely vote in favor.” You would then analyze the data as a continuous variable ranging in value from 1 to 7, or as a variable with seven ordered categories.
  • Open-ended questions should be used sparingly, and only if there is a plan to analyze the qualitative data systematically. Casually throwing in open-ended questions increases survey fatigue for respondents, lowering data quality for the entire survey. We have seen many occasions on which answers to open-ended questions were never used in any meaningful way. It is best to try to create scaled response options before resorting to an open-ended question. If an open-ended question is an absolute must, we recommend placing it near the end of the survey; that way, if a respondent disengages upon seeing it, at least the prior questions will already have been answered.
  • Be strategic about “forced” responses in online surveys. Forcing an answer can be a good idea if you need someone’s answer to a first question in order to display the right follow-up questions on a subsequent survey page. However, when it comes to sensitive topics, forcing an answer may lead someone to answer inaccurately. If forcing an answer could potentially lead to low data quality or could create ethical concerns, making the question optional is ideal.
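As a minimal sketch of the two analysis approaches mentioned above for a five-option unipolar scale, the labels can be coded as a continuous 1-to-5 variable or kept as ordered categories and tabulated. The response data here are made up for illustration, assuming Python:

```python
# Hypothetical coding of the five unipolar options described above,
# mapped to the values 1 through 5.
SCALE = {
    "not at all": 1,
    "a little": 2,
    "somewhat": 3,
    "quite a bit": 4,
    "entirely": 5,
}

# Made-up responses from five hypothetical survey respondents.
responses = ["somewhat", "a little", "entirely", "somewhat", "quite a bit"]

# Approach 1: treat the scale as a continuous 1-5 variable and take the mean.
codes = [SCALE[r] for r in responses]
mean_score = sum(codes) / len(codes)

# Approach 2: keep the ordered categories and tabulate counts per level.
counts = {label: responses.count(label) for label in SCALE}
```

The same pattern extends to the seven-option bipolar scale by swapping in the seven labels and the values 1 through 7. In a full analysis, a statistics package would typically handle the ordered-categorical case (e.g., ordinal regression), but the coding step is the same.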

NDUS-IR has provided a checklist to assist end users in the development of surveys. The checklist is located here.

============================================================================


Dr. Ellie Shockley is an Educational Data Warehouse Specialist on the NDUS institutional research team. In this capacity, she also works closely with the Department of Public Instruction (PK-12) and the Information Technology Department. She often responds to education data requests that come from state agencies or from outside of state government. Her work ranges from pulling raw data and sharing it according to our best practices, to conducting complex statistical analyses to answer research questions, to assisting with inter-agency collaborations related to education data, and more.