
🔬 Communication Research Methods

Survey Design Best Practices


Why This Matters

Survey research is one of the most common methods you'll encounter in communication studies—and one of the easiest to mess up. Whether you're measuring media effects, audience attitudes, or communication behaviors, the quality of your data depends entirely on how well you design your instrument. You're being tested on your ability to recognize validity threats, apply sampling logic and question-construction principles, and distinguish surveys that produce meaningful data from those that produce noise.

Don't just memorize a checklist of "good survey tips." Understand why each practice matters: What bias does it prevent? What type of validity does it protect? How does it affect your ability to generalize findings? When you can connect each best practice to its underlying methodological principle, you'll be ready for any exam question—whether it asks you to critique a flawed survey or design one from scratch.


Foundation: Research Design Principles

Before you write a single question, a strong survey requires clear conceptual groundwork. The decisions below determine whether your survey can actually answer your research question.

Define Clear Research Objectives

  • Operationalize your concepts first—vague goals like "understand attitudes" won't guide question construction; specify exactly what constructs you're measuring
  • Align each question with a specific objective to avoid collecting data you can't use and burdening respondents unnecessarily
  • Measurable objectives enable validity assessment—if you can't articulate what a "successful" answer looks like, you can't evaluate whether your instrument works

Select a Representative Sample

  • Random sampling protects external validity—it's the only way to legitimately generalize findings to a larger population
  • Sample size affects statistical power, determining whether you can detect real effects; too small and you risk Type II errors (see the sample-size sketch after this list)
  • Sampling frame must match your target population—surveying only smartphone users about "general media habits" introduces systematic bias
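
To make the sample size point concrete, here is a minimal sketch (standard-library Python only) of the conventional formula for the sample size needed to estimate a proportion within a chosen margin of error. The 95% confidence level, 5% margin, and conservative p = 0.5 are illustrative assumptions, and real planning should also budget for expected nonresponse.

```python
# Minimal sketch: n = z^2 * p(1 - p) / e^2, the standard sample-size formula
# for estimating a population proportion within margin of error e.
# All parameter values below are illustrative assumptions.
from statistics import NormalDist
import math

def sample_size_for_proportion(margin_of_error=0.05, confidence=0.95, p=0.5):
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-tailed critical value
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

print(sample_size_for_proportion())      # about 385 respondents at +/-5%
print(sample_size_for_proportion(0.03))  # about 1,068 respondents at +/-3%
```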

Choose Appropriate Survey Type

  • Mode effects influence response quality—online surveys offer anonymity but tend to yield lower response rates; phone surveys allow clarification but introduce interviewer bias
  • Match mode to your population's accessibility—elderly respondents may prefer phone; younger demographics expect mobile-optimized designs
  • Resource constraints shape feasibility—in-person surveys yield rich data but cost significantly more per response

Compare: Online vs. in-person surveys—both can achieve representative samples, but online surveys risk coverage bias (excluding those without internet access) while in-person surveys risk interviewer effects. If an FRQ asks about trade-offs in survey mode selection, discuss both validity and practical constraints.


Question Construction: Clarity and Neutrality

The way you word questions directly affects measurement validity—whether your questions actually capture the concepts you intend to measure. Poor wording introduces systematic error that no statistical technique can fix.

Use Simple, Unambiguous Language

  • Avoid jargon and technical terms that respondents may interpret differently or not understand at all
  • Concrete language reduces measurement error—"How many hours did you watch TV yesterday?" beats "How much television do you typically consume?"
  • Match vocabulary to your sample—a survey for communication scholars can use different terminology than one for general audiences

Ask One Question at a Time

  • Double-barreled questions destroy interpretability—if someone disagrees with "TV news is biased and untrustworthy," you can't tell which part they're rejecting
  • Single-focus questions isolate variables, allowing you to identify which specific attitudes or behaviors relate to your outcomes
  • Compound questions inflate measurement error because respondents must mentally average their answers to different concepts

Avoid Leading or Biased Questions

  • Neutral framing protects construct validity—leading questions measure acquiescence, not true attitudes
  • Balance positive and negative wording across items measuring the same construct to detect response sets (see the reverse-coding sketch after this list)
  • Watch for prestige bias—questions that imply a "correct" answer push respondents toward socially desirable responses
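
One way balanced wording plays out at the analysis stage: negatively worded items must be reverse-coded before you combine them with positively worded items on the same construct. The sketch below assumes pandas and uses made-up item names and a 5-point scale purely for illustration.

```python
# Minimal sketch: reverse-code a negatively worded Likert item so all items
# on the construct point the same direction before averaging.
# Item names, values, and the 5-point scale are illustrative assumptions.
import pandas as pd

responses = pd.DataFrame({
    "trust_pos": [5, 4, 2],  # e.g., "I trust local TV news."
    "trust_neg": [1, 2, 4],  # e.g., "Local TV news is unreliable." (negatively worded)
})

SCALE_MAX = 5
responses["trust_neg_rc"] = (SCALE_MAX + 1) - responses["trust_neg"]  # maps 1<->5, 2<->4
responses["trust_index"] = responses[["trust_pos", "trust_neg_rc"]].mean(axis=1)
print(responses)
```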

Compare: "Don't you agree that social media harms democracy?" vs. "To what extent do you believe social media affects democratic processes?"—both address the same topic, but the first is leading (presupposes harm) while the second allows respondents to indicate direction and magnitude. Classic exam example of question bias.


Response Options and Measurement Scales

Your response options determine what kind of data you collect and what analyses you can perform. The choice between nominal, ordinal, and interval-level measurement has direct implications for statistical validity.

Include a Mix of Question Types

  • Closed-ended questions enable quantitative analysis and comparability across respondents but limit response depth
  • Likert scales measure intensity of attitudes—typically 5 or 7 points, with debate over whether to include a neutral midpoint
  • Open-ended questions capture unanticipated responses and provide qualitative richness but require coding and reduce comparability

Use Appropriate Response Options

  • Exhaustive and mutually exclusive categories are non-negotiable—every respondent must fit exactly one option (see the bracket check after this list)
  • Balanced scales prevent acquiescence bias—equal numbers of positive and negative options keep the scale's midpoint meaningful
  • Include "Don't know" or "Not applicable" options when forcing a choice would produce invalid data

Include Demographic Questions

  • Demographics enable subgroup analysis—you can't examine gender differences if you don't collect gender data
  • Place sensitive demographics at the end to build rapport before asking potentially intrusive questions
  • Only collect what you'll analyze—unnecessary demographic questions raise privacy concerns without research benefit

Compare: 5-point vs. 7-point Likert scales—both measure ordinal attitudes, but 7-point scales offer finer discrimination while 5-point scales reduce cognitive burden. Research shows minimal reliability differences, so choose based on your population's sophistication and survey length.
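
If you want to check how well a set of Likert items hangs together, Cronbach's alpha is a common internal-consistency statistic (values around 0.70 or higher are the commonly cited rule of thumb). Here is a minimal sketch using NumPy with a tiny made-up response matrix; the data are purely illustrative.

```python
# Minimal sketch: Cronbach's alpha for items measuring one construct.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
# The response matrix below is made-up illustration data.
import numpy as np

items = np.array([   # rows = respondents, columns = Likert items
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
])

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)       # per-item sample variance
total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(round(alpha, 2))  # ~0.89 for this toy data
```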


Survey Structure and Flow

How you organize questions affects both data quality and response rates. Order effects and respondent fatigue are real threats to validity that good structure minimizes.

Organize Questions Logically

  • Funnel structure works best—start with general, easy questions before moving to specific or sensitive topics
  • Group related items together to reduce cognitive switching and maintain respondent engagement
  • Use clear section transitions so respondents understand why topics are shifting

Consider the Order Effect

  • Question order can prime responses—asking about negative news experiences before measuring media trust will likely depress trust scores
  • Randomization controls for order effects in experimental designs but may feel disjointed to respondents
  • Context effects are especially strong for attitude questions—earlier items create a frame for interpreting later ones

Keep Surveys Concise

  • Respondent fatigue degrades data quality—attention drops sharply after 10-15 minutes for most online surveys
  • Every question should earn its place—if it doesn't directly serve a research objective, cut it
  • Satisficing increases with length—tired respondents start selecting random or midpoint responses to finish faster

Compare: Randomized vs. fixed question order—randomization eliminates systematic order effects but can create jarring transitions. Fixed order allows logical flow but means all respondents experience the same priming. Best practice: randomize within sections while keeping section order fixed.
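
A minimal sketch of that best practice, assuming a simple dict of sections: questions are shuffled within each section while section order stays fixed, with an optional per-respondent seed so each respondent's order can be reproduced. Section and question names are hypothetical.

```python
# Minimal sketch: randomize question order within sections while keeping the
# section order fixed. Section and question names are illustrative.
import random

sections = {
    "Media use": ["q1", "q2", "q3"],
    "Media trust": ["q4", "q5", "q6"],
    "Demographics": ["q7", "q8"],
}

def build_question_order(sections, seed=None):
    rng = random.Random(seed)                  # seed per respondent for reproducibility
    order = []
    for _name, questions in sections.items():  # dicts preserve insertion order
        shuffled = questions[:]
        rng.shuffle(shuffled)
        order.extend(shuffled)
    return order

print(build_question_order(sections, seed=42))
```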


Quality Assurance and Ethics

These practices protect both your data's reliability and your participants' rights. Skipping these steps is the fastest way to produce unusable data or face IRB problems.

Pre-test the Survey

  • Pilot testing reveals problems you can't see—ambiguous wording, confusing skip logic, and technical glitches emerge when real people attempt your survey
  • Cognitive interviews go deeper—ask pilot respondents to think aloud while answering to identify interpretation issues
  • Test with your actual population, not just convenient colleagues who share your expertise

Provide Clear Instructions

  • Explicit instructions reduce measurement error—respondents shouldn't have to guess what you're asking
  • Include examples for unfamiliar formats, especially for scales or ranking tasks
  • Specify the reference period clearly—"in the past week" vs. "in general" produces very different data

Ensure Confidentiality and Anonymity

  • Distinguish confidentiality from anonymity—confidential means you know who responded but won't disclose; anonymous means you can't identify respondents at all
  • Privacy assurances increase honest responding, especially for sensitive topics like media habits or political views
  • Informed consent is ethically required—participants must understand how their data will be used before agreeing to participate

Compare: Confidential vs. anonymous surveys—both protect privacy, but anonymous surveys can't link responses across time (no longitudinal analysis) while confidential surveys require secure data storage. Choose based on your research design needs.


Quick Reference Table

Concept | Best Practices
Construct Validity | Clear objectives, simple language, avoid leading questions
External Validity | Representative sampling, appropriate sample size
Measurement Quality | Single-focus questions, balanced response options, mixed question types
Reliability | Pre-testing, clear instructions, unambiguous wording
Response Quality | Concise length, logical organization, order effect awareness
Research Ethics | Confidentiality/anonymity, informed consent, minimal demographic intrusion
Mode Selection | Match to population accessibility, consider interviewer effects, resource feasibility

Self-Check Questions

  1. A survey asks: "How satisfied are you with the quality and frequency of local news coverage?" What specific problem does this question have, and how would you fix it?

  2. Which two best practices most directly protect a survey's external validity, and why do they work together?

  3. Compare and contrast the use of Likert scales versus open-ended questions: What does each sacrifice, and when would you prioritize one over the other?

  4. A researcher places sensitive questions about political ideology at the beginning of a survey measuring media trust. Identify two potential problems this creates and explain the underlying mechanisms.

  5. If an FRQ presents a survey with high response rates but questionable data quality, which best practices would you examine first, and what specific validity threats would you look for?