Why This Matters
Survey research is one of the most common methods in communication studies, and one of the easiest to mess up. Whether you're measuring media effects, audience attitudes, or communication behaviors, the quality of your data depends entirely on how well you design your instrument. You're being tested on your ability to recognize validity threats, sampling logic, question construction principles, and the difference between surveys that produce meaningful data and those that produce noise.
Don't just memorize a checklist of "good survey tips." Understand why each practice matters: What bias does it prevent? What type of validity does it protect? How does it affect your ability to generalize findings? When you can connect each best practice to its underlying methodological principle, you'll be ready for any exam question, whether it asks you to critique a flawed survey or design one from scratch.
Foundation: Research Design Principles
Strong surveys require clear conceptual groundwork before you write a single question. These upfront decisions determine whether your survey can actually answer your research question.
Define Clear Research Objectives
- Operationalize your concepts first. A vague goal like "understand attitudes" won't guide question construction. Specify exactly what constructs you're measuring and how you'll know you've measured them.
- Align each question with a specific objective to avoid collecting data you can't use and burdening respondents unnecessarily.
- Measurable objectives enable validity assessment. If you can't articulate what a "successful" answer looks like, you can't evaluate whether your instrument works.
Select a Representative Sample
- Random sampling protects external validity. It's the only method that legitimately supports generalizing findings to a larger population.
- Sample size affects statistical power, which determines whether you can detect real effects. Too small a sample and you risk Type II errors (failing to find an effect that actually exists); see the sizing sketch after this list.
- Your sampling frame must match your target population. Surveying only smartphone users about "general media habits" introduces systematic coverage bias, because you've excluded everyone who doesn't use a smartphone.
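To make the sampling logic concrete, here is a minimal Python sketch. It assumes a hypothetical 10,000-person sampling frame and uses the statsmodels power module to size a two-group comparison before drawing a simple random sample:

```python
import random

from statsmodels.stats.power import TTestIndPower

# Hypothetical sampling frame: a list covering the entire target population.
sampling_frame = [f"respondent_{i}" for i in range(10_000)]

# How many respondents per group are needed to detect a small-to-medium
# effect (Cohen's d = 0.3) with 80% power at alpha = .05?
n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Needed per group: {n_per_group:.0f}")  # roughly 175

# Simple random sampling: every frame member has an equal chance of
# selection, which is what licenses generalizing to the population.
sample = random.sample(sampling_frame, k=2 * round(n_per_group))
```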
Choose an Appropriate Survey Type
- Mode effects influence response quality. Online surveys offer anonymity but tend toward lower response rates. Phone surveys allow clarification of confusing items but introduce interviewer bias.
- Match mode to your population's accessibility. Elderly respondents may prefer phone or mail; younger demographics expect mobile-optimized designs.
- Resource constraints shape feasibility. In-person surveys yield rich data but cost significantly more per response.
Compare: Online vs. in-person surveys can both achieve representative samples, but online surveys risk coverage bias (excluding those without internet access) while in-person surveys risk interviewer effects. If an FRQ asks about trade-offs in survey mode selection, discuss both validity and practical constraints.
Question Construction: Clarity and Neutrality
The way you word questions directly affects measurement validity, which is whether your questions actually capture the concepts you intend to measure. Poor wording introduces systematic error that no statistical technique can fix after the fact.
Use Simple, Unambiguous Language
- Avoid jargon and technical terms that respondents may interpret differently or not understand at all.
- Concrete language reduces measurement error. "How many hours did you watch TV yesterday?" is far better than "How much television do you typically consume?" The first targets a specific, recallable behavior; the second is vague about both quantity and time frame.
- Match vocabulary to your sample. A survey for communication scholars can use different terminology than one for general audiences.
Ask One Question at a Time
- Double-barreled questions destroy interpretability. If someone disagrees with "TV news is biased and untrustworthy," you can't tell which part they're rejecting. Maybe they think it's biased but still trustworthy, or vice versa.
- Single-focus questions isolate variables, allowing you to identify which specific attitudes or behaviors relate to your outcomes.
- Compound questions inflate measurement error because respondents must mentally average their answers to two different concepts into one response.
Avoid Leading or Biased Questions
- Neutral framing protects construct validity. Leading questions measure acquiescence (the tendency to agree with whatever is stated), not true attitudes.
- Balance positive and negative wording across items measuring the same construct to detect response sets (patterns of mindless agreement or disagreement); a scoring sketch follows this list.
- Watch for prestige bias. Questions that imply a "correct" or socially approved answer push respondents toward socially desirable responses rather than honest ones.
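Here is a minimal sketch of how reverse-coding and response-set detection might work at the analysis stage; the item names and the 5-point scale are hypothetical:

```python
SCALE_MAX = 5  # hypothetical 5-point Likert scale

def score(responses: dict[str, int]) -> dict[str, int]:
    # Items ending in "_neg" are negatively worded, so reverse-code them
    # before combining them with the positively worded items.
    return {
        item: (SCALE_MAX + 1 - value) if item.endswith("_neg") else value
        for item, value in responses.items()
    }

def flag_response_set(responses: dict[str, int]) -> bool:
    # Identical raw answers across mixed-wording items suggest acquiescence
    # or straight-lining rather than a genuine attitude.
    raw = list(responses.values())
    return max(raw) == min(raw)

thoughtful = {"trust_1": 4, "trust_2_neg": 2, "trust_3": 5, "trust_4_neg": 1, "trust_5": 4}
straightliner = {"trust_1": 4, "trust_2_neg": 4, "trust_3": 4, "trust_4_neg": 4, "trust_5": 4}

print(score(thoughtful))                 # reversed items now align with the positive items
print(flag_response_set(thoughtful))     # False
print(flag_response_set(straightliner))  # True
```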
Compare: "Don't you agree that social media harms democracy?" vs. "To what extent do you believe social media affects democratic processes?" Both address the same topic, but the first is leading (it presupposes harm and pressures agreement) while the second allows respondents to indicate both direction and magnitude. This is a classic exam example of question bias.
Response Options and Measurement Scales
Your response options determine what kind of data you collect and what analyses you can perform. The choice between nominal, ordinal, and interval-level measurement has direct implications for statistical validity.
Include a Mix of Question Types
- Closed-ended questions enable quantitative analysis and comparability across respondents, but they limit response depth because you've predetermined the possible answers.
- Likert scales measure intensity of attitudes, typically using 5 or 7 points. There's ongoing debate over whether to include a neutral midpoint: including it gives genuinely neutral respondents a home, but it can also become a dumping ground for people who don't want to think carefully.
- Open-ended questions capture unanticipated responses and provide qualitative richness, but they require coding (which introduces coder reliability concerns) and reduce comparability across respondents.
Use Appropriate Response Options
- Exhaustive and mutually exclusive categories are non-negotiable. Every respondent must fit exactly one option. If your income categories skip a range or your media use categories overlap, you've created a measurement problem (see the validation sketch after this list).
- Balanced scales prevent acquiescence bias. Equal numbers of positive and negative options keep the scale's midpoint meaningful.
- Include "Don't know" or "Not applicable" options when forcing a choice would produce invalid data. A respondent who has no opinion but is forced to pick one adds noise, not signal.
Include Demographic Questions
- Demographics enable subgroup analysis. You can't examine gender differences in media trust if you don't collect gender data; a short example follows this list.
- Place sensitive demographics at the end to build rapport before asking potentially intrusive questions. Starting with income or political affiliation can cause early drop-off.
- Only collect what you'll analyze. Unnecessary demographic questions raise privacy concerns without research benefit.
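A minimal pandas sketch of the kind of subgroup analysis that demographic questions make possible; the data are hypothetical:

```python
import pandas as pd

# Hypothetical responses: a 1-5 media trust score plus one demographic column.
df = pd.DataFrame({
    "gender": ["woman", "man", "woman", "nonbinary", "man", "woman"],
    "media_trust": [4, 2, 5, 3, 2, 4],
})

# This comparison is only possible because gender was collected.
print(df.groupby("gender")["media_trust"].agg(["mean", "count"]))
```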
Compare: 5-point vs. 7-point Likert scales both measure ordinal attitudes, but 7-point scales offer finer discrimination while 5-point scales reduce cognitive burden. Research shows minimal reliability differences between the two, so choose based on your population's sophistication and overall survey length.
Survey Structure and Flow
How you organize questions affects both data quality and response rates. Order effects and respondent fatigue are real threats to validity that good structure minimizes.
Organize Questions Logically
- Funnel structure works best. Start with general, easy questions before moving to specific or sensitive topics. This eases respondents into the survey and builds their comfort.
- Group related items together to reduce cognitive switching and maintain respondent engagement.
- Use clear section transitions so respondents understand why topics are shifting. A brief sentence like "The next few questions ask about your social media use" helps orient them.
Consider the Order Effect
- Question order can prime responses. Asking about negative news experiences before measuring media trust will likely depress trust scores, because you've just made negative experiences salient.
- Randomization controls for order effects in experimental designs, but it may feel disjointed to respondents who expect a logical flow.
- Context effects are especially strong for attitude questions. Earlier items create a mental frame for interpreting later ones.
Keep Surveys Concise
- Respondent fatigue degrades data quality. Attention drops sharply after 10-15 minutes for most online surveys.
- Every question should earn its place. If it doesn't directly serve a research objective, cut it.
- Satisficing increases with length. Tired respondents start selecting random or midpoint responses just to finish faster, adding both random noise and systematic bias to your data.
Compare: Randomized vs. fixed question order. Randomization eliminates systematic order effects but can create jarring transitions. Fixed order allows logical flow but means all respondents experience the same priming. Best practice: randomize within sections while keeping section order fixed.
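A minimal sketch of randomizing within sections while keeping section order fixed, using hypothetical section and item names:

```python
import random

# Hypothetical instrument: section order stays fixed, while item order is
# shuffled independently within each section.
sections = {
    "social_media_use": ["hours_per_day", "platforms_used", "posting_frequency"],
    "media_trust": ["trust_tv", "trust_newspapers", "trust_social"],
}

def questionnaire_order(sections, seed=None):
    rng = random.Random(seed)  # seed per respondent if you need reproducibility
    ordered = []
    for items in sections.values():  # dicts preserve insertion (section) order
        shuffled = items[:]
        rng.shuffle(shuffled)
        ordered.extend(shuffled)
    return ordered

print(questionnaire_order(sections, seed=42))
```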
Quality Assurance and Ethics
These practices protect both your data's reliability and your participants' rights. Skipping these steps is the fastest way to produce unusable data or face IRB problems.
Pre-test the Survey
Pre-testing is where you catch problems before they contaminate your actual data. There are two main approaches:
- Pilot testing involves administering the full survey to a small group from your target population. This reveals ambiguous wording, confusing skip logic, and technical glitches.
- Cognitive interviews go deeper. You ask pilot respondents to think aloud while answering, which surfaces interpretation issues that you'd never notice just by reading responses.
Test with your actual population, not just convenient colleagues who share your expertise. A question that's clear to a fellow researcher may confuse a general audience.
Provide Clear Instructions
- Explicit instructions reduce measurement error. Respondents shouldn't have to guess what you're asking or how to respond.
- Include examples for unfamiliar formats, especially for scales or ranking tasks where the mechanics aren't obvious.
- Specify the reference period clearly. "In the past week" vs. "in general" produces very different data, and leaving it ambiguous means each respondent picks their own time frame.
Ensure Confidentiality and Anonymity
These two terms are distinct, and exams frequently test whether you know the difference:
- Confidential means you know who responded but won't disclose their identity. You can link their responses to their name, but you keep that link private.
- Anonymous means you can't identify respondents at all, even if you wanted to. There's no link between a response and a person.
Privacy assurances increase honest responding, especially for sensitive topics like media habits or political views. Informed consent is ethically required: participants must understand how their data will be used before agreeing to participate.
Compare: Confidential vs. anonymous surveys both protect privacy, but anonymous surveys can't link responses across time (ruling out longitudinal analysis), while confidential surveys require secure data storage protocols. Choose based on your research design needs.
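A sketch of what the two designs imply for data handling; the identifiers and fields are hypothetical:

```python
# Confidential: responses carry a study ID, and the ID-to-identity link is
# stored separately under restricted access. Waves can be linked over time.
responses_confidential = [{"study_id": "P017", "media_trust": 4}]
id_link = {"P017": "jane.doe@example.edu"}  # keep in a separate, secured file

# Anonymous: no identifier at all. Even the researcher cannot re-identify a
# respondent, but responses can never be linked across waves.
responses_anonymous = [{"media_trust": 4}]
```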
Quick Reference Table
| Principle | Best Practices |
|---|---|
| Construct Validity | Clear objectives, simple language, avoid leading questions |
| External Validity | Representative sampling, appropriate sample size |
| Measurement Quality | Single-focus questions, balanced response options, mixed question types |
| Reliability | Pre-testing, clear instructions, unambiguous wording |
| Response Quality | Concise length, logical organization, order effect awareness |
| Research Ethics | Confidentiality/anonymity, informed consent, minimal demographic intrusion |
| Mode Selection | Match to population accessibility, consider interviewer effects, resource feasibility |
Self-Check Questions
- A survey asks: "How satisfied are you with the quality and frequency of local news coverage?" What specific problem does this question have, and how would you fix it?
- Which two best practices most directly protect a survey's external validity, and why do they work together?
- Compare and contrast the use of Likert scales versus open-ended questions: What does each sacrifice, and when would you prioritize one over the other?
- A researcher places sensitive questions about political ideology at the beginning of a survey measuring media trust. Identify two potential problems this creates and explain the underlying mechanisms.
- If an FRQ presents a survey with high response rates but questionable data quality, which best practices would you examine first, and what specific validity threats would you look for?