Study smarter with Fiveable
Usability testing isn't just a checkbox on your project timeline—it's the bridge between what designers think users want and what users actually need. You're being tested on your ability to select the right method for specific design challenges, whether that means choosing qualitative approaches for deep insight into user cognition or quantitative methods for statistically valid performance data. Understanding when to deploy each technique separates competent designers from strategic ones.
These methods demonstrate core principles you'll encounter throughout advanced design work: user-centered design, iterative refinement, cognitive load theory, and evidence-based decision-making. The exam will push you to justify method selection, combine complementary approaches, and interpret findings to drive design improvements. Don't just memorize what each method does—know why you'd choose it over alternatives and what type of insight it uniquely provides.
Behavioral observation methods capture what users actually do: their actions, attention patterns, and real-time decision-making. Direct observation reveals the gap between intended and actual user behavior.
Compare: Think-Aloud Protocol vs. Eye Tracking—both reveal user attention, but think-aloud captures why users focus somewhere while eye tracking captures where with physiological precision. If an exam question asks about validating visual hierarchy, eye tracking provides the objective data; for understanding user reasoning, think-aloud wins.
Expert evaluation methods rely on evaluator expertise to identify problems without recruiting user participants. These reviews catch obvious issues early, preserving user-testing resources for deeper insights.
Compare: Heuristic Evaluation vs. Cognitive Walkthrough—both are expert-based, but heuristic evaluation assesses general usability principles while cognitive walkthrough specifically targets learnability for new users. Choose cognitive walkthrough when onboarding experience is your primary concern.
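A cognitive walkthrough applies a fixed set of evaluator questions to each step of a task. The sketch below uses the four standard walkthrough questions (from Wharton et al.); the sign-up steps and yes/no answers are purely illustrative, not from this guide:

```python
# The four standard cognitive-walkthrough questions (Wharton et al.).
QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the action with the desired effect?",
    "Will the user see that progress is being made?",
]

def walkthrough(steps):
    """For each task step, record which questions the evaluator answered 'no' to."""
    problems = []
    for step, answers in steps:  # answers: one yes/no (bool) per question
        for question, ok in zip(QUESTIONS, answers):
            if not ok:
                problems.append((step, question))
    return problems

# Hypothetical evaluation of a two-step sign-up flow
steps = [
    ("Tap 'Get started'", [True, True, True, True]),
    ("Verify email",      [True, False, True, True]),  # verification link easy to miss
]
print(walkthrough(steps))
# [('Verify email', 'Will the user notice that the correct action is available?')]
```

Because the questions center on whether a first-time user can discover and interpret each action, the output is a direct list of learnability failures, which is exactly why the method suits onboarding concerns.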
Quantitative measurement methods generate data for statistical analysis and design optimization. These approaches answer "which performs better" rather than "why."
Compare: A/B Testing vs. Usability Benchmarking—A/B testing optimizes specific design choices between variants, while benchmarking tracks overall performance over time. Use A/B for micro-decisions (button color, copy variations); use benchmarking for macro-assessment (release-over-release improvement).
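Deciding whether variant B truly outperforms variant A usually comes down to a two-proportion comparison. Here is a minimal sketch using a pooled two-proportion z-test; the conversion counts and function name are illustrative assumptions, not data from this guide:

```python
from math import sqrt, erf

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B convert differently from A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: A converts 120/2400 sessions, B converts 156/2400
z, p = ab_test_z(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # reject the null at alpha = 0.05 if p < 0.05
```

The key validity requirement the test encodes: each variant needs enough sessions that the standard error is small relative to the observed difference, which is why underpowered A/B tests produce inconclusive results.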
Self-report methods capture user preferences, mental models, and satisfaction through direct input, revealing what users think and feel about their experience.
Compare: Surveys vs. Card Sorting—surveys capture attitudes and satisfaction, while card sorting reveals cognitive organization. Both are user-reported, but card sorting produces structural design guidance while surveys measure experiential outcomes.
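The "cognitive organization" that card sorting reveals is typically summarized as a co-occurrence matrix: how often participants placed two cards in the same group. A minimal sketch (the card labels and sort results are invented for illustration):

```python
from itertools import combinations
from collections import Counter

def cooccurrence(sorts):
    """Count how often each pair of cards lands in the same group
    across participants; higher counts suggest a shared mental model."""
    pairs = Counter()
    for participant in sorts:          # one card sort per participant
        for group in participant:      # each group is a set of card labels
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

# Hypothetical open card sort results from three participants
sorts = [
    [{"Pricing", "Plans"}, {"Docs", "Tutorials", "FAQ"}],
    [{"Pricing", "Plans", "FAQ"}, {"Docs", "Tutorials"}],
    [{"Pricing", "Plans"}, {"Docs", "FAQ"}, {"Tutorials"}],
]
matrix = cooccurrence(sorts)
print(matrix[("Plans", "Pricing")])   # 3: every participant grouped these together
print(matrix[("Docs", "Tutorials")])  # 2
```

High-count pairs become candidates for the same navigation category, which is the structural design guidance the comparison above refers to.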
Task-focused methods decompose user workflows to identify friction points and optimization opportunities. Understanding task structure reveals where design can reduce cognitive load.
Compare: Task Analysis vs. Cognitive Walkthrough—task analysis maps what users must do to complete goals, while cognitive walkthrough evaluates whether the interface supports those actions for novice users. Task analysis is descriptive; cognitive walkthrough is evaluative.
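Task analysis is often recorded as a hierarchy: a goal decomposed into subtasks down to primitive actions. A minimal sketch of that descriptive mapping, using nested dicts (the checkout task and its step names are illustrative assumptions):

```python
# Hypothetical hierarchical task analysis for an online checkout
hta = {
    "Check out": {
        "Review cart": {},
        "Enter shipping details": {
            "Fill address form": {},
            "Choose delivery speed": {},
        },
        "Pay": {
            "Enter card number": {},
            "Confirm payment": {},
        },
    }
}

def leaf_actions(node):
    """Flatten the task tree into the primitive actions a user must perform."""
    steps = []
    for name, subtasks in node.items():
        if subtasks:
            steps.extend(leaf_actions(subtasks))  # recurse into subtasks
        else:
            steps.append(name)                    # leaf = primitive action
    return steps

print(leaf_actions(hta))
# ['Review cart', 'Fill address form', 'Choose delivery speed',
#  'Enter card number', 'Confirm payment']
```

The flattened action list is the descriptive output; a cognitive walkthrough would then evaluate each of those actions against a novice user's knowledge, which is the descriptive-versus-evaluative distinction above.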
| Concept | Best Examples |
|---|---|
| Qualitative insight into user cognition | Think-Aloud Protocol, Cognitive Walkthrough |
| Quantitative performance measurement | A/B Testing, Usability Benchmarking, Eye Tracking |
| Expert-based evaluation (no users required) | Heuristic Evaluation, Cognitive Walkthrough |
| Information architecture design | Card Sorting, Task Analysis |
| Early-stage/low-fidelity testing | Think-Aloud Protocol, Heuristic Evaluation |
| Statistical optimization | A/B Testing, Surveys |
| Ecological validity (natural context) | Remote Usability Testing |
| Learnability assessment | Cognitive Walkthrough |
1. You're designing a new onboarding flow for first-time users. Which two methods would you combine to evaluate learnability before and during user testing, and why?
2. A stakeholder wants "data" to prove one navigation design outperforms another. What method provides statistical evidence, and what's the key requirement for valid results?
3. Compare and contrast heuristic evaluation and think-aloud protocol: What type of insight does each provide, and at what stage of the design process is each most valuable?
4. Your eye tracking study shows users fixate on a call-to-action button, but conversion remains low. Which complementary method would help explain the disconnect, and what would you expect to learn?
5. An FRQ asks you to design a usability testing plan for a content-heavy website redesign. Which method specifically informs information architecture decisions, and how does it capture user mental models?