User testing isn't just a checkbox in the design process—it's the foundation of creating experiences that actually work for real people. You're being tested on understanding when to deploy specific methodologies, what type of data each generates, and how findings translate into design decisions. The best designers don't just know these methods exist; they know which tool fits which problem.
Think of user testing methodologies as falling into distinct categories: some capture behavioral data (what users do), others reveal attitudinal data (what users think and feel), and still others leverage expert judgment to catch issues before users ever encounter them. When you're analyzing a design scenario on an exam or in practice, don't just memorize method names—know what concept each method illuminates and when it's the right choice.
Behavioral methodologies focus on watching what users actually do, not what they say they do. The gap between reported behavior and actual behavior is one of the most important insights in UX research.
Compare: Eye Tracking vs. Think-Aloud Protocol—both reveal why users struggle, but eye tracking shows unconscious attention patterns while think-aloud captures conscious reasoning. Use eye tracking for visual hierarchy questions; use think-aloud for understanding decision-making logic.
Attitudinal approaches capture what users think, feel, and believe about a product or experience. Attitudes don't always predict behavior, but they reveal motivations and emotional responses that behavioral data alone can't explain.
Compare: Focus Groups vs. Surveys—both capture attitudes, but focus groups provide depth through discussion while surveys provide breadth through scale. If an FRQ asks about understanding user motivations early in a project, focus groups are your answer; for validating findings across a large user base, surveys are the tool.
Expert-based methodologies rely on trained evaluators rather than end users. They're faster and cheaper than user testing, but their value depends entirely on evaluator expertise and established principles.
Compare: Heuristic Evaluation vs. Cognitive Walkthrough—both use experts instead of users, but heuristic evaluation broadly scans for violations of principles while cognitive walkthrough deeply examines specific task flows. Use heuristic evaluation for general interface audits; use cognitive walkthrough when learnability of critical tasks is the concern.
Optimization and organization methodologies focus on refining designs and structuring content—testing design variations (A/B testing) or understanding how users mentally structure information (card sorting).
Compare: A/B Testing vs. Usability Testing—both involve real users, but A/B testing measures which design performs better quantitatively while usability testing reveals why users struggle qualitatively. A/B testing requires large sample sizes and live products; usability testing works with 5-8 participants on prototypes.
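To see why A/B testing demands large samples, it helps to look at the statistics behind it. The sketch below runs a two-proportion z-test on hypothetical conversion counts for two button variants; all numbers, names, and traffic figures are invented for illustration, not drawn from any real product.

```python
# Minimal sketch of the statistics behind an A/B test: a two-proportion
# z-test comparing conversion rates for two variants (hypothetical data).
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Is variant B's conversion rate significantly different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical traffic split 50/50 across 50,000 visitors in one day:
z, p = two_proportion_z(conv_a=1200, n_a=25000, conv_b=1320, n_b=25000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Even with 25,000 visitors per variant, a conversion lift this small (4.8% to 5.3%) only barely clears the conventional 0.05 significance threshold—which is exactly why a 5-to-8-person usability study can never answer "which variant converts better."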
| Concept | Best Examples |
|---|---|
| Behavioral data (what users do) | Usability Testing, Eye Tracking, A/B Testing |
| Attitudinal data (what users think) | Surveys, Focus Groups, Contextual Inquiry |
| Expert-based evaluation | Heuristic Evaluation, Cognitive Walkthrough |
| Verbalized reasoning | Think-Aloud Protocol, Contextual Inquiry |
| Large-scale quantitative data | A/B Testing, Surveys |
| Early-stage exploration | Focus Groups, Card Sorting, Contextual Inquiry |
| Information architecture | Card Sorting |
| Visual attention analysis | Eye Tracking |
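Since card sorting is the table's go-to method for information architecture, here is a minimal sketch of how open card sort results are commonly analyzed: counting how often participants grouped each pair of cards together. The card labels and participant groupings below are entirely hypothetical.

```python
# Minimal sketch of open card sort analysis: a pairwise co-occurrence
# count showing how often each pair of cards was grouped together.
from itertools import combinations
from collections import Counter

# Each participant's sort is a list of groups of card labels (hypothetical).
sorts = [
    [{"Pricing", "Plans"}, {"Blog", "Tutorials", "Docs"}],
    [{"Pricing", "Plans", "Docs"}, {"Blog", "Tutorials"}],
    [{"Pricing", "Plans"}, {"Blog", "Tutorials"}, {"Docs"}],
]

pair_counts = Counter()
for sort in sorts:
    for group in sort:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Pairs grouped together most often suggest categories for the IA.
for pair, count in pair_counts.most_common():
    print(f"{pair}: grouped together by {count} of {len(sorts)} participants")
```

Pairs with high co-occurrence (here, Pricing/Plans and Blog/Tutorials) become candidate navigation categories; a closed variant would instead measure how well cards fit into categories the designer predefined.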
1. Which two methods both capture attitudinal data but differ significantly in sample size and depth of insight? How would you decide between them for a project?
2. A client wants to understand why users abandon their checkout flow. Which methodology would reveal where users look during checkout, and which would reveal what they're thinking as they abandon? What's the trade-off?
3. Compare heuristic evaluation and usability testing: What are the advantages of each, and in what project phase would you recommend each approach?
4. You're designing a new navigation system for a content-heavy website. Which methodology specifically informs information architecture, and what's the difference between its open and closed variants?
5. An FRQ asks you to recommend a testing approach for optimizing button color on a live e-commerce site with 50,000 daily visitors. Which method is appropriate, and why wouldn't usability testing work here?