
Key UX Research Methods


Why This Matters

UX research methods aren't just a checklist to memorize—they're the foundation of every design decision you'll defend on exams and in practice. You're being tested on your ability to choose the right method for the right situation, understand when qualitative beats quantitative (and vice versa), and recognize how different methods reveal different types of user insights. The best designers don't just know what these methods are; they know which tool to reach for when faced with specific design challenges.

Think of research methods as falling into distinct categories: generative vs. evaluative, behavioral vs. attitudinal, and qualitative vs. quantitative. When an exam question asks you to recommend a research approach, you need to understand these underlying frameworks. Don't just memorize method names—know what type of data each produces, when in the design process it's most useful, and what questions it can (and can't) answer.


Generative Research: Understanding Users Before You Design

These methods help you discover user needs, motivations, and contexts before you start designing. They answer the question: "What should we build?"

User Interviews

  • One-on-one qualitative conversations—the gold standard for understanding user motivations, frustrations, and mental models in depth
  • Open-ended questions reveal unexpected insights that surveys miss; users explain the why behind their behaviors
  • Early-stage research tool that informs personas, journey maps, and feature prioritization before any wireframes exist

Contextual Inquiry

  • Field research in users' natural environments—observe people where they actually work, live, or use products
  • Combines observation with interviewing to capture both what users say and what they do (often different!)
  • Reveals environmental constraints like interruptions, workarounds, and real-world conditions that lab testing misses

Focus Groups

  • Group discussions with 5-8 participants—efficient for exploring attitudes, reactions, and initial concepts
  • Social dynamics generate richer debate as participants build on each other's ideas and challenge assumptions
  • Best for early exploration, not validation; groupthink can skew results if not moderated carefully

Compare: User Interviews vs. Focus Groups—both gather qualitative attitudinal data, but interviews provide depth while focus groups provide breadth and social dynamics. If an FRQ asks about exploring sensitive topics, interviews are your answer; for gauging group reactions to concepts, choose focus groups.


Evaluative Research: Testing What You've Built

These methods assess existing designs or prototypes. They answer: "Does this solution work?"

Usability Testing

  • Task-based observation where real users attempt specific goals while you watch and record struggles
  • Identifies friction points through metrics like task completion rate, time on task, and error frequency (see the sketch after this list)
  • Works at any fidelity—test paper prototypes, wireframes, or finished products; earlier testing catches problems cheaper
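
A minimal sketch of how those three metrics are typically tallied, written in Python with invented session records (the participant IDs, field names, and numbers are illustrative, not from any real study or tool):

    # Summarize usability-test sessions for one task into the core metrics.
    from statistics import mean

    sessions = [  # hypothetical data
        {"participant": "P1", "completed": True,  "seconds": 74,  "errors": 1},
        {"participant": "P2", "completed": False, "seconds": 180, "errors": 4},
        {"participant": "P3", "completed": True,  "seconds": 92,  "errors": 0},
        {"participant": "P4", "completed": True,  "seconds": 61,  "errors": 2},
        {"participant": "P5", "completed": False, "seconds": 200, "errors": 5},
    ]

    completion_rate = mean(1 if s["completed"] else 0 for s in sessions)
    avg_time = mean(s["seconds"] for s in sessions)
    avg_errors = mean(s["errors"] for s in sessions)

    print(f"Task completion rate: {completion_rate:.0%}")  # 60%
    print(f"Mean time on task: {avg_time:.0f} s")          # 121 s
    print(f"Mean errors per session: {avg_errors:.1f}")    # 2.4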

Heuristic Evaluation

  • Expert review against established principles—typically Nielsen's 10 usability heuristics
  • Fast and cost-effective because it doesn't require recruiting users; 3-5 evaluators catch most issues
  • Finds obvious problems quickly but misses context-specific issues that only real users encounter

A/B Testing

  • Controlled experiments comparing design variations—users randomly see Version A or B
  • Quantitative metrics drive decisions: conversion rates, click-through rates, engagement time
  • Requires significant traffic to reach statistical significance; best for optimizing existing products, not exploring new concepts
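
Because A/B decisions hinge on that significance check, it helps to see what one looks like. Below is a minimal sketch of a two-proportion z-test using only the Python standard library; the visitor and conversion counts are invented, and real experiments usually run through dedicated experimentation tooling:

    # Two-proportion z-test: is Version B's conversion rate genuinely
    # higher than Version A's, or just noise?
    from math import erf, sqrt

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """Return (z statistic, two-sided p-value) for the rate difference."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
        return z, p_value

    # Hypothetical traffic: 2,400 visitors randomly assigned to each variant.
    z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
    print(f"z = {z:.2f}, p = {p:.3f}")  # z = 2.23, p = 0.026: significant at 0.05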

Compare: Usability Testing vs. A/B Testing—both evaluate designs, but usability testing tells you why something fails (qualitative), while A/B testing tells you which option performs better (quantitative). Use usability testing to diagnose problems; use A/B testing to validate solutions at scale.


Data Collection at Scale: Quantitative Methods

When you need numbers, patterns, and statistical confidence, these methods deliver.

Surveys and Questionnaires

  • Scalable data collection from hundreds or thousands of users to identify trends and validate hypotheses
  • Mix closed-ended questions (Likert scales, multiple choice) for statistics with open-ended questions for context (a tallying sketch follows this list)
  • Sampling matters—biased recruitment or poorly worded questions produce misleading data
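
As a small illustration of the closed-ended side, here is a sketch that summarizes one 5-point Likert item; the responses are invented, and the "top-2-box" score (share answering 4 or 5) is one common way to report agreement:

    # Summarize a single 5-point Likert item (1 = strongly disagree,
    # 5 = strongly agree) from hypothetical survey responses.
    from collections import Counter
    from statistics import mean, median

    responses = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4, 1, 4, 5, 3, 4]

    counts = Counter(responses)
    print("Distribution:", {k: counts.get(k, 0) for k in range(1, 6)})
    print(f"Mean: {mean(responses):.2f}  Median: {median(responses)}")

    top2 = sum(1 for r in responses if r >= 4) / len(responses)
    print(f"Top-2-box (agree/strongly agree): {top2:.0%}")  # 67%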

Compare: Surveys vs. User Interviews—surveys sacrifice depth for breadth. Use surveys to quantify how many users experience a problem; use interviews to understand why they experience it. Strong research often pairs both methods.


Synthesis and Communication Tools

These methods organize and communicate research findings to guide design decisions.

Personas

  • Fictional user archetypes built from real research data—not assumptions or stereotypes
  • Keep user needs visible throughout the design process; teams ask "Would Sarah use this?"
  • Include goals, behaviors, and pain points—demographics alone don't drive useful design decisions

Journey Mapping

  • Visual timeline of user experience across all touchpoints, from awareness through long-term use
  • Surfaces emotional highs and lows—where users feel frustrated, delighted, or confused
  • Identifies opportunity gaps between what users expect and what they currently experience

Compare: Personas vs. Journey Maps—personas represent who your users are; journey maps represent what they experience over time. Use personas to align your team on target users; use journey maps to identify where the experience breaks down.


Information Architecture Research

These methods specifically inform how content and navigation should be structured.

Card Sorting

  • Users group and label content cards to reveal their mental models for organizing information (see the analysis sketch after this list)
  • Open sorting lets users create their own categories; closed sorting tests predefined structures
  • Directly informs navigation design and menu labels—use the language users actually use
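
One standard way to analyze open-sort results is a co-occurrence (similarity) count: how often each pair of cards lands in the same group across participants. A minimal sketch with invented card names and groupings:

    # Count how often each pair of cards is grouped together across
    # participants; frequent pairs suggest a shared mental model.
    from collections import defaultdict
    from itertools import combinations

    sorts = [  # one list of groups per hypothetical participant
        [["Pricing", "Plans"], ["Help", "Contact", "FAQ"]],
        [["Pricing", "Plans", "FAQ"], ["Help", "Contact"]],
        [["Pricing", "Plans"], ["Help", "FAQ"], ["Contact"]],
    ]

    pair_counts = defaultdict(int)
    for participant in sorts:
        for group in participant:
            for a, b in combinations(sorted(group), 2):
                pair_counts[(a, b)] += 1

    for (a, b), n in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
        print(f"{a} + {b}: together in {n}/{len(sorts)} sorts")

Pairs grouped together in most or all sorts (here, Pricing + Plans) are strong candidates for the same navigation category.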

Compare: Card Sorting vs. Usability Testing—card sorting shapes information architecture before you build navigation; usability testing validates whether that navigation works after implementation. Card sorting is generative; usability testing is evaluative.


Quick Reference Table

Concept | Best Examples
Generative/Discovery Research | User Interviews, Contextual Inquiry, Focus Groups
Evaluative/Validation Research | Usability Testing, A/B Testing, Heuristic Evaluation
Qualitative Methods | User Interviews, Contextual Inquiry, Usability Testing
Quantitative Methods | Surveys, A/B Testing, Analytics
Expert-Based (No Users Required) | Heuristic Evaluation, Cognitive Walkthrough
Synthesis/Communication Tools | Personas, Journey Mapping
Information Architecture | Card Sorting, Tree Testing
Field Research | Contextual Inquiry, Diary Studies

Self-Check Questions

  1. A product team wants to understand why users abandon their checkout flow. Which two methods would provide complementary insights, and what would each reveal?

  2. You have a working prototype and need to identify usability problems quickly with a limited budget. Compare heuristic evaluation and usability testing—which would you choose and why?

  3. When would you use card sorting versus usability testing to improve navigation? Explain where each fits in the design process.

  4. A stakeholder asks for "data" to support a design decision. What's the key difference between the data you'd get from surveys versus user interviews, and when is each more appropriate?

  5. Compare and contrast personas and journey maps: What unique value does each provide, and how might you use them together in a design project?