User research isn't just a checkbox in the design process—it's the foundation that separates intuition-driven design from evidence-based strategy. You're being tested on your ability to select the right methodology for the right research question, understand when qualitative depth beats quantitative breadth, and recognize how different methods reveal different types of user insights. The methodologies here demonstrate core principles like triangulation, ecological validity, iterative validation, and participatory design thinking.
Don't just memorize what each method does. Know when to deploy it, what type of data it generates, and how it connects to strategic design decisions. An exam question won't ask you to define card sorting—it'll ask you to justify why card sorting beats interviews for an information architecture problem, or when A/B testing fails to capture the insights you actually need.
Field methods such as ethnographic research, contextual inquiry, and diary studies prioritize ecological validity: understanding users in their real environments rather than artificial lab settings. The underlying principle is that user behavior changes when removed from context, so researchers must go to the user.
Compare: Ethnographic research vs. Contextual inquiry—both prioritize real-world context, but ethnography is passive observation over time while contextual inquiry is active questioning during specific tasks. Use ethnography for discovery; use contextual inquiry when you've identified specific workflows to investigate.
Elicitation methods such as user interviews, focus groups, and surveys ask users directly about their experiences, needs, and preferences. The core principle is that users possess knowledge about their own needs, but researchers must structure the elicitation carefully to extract actionable insights rather than surface-level opinions.
Compare: User interviews vs. Focus groups—interviews reveal individual depth and private frustrations; focus groups reveal social dynamics and shared language. If an FRQ asks about sensitive topics or individual workflows, interviews win. For understanding how a community thinks about a product category, focus groups provide richer context.
Behavioral methods such as usability testing, eye tracking, and A/B testing observe what users actually do rather than what they say they do. The principle here is that self-reported preferences often diverge from actual behavior, so direct observation of task performance yields more reliable design guidance.
Compare: Usability testing vs. A/B testing—usability testing is qualitative and diagnostic (why do users struggle?), while A/B testing is quantitative and evaluative (which version performs better?). Use usability testing to identify problems; use A/B testing to validate solutions at scale.
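To make the "quantitative and evaluative" side concrete, here is a minimal sketch of how A/B test results are commonly analyzed with a two-proportion z-test. The function name and conversion counts below are hypothetical, and a real experiment would also fix its sample size and significance threshold in advance; the sketch uses only the Python standard library.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Is variant B's conversion rate significantly different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value via the standard normal CDF (erf-based)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: 480/12,000 conversions for A, 600/12,000 for B
z, p = two_proportion_z_test(480, 12_000, 600, 12_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z = 3.74, p = 0.0002
```

With these sample sizes the lift from 4% to 5% is statistically significant; with far fewer visitors the same lift might not be, which is why A/B testing validates solutions at scale but says nothing about why users struggled in the first place.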
Information architecture (IA) methods specifically address how users organize, categorize, and navigate information. The underlying principle is that designers' mental models often differ from users', so IA decisions must be grounded in user cognition, not internal logic.
Compare: Card sorting vs. Heuristic evaluation—card sorting captures user mental models for information organization, while heuristic evaluation applies expert knowledge of usability principles. Card sorting answers "how do users think about this content?" Heuristic evaluation answers "does this interface follow established best practices?"
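As an illustration of how card sorting answers "how do users think about this content?", the sketch below aggregates open card sort results into pairwise co-occurrence counts. The card names and groupings are hypothetical; pairs grouped together by most participants are candidates for the same IA category.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical open card sort: each participant grouped the same cards
# into piles of their own devising.
participants = [
    [{"pricing", "plans"}, {"login", "password reset"}, {"docs", "tutorials"}],
    [{"pricing", "plans", "docs"}, {"login", "password reset", "tutorials"}],
    [{"pricing", "plans"}, {"login", "password reset", "docs", "tutorials"}],
]

# Count how often each pair of cards lands in the same pile.
co_occurrence = defaultdict(int)
for piles in participants:
    for pile in piles:
        for pair in combinations(sorted(pile), 2):
            co_occurrence[pair] += 1

# Pairs grouped together by most participants suggest shared categories.
for pair, count in sorted(co_occurrence.items(), key=lambda kv: -kv[1]):
    print(f"{pair}: {count}/{len(participants)} participants")
```

Teams typically feed counts like these into hierarchical clustering to produce a dendrogram, but even the raw counts expose the dominant mental model.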
Synthesis methods transform raw research data into actionable design tools. The principle is that insights must be synthesized into formats that guide ongoing design decisions; research findings alone don't drive action.
Compare: Personas vs. Journey maps—personas represent who your users are; journey maps represent what they experience over time. Both are synthesis tools, but personas guide feature prioritization while journey maps guide experience optimization across touchpoints.
| Concept | Best Examples |
|---|---|
| Ecological validity (real-world context) | Ethnographic research, Contextual inquiry, Diary studies |
| Quantitative measurement at scale | Surveys, A/B testing |
| Qualitative depth and discovery | User interviews, Ethnographic research, Focus groups |
| Behavioral observation (what users do) | Usability testing, Eye tracking, A/B testing |
| Information architecture decisions | Card sorting, Heuristic evaluation |
| Longitudinal/temporal insights | Diary studies, Journey mapping |
| Synthesis and modeling | Personas, Journey maps, User scenarios |
| Collaborative/participatory approaches | Participatory design, Contextual inquiry |
Use these questions to check whether you can match method to research goal:

1. You need to understand why users abandon a checkout flow, but you don't know where the friction occurs. Which two methodologies would you combine, and why does each contribute something the other can't?
2. Compare and contrast ethnographic research and contextual inquiry. When would you choose immersive observation over the hybrid interview-observation approach?
3. A stakeholder wants to run an A/B test to decide between two navigation structures. What's the limitation of this approach, and which methodology should precede it?
4. Which three methodologies are best suited for discovery phases when you don't yet know what questions to ask? What do they have in common?
5. Your team has completed user interviews and usability testing. Now you need to synthesize findings into artifacts that will guide design decisions for the next six months. Which two synthesis methods would you create, and what different purposes do they serve?