
Marketing Research

Key Quantitative Research Techniques


Why This Matters

Quantitative research is the backbone of data-driven marketing decisions—and you're being tested on your ability to choose the right technique for the right research question. Understanding these methods isn't just about definitions; it's about knowing when to use surveys versus experiments, why regression analysis reveals different insights than factor analysis, and how sampling decisions affect everything that follows. These techniques connect directly to broader marketing concepts like segmentation, positioning, product development, and campaign evaluation.

The key to mastering this material is recognizing that quantitative techniques fall into distinct categories based on their purpose: data collection, statistical description, relationship testing, and market structure analysis. Don't just memorize what each technique does—know what research problem it solves and how it connects to marketing strategy. When you can explain why a researcher would choose conjoint analysis over cluster analysis, you've moved from memorization to genuine understanding.


Data Collection Foundations

Before any analysis can happen, researchers must gather quality data. These techniques determine what information enters your research and how representative it is of your target population.

Survey Research

  • Primary method for collecting original quantitative data—allows researchers to gather information directly from consumers about attitudes, behaviors, and preferences
  • Multiple administration modes including online surveys, telephone interviews, mail questionnaires, and face-to-face interactions—each with different cost, speed, and response quality tradeoffs
  • Foundation for most marketing research because it generates the numerical data that statistical techniques require for analysis

Sampling Techniques

  • Probability sampling (random, stratified, cluster) gives every population member a known chance of selection—essential for making valid generalizations
  • Non-probability sampling (convenience, quota, purposive) is faster and cheaper but limits how broadly you can apply findings
  • Sample representativeness directly affects external validity—poor sampling undermines even the most sophisticated analysis
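
As a concrete illustration, here is a minimal sketch of simple random vs. stratified sampling using pandas; the respondent frame and age-group column are hypothetical.

```python
import pandas as pd

# Hypothetical sampling frame: 999 respondents tagged by age group
frame = pd.DataFrame({
    "respondent_id": range(999),
    "age_group": ["18-34", "35-54", "55+"] * 333,
})

# Simple random sample: every respondent has an equal chance of selection
simple_random = frame.sample(n=100, random_state=42)

# Stratified sample: draw 10% from within each age group so strata stay proportional
stratified = frame.groupby("age_group").sample(frac=0.10, random_state=42)

print(simple_random["age_group"].value_counts())
print(stratified["age_group"].value_counts())
```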

Questionnaire Design

  • Question structure shapes data quality—poorly worded questions introduce bias and reduce response accuracy
  • Question types include open-ended, closed-ended, and scaled responses—each serving different measurement objectives
  • Response rate optimization depends on length, clarity, and logical flow—design flaws can tank participation and skew results

Data Collection Methods

  • Primary data comes from original research (surveys, experiments, observations)—more expensive but tailored to your specific questions
  • Secondary data leverages existing sources (government statistics, industry reports, internal records)—faster and cheaper but may not fit your exact needs
  • Method selection must align with research objectives, budget constraints, and required data precision

Compare: Survey research vs. experimental design—both collect primary data, but surveys describe what is while experiments test what causes what. If an exam question asks about establishing causation, experiments are your answer; for describing market attitudes, it's surveys.


Measurement and Scaling

Turning abstract concepts like "brand attitude" or "purchase intent" into measurable numbers requires systematic scaling approaches. The quality of your measurement determines the quality of your analysis.

Scaling Techniques

  • Likert scales measure agreement levels (strongly disagree to strongly agree)—the most common approach for attitude measurement in marketing
  • Semantic differential scales use bipolar adjectives (modern/traditional, high quality/low quality) to capture brand perceptions and positioning
  • Interval and ratio scales enable more sophisticated statistical analysis than nominal or ordinal measures—know the measurement level your analysis requires
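
A minimal sketch of how Likert responses are typically coded for analysis, assuming a standard 5-point agreement scale and hypothetical responses:

```python
import pandas as pd

# Hypothetical responses to one agreement statement on a 5-point Likert scale
responses = pd.Series([
    "Strongly agree", "Agree", "Neutral", "Agree",
    "Disagree", "Agree", "Strongly disagree", "Neutral",
])

# Code the verbal scale points as numbers (1 = strongly disagree ... 5 = strongly agree)
likert_map = {
    "Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
    "Agree": 4, "Strongly agree": 5,
}
scores = responses.map(likert_map)

print("Mean agreement:", scores.mean())  # the coded scale is treated as interval data
```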

Compare: Likert scales vs. semantic differential scales—both measure attitudes, but Likert captures agreement with statements while semantic differential maps perceptions along attribute dimensions. Use semantic differential for brand image studies and positioning research.


Descriptive and Inferential Statistics

These techniques transform raw data into meaningful insights. Descriptive statistics tell you what happened; inferential statistics tell you whether it matters.

Descriptive Statistics

  • Measures of central tendency (mean, median, mode) summarize where data clusters—mean for interval/ratio data, median for skewed distributions, mode for categorical data
  • Measures of dispersion (standard deviation, variance, range) reveal how spread out responses are—high dispersion suggests market heterogeneity
  • Foundation for all further analysis—you must describe your data before you can draw inferences from it
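
A minimal sketch of these summary measures with NumPy, using hypothetical purchase-intent ratings:

```python
import numpy as np

# Hypothetical purchase-intent ratings from 10 respondents (1-10 scale)
ratings = np.array([7, 8, 6, 9, 7, 5, 8, 10, 6, 7])

print("mean:  ", ratings.mean())                 # central tendency (interval/ratio data)
print("median:", np.median(ratings))             # robust to skewed distributions
print("std:   ", ratings.std(ddof=1))            # sample standard deviation (dispersion)
print("range: ", ratings.max() - ratings.min())  # simplest measure of spread
```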

Inferential Statistics

  • Enables generalization from sample findings to the broader population—the bridge between your data and actionable marketing insights
  • Confidence intervals quantify uncertainty around estimates—if you repeated the study many times, about 95% of the 95% confidence intervals you constructed would contain the true population value
  • Statistical significance indicates whether findings reflect real patterns or random chance—essential for justifying marketing investments
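
One way to compute a 95% confidence interval for a sample mean, sketched with SciPy on hypothetical satisfaction scores:

```python
import numpy as np
from scipy import stats

# Hypothetical satisfaction scores from a sample of 50 customers (1-10 scale)
rng = np.random.default_rng(0)
scores = rng.normal(loc=7.2, scale=1.5, size=50)

mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean

# 95% confidence interval for the population mean, using the t distribution
ci_low, ci_high = stats.t.interval(0.95, df=len(scores) - 1, loc=mean, scale=sem)
print(f"Sample mean {mean:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```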

Hypothesis Testing

  • Null hypothesis (H₀) assumes no effect or relationship exists; alternative hypothesis (H₁) states your research expectation
  • P-values give the probability of observing results at least as extreme as yours if the null hypothesis were true—typically, p < 0.05 leads to rejecting the null
  • Type I errors (false positives) and Type II errors (false negatives) represent the risks in any hypothesis test—understanding these tradeoffs is exam-critical
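
A minimal sketch of a two-group hypothesis test with SciPy, using hypothetical ad-exposure data:

```python
from scipy import stats

# Hypothetical brand-attitude scores: respondents who saw the new ad vs. the old ad
new_ad = [7.1, 6.8, 7.5, 8.0, 6.9, 7.3, 7.8, 7.0]
old_ad = [6.2, 6.5, 6.0, 6.8, 6.4, 6.1, 6.7, 6.3]

# Independent-samples t-test: H0 says the two group means are equal
t_stat, p_value = stats.ttest_ind(new_ad, old_ad)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject H0: the difference is statistically significant at the 0.05 level.")
else:
    print("Fail to reject H0: no significant difference detected.")
```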

Compare: Descriptive vs. inferential statistics—descriptive summarizes your sample, inferential extends conclusions to the population. An FRQ might ask when each is appropriate: use descriptive for reporting survey results, inferential for claiming those results apply to all consumers.


Relationship and Difference Testing

These techniques answer questions about how variables relate to each other and whether group differences are meaningful. This is where you move from description to explanation.

Regression Analysis

  • Examines relationships between a dependent variable (e.g., sales) and one or more independent variables (e.g., advertising spend, price)—in its simplest form, Y = a + bX + e
  • Predictive power allows forecasting outcomes based on input changes—"if we increase ad spend by 10%, sales should increase by X"
  • Multiple regression handles several predictors simultaneously, revealing which factors matter most while controlling for others
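
A minimal multiple-regression sketch with statsmodels; the weekly sales, ad-spend, and price figures are hypothetical:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical weekly data: unit sales, ad spend ($000s), and price ($)
data = pd.DataFrame({
    "ad_spend": [10, 12, 15, 11, 18, 20, 14, 16],
    "price":    [9.9, 9.9, 8.9, 9.5, 8.5, 8.5, 9.0, 8.9],
    "sales":    [120, 128, 150, 125, 170, 178, 140, 152],
})

# Multiple regression: sales = a + b1*ad_spend + b2*price + e
X = sm.add_constant(data[["ad_spend", "price"]])  # add_constant supplies the intercept a
model = sm.OLS(data["sales"], X).fit()
print(model.params)  # estimated intercept and slope coefficients
```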

ANOVA (Analysis of Variance)

  • Compares means across three or more groups—determines whether differences between groups (e.g., response to three ad campaigns) are statistically significant
  • F-statistic measures variance between groups relative to variance within groups—larger F-values suggest real group differences
  • Post-hoc tests identify which specific groups differ after ANOVA finds overall significance—essential for actionable recommendations
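
A one-way ANOVA sketch with SciPy, using hypothetical purchase-intent scores for three campaigns:

```python
from scipy import stats

# Hypothetical purchase-intent scores under three ad campaigns
campaign_a = [6.5, 7.0, 6.8, 7.2, 6.9]
campaign_b = [7.8, 8.1, 7.6, 8.0, 7.9]
campaign_c = [6.6, 6.9, 7.1, 6.7, 7.0]

# One-way ANOVA: H0 says all three campaign means are equal
f_stat, p_value = stats.f_oneway(campaign_a, campaign_b, campaign_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # large F, small p suggest real group differences
```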

Compare: Regression vs. ANOVA—regression predicts continuous outcomes from continuous or categorical predictors, while ANOVA tests whether group means differ significantly. Use regression for "how much does X affect Y?" and ANOVA for "do these groups perform differently?"


Advanced Multivariate Techniques

When marketing problems involve multiple variables interacting simultaneously, these techniques reveal hidden patterns and structures. Multivariate analysis is where sophisticated market insights emerge.

Multivariate Analysis

  • Analyzes multiple variables simultaneously—captures complex relationships that single-variable approaches miss
  • Accounts for variable interactions—real consumer behavior involves many factors operating together, not in isolation
  • Powers segmentation, targeting, and positioning strategies by revealing how multiple attributes combine to influence preferences

Factor Analysis

  • Reduces data complexity by identifying underlying dimensions (factors) that explain correlations among many variables
  • Data reduction tool—transforms 20 survey questions into 4-5 interpretable factors like "quality perception" or "value consciousness"
  • Exploratory vs. confirmatory—exploratory discovers factor structures; confirmatory tests whether data fits a hypothesized structure
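
A minimal exploratory factor analysis sketch with scikit-learn; the 200 x 6 response matrix is hypothetical (simulated), so the point is the workflow rather than the specific loadings:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical survey matrix: 200 respondents x 6 brand-perception items (1-7 scales)
rng = np.random.default_rng(1)
items = rng.normal(loc=4.0, scale=1.0, size=(200, 6))

# Exploratory factor analysis: compress 6 correlated items into 2 latent factors
fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
print(fa.components_)  # loadings: how strongly each item maps onto each factor
```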

Cluster Analysis

  • Groups similar respondents or objects based on multiple characteristics—the statistical engine behind market segmentation
  • Distance-based algorithms (hierarchical, k-means) determine how "close" observations are across multiple dimensions
  • Segment identification reveals naturally occurring consumer groups for targeted marketing strategies
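
A minimal k-means sketch with scikit-learn on hypothetical two-attribute respondent profiles:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical respondent profiles: price sensitivity and quality importance scores
rng = np.random.default_rng(2)
profiles = rng.normal(size=(300, 2))

# Standardize so each attribute contributes equally to the distance calculation
scaled = StandardScaler().fit_transform(profiles)

# k-means clustering: assign each respondent to one of 3 candidate segments
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)
print(np.bincount(kmeans.labels_))  # how many respondents fall in each segment
```

Standardizing before clustering is a design choice: without it, the attribute measured on the larger scale would dominate the distance calculation.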

Conjoint Analysis

  • Measures attribute trade-offs—reveals how consumers value different product features relative to each other
  • Part-worth utilities quantify the value consumers place on specific attribute levels (e.g., how much extra they'd pay for faster shipping)
  • Optimal product design emerges from understanding which feature combinations maximize preference—critical for product development and pricing
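
One common way to approximate part-worth utilities is a dummy-coded regression on rated product profiles; this sketch with statsmodels uses hypothetical attribute levels and ratings, and real conjoint studies use many more profiles and respondents:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical profile ratings: each profile combines a shipping level and a price level
profiles = pd.DataFrame({
    "shipping": ["2-day", "standard", "2-day", "standard", "2-day", "standard"],
    "price":    ["$20", "$20", "$25", "$25", "$30", "$30"],
    "rating":   [9, 7, 8, 5, 6, 3],
})

# Dummy-coded regression on attribute levels: coefficients approximate part-worth utilities
model = smf.ols("rating ~ C(shipping) + C(price)", data=profiles).fit()
print(model.params)  # larger part-worths indicate more valued attribute levels
```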

Compare: Factor analysis vs. cluster analysis—factor analysis groups variables to find underlying constructs, while cluster analysis groups respondents to find market segments. Both simplify complexity, but factor analysis reduces your questionnaire while cluster analysis reduces your market into targetable segments.


Experimental Approaches

Experiments are the gold standard for establishing causation. When you need to prove that your marketing intervention actually caused an outcome, experimental design is your tool.

Experimental Design

  • Manipulates independent variables while controlling extraneous factors—the only way to establish true cause-and-effect relationships
  • Treatment and control groups allow comparison between those exposed to a marketing intervention and those who weren't
  • Internal validity depends on randomization and control—threats include selection bias, history effects, and maturation
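
A minimal sketch of analyzing a randomized treatment/control comparison with SciPy, using hypothetical conversion rates:

```python
from scipy import stats

# Hypothetical store-level conversion rates after random assignment to the campaign
treatment = [0.14, 0.15, 0.13, 0.16, 0.15, 0.14, 0.13, 0.15]  # exposed to the campaign
control   = [0.10, 0.11, 0.09, 0.12, 0.10, 0.11, 0.10, 0.09]  # not exposed

# Because assignment was random, a significant difference supports a causal claim
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```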

Compare: Experimental design vs. regression analysis—both examine relationships between variables, but experiments manipulate variables to prove causation while regression observes existing relationships (correlation). If the exam asks about proving a campaign caused sales increases, experiments are the answer.


Quick Reference Table

| Concept | Best Examples |
|---|---|
| Data Collection | Survey research, sampling techniques, data collection methods |
| Measurement | Scaling techniques, questionnaire design |
| Describing Data | Descriptive statistics |
| Testing Significance | Inferential statistics, hypothesis testing, ANOVA |
| Predicting Outcomes | Regression analysis |
| Finding Structure | Factor analysis, cluster analysis, multivariate analysis |
| Measuring Preferences | Conjoint analysis |
| Proving Causation | Experimental design |

Self-Check Questions

  1. A researcher wants to determine whether three different packaging designs lead to different purchase intentions. Which technique should they use, and why wouldn't regression analysis be appropriate here?

  2. Compare factor analysis and cluster analysis: both simplify complex data, but what fundamental difference determines when you'd use each one?

  3. Your survey includes 25 questions measuring brand perception. Before running cluster analysis to segment consumers, why might you first run factor analysis?

  4. A marketing manager claims their new ad campaign increased sales by 15%. What research design would provide the strongest evidence for this causal claim, and what elements would it need to include?

  5. When would a researcher choose non-probability sampling despite its limitations for generalization? Identify two scenarios where this tradeoff makes sense.