Intro to Public Health Unit 13 Review

13.2 Types of Program Evaluation

Written by the Fiveable Content Team • Last updated August 2025

Formative, Process, and Outcome Evaluation

Program evaluation tells you whether a public health intervention is working, why it's working (or not), and how to make it better. Without evaluation, programs operate on guesswork. With it, you get evidence to guide decisions at every stage.

There are three main types of evaluation, each tied to a different phase of a program's life.

Types of Program Evaluation

Formative evaluation happens during planning and development, before full implementation. Its job is to assess feasibility and improve the program's design so it actually fits the target population.

  • Uses needs assessments, focus groups, and pilot studies
  • Helps refine interventions to be culturally appropriate and more likely to succeed
  • Example: Before launching a diabetes prevention program in a rural community, you might run focus groups with residents to find out what barriers they face (transportation, cost, language) and adjust the program accordingly

Process evaluation happens during implementation. It asks: "Are we doing what we said we'd do, and is it reaching the right people?"

  • Uses observation, surveys, and program records to track activities
  • Checks whether the program is being carried out as planned (this is called program fidelity)
  • Example: A school-based nutrition program tracks how many classrooms received the curriculum, whether teachers delivered it correctly, and how many students participated (see the sketch below)
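
To make these indicators concrete, here is a minimal Python sketch of how reach, fidelity, and participation might be computed from program records. All names and counts are invented illustration data, not figures from a real program.

```python
# Hypothetical process-evaluation summary for a school-based nutrition
# program. Every count below is made up for illustration.

classrooms_planned = 40       # classrooms the program intended to reach
classrooms_reached = 34       # classrooms that received the curriculum
delivered_as_written = 29     # reached classrooms where teachers followed the lesson plan
students_enrolled = 1000      # students in reached classrooms
students_participating = 820  # students who took part in activities

reach = classrooms_reached / classrooms_planned
fidelity = delivered_as_written / classrooms_reached
participation = students_participating / students_enrolled

print(f"Reach:         {reach:.0%} of planned classrooms")     # 85%
print(f"Fidelity:      {fidelity:.0%} of reached classrooms")  # 85%
print(f"Participation: {participation:.0%} of students")       # 82%
```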

Outcome evaluation happens after enough time has passed for effects to show up. It measures whether the program actually achieved its intended goals.

  • Often uses pre-post designs, comparison groups, or time-series analyses
  • Example: Six months after a smoking cessation program ends, you compare quit rates between participants and a comparison group (see the sketch below)
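
To make the quit-rate comparison concrete, here is a minimal Python sketch. The follow-up counts are invented illustration data, and a real outcome evaluation would also test whether the difference is statistically significant.

```python
# Hypothetical outcome comparison: quit rates six months after a smoking
# cessation program, program participants vs. a comparison group.
# All counts are made up for illustration.

program_quit, program_total = 42, 150        # participants who quit / total
comparison_quit, comparison_total = 18, 150  # comparison group who quit / total

program_rate = program_quit / program_total
comparison_rate = comparison_quit / comparison_total
difference = (program_rate - comparison_rate) * 100  # percentage points

print(f"Program quit rate:    {program_rate:.1%}")     # 28.0%
print(f"Comparison quit rate: {comparison_rate:.1%}")  # 12.0%
print(f"Difference:           {difference:.1f} percentage points")  # 16.0
```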

Key Differences Between Evaluation Types

Feature      | Formative                             | Process                         | Outcome
When         | Before/during early implementation    | Throughout implementation       | After effects have time to appear
Focus        | Program design and planning           | Program execution and delivery  | Program impact and results
Key question | "Will this work for this population?" | "Are we doing it right?"        | "Did it work?"

These types can be used independently, but combining all three gives you the most complete picture of a program's development, delivery, and impact.

Purpose of Program Evaluation

Each evaluation type serves a distinct purpose, and together they support evidence-based decision-making in public health.

Formative evaluation prevents costly mistakes by catching problems early. If a program isn't designed for the community it's supposed to serve, even perfect implementation won't produce results. Formative work increases the likelihood of success before significant resources are committed.

Process evaluation provides the "how" and "why" behind a program's performance. If outcomes are disappointing, process data can reveal whether the problem was a flawed design or poor execution. It also enables real-time adjustments, so you don't have to wait until the end to fix delivery issues.

Outcome evaluation provides the evidence that stakeholders and funders need. It answers the accountability question: "Did this program produce measurable change?" Those findings also contribute to the broader evidence base, helping other practitioners decide which interventions are worth adopting.


Why This Matters for Public Health Practice

  • Public health resources are limited, so evaluation helps direct funding toward strategies that actually work
  • Combining evaluation types gives a comprehensive view across the full program lifecycle
  • Outcome data informs policy decisions, while process data explains what made a program succeed or fail
  • Identifying successful strategies in one context helps guide adaptation for other populations or settings

Timing of Evaluation Types

Planning your evaluation timeline early is critical. If you wait until a program is already running to think about evaluation, you'll miss the chance to collect baseline data or build in the right data collection methods.

Formative evaluation takes place during planning and early implementation. You'd use it when developing a brand-new intervention or adapting an existing one for a new population or context.

Process evaluation runs throughout the entire implementation period. Because it's ongoing, it allows for real-time course corrections when something isn't working as planned.

Outcome evaluation comes after enough time has elapsed for the intervention to produce measurable effects. How long that takes depends on the program. A hand-washing campaign might show results in weeks; a childhood obesity prevention program might need years.

Planning Considerations

  • Build evaluation into the program design from the start so data collection systems are ready when you need them
  • Allocate budget and staff time specifically for evaluation activities
  • Account for the program's expected timeframe for observable impacts (short-term behavior change vs. long-term health outcomes)
  • Align evaluation timelines with reporting requirements and funder expectations
  • Plan for potential delays in implementation that could shift your evaluation schedule

Evaluation Designs: Benefits vs Limitations

Choosing the right evaluation design involves trade-offs between rigor, feasibility, cost, and ethics.

Experimental and Quasi-Experimental Designs

Randomized controlled trials (RCTs) randomly assign participants to intervention or control groups, which gives them strong internal validity (confidence that the program caused the observed effect). A small sketch of the assignment step appears after the list below.

  • Limitation: They can be expensive and sometimes ethically problematic in public health, since a control group may be denied a potentially beneficial intervention
  • They may also have limited external validity, meaning results from a controlled trial don't always translate to messy real-world settings
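
As a rough illustration of the randomization step, the sketch below assigns a hypothetical roster of participants to two arms at random. The roster names and seed are assumptions for the example, not part of any specific trial protocol.

```python
# Minimal sketch of random assignment in a two-arm trial. Randomizing
# balances known and unknown confounders across arms on average, which
# is where an RCT's internal validity comes from.
import random

participants = [f"participant_{i:03d}" for i in range(1, 101)]  # hypothetical roster

random.seed(42)               # fixed seed so the example is reproducible
random.shuffle(participants)  # shuffle, then split the roster in half

midpoint = len(participants) // 2
intervention_arm = participants[:midpoint]
control_arm = participants[midpoint:]

print(len(intervention_arm), len(control_arm))  # 50 50
```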

Quasi-experimental designs are used when randomization isn't feasible. These include approaches like difference-in-differences and regression discontinuity.

  • They're more practical for real-world public health settings
  • Trade-off: They're more vulnerable to selection bias and confounding factors, making it harder to establish that the program caused the outcome

Pre-post designs measure outcomes before and after an intervention in the same group. They're simple to implement, but they can't account for external factors (like a new policy or seasonal trend) that might explain the change. On their own, they're weak for establishing causality.
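
The gap between a naive pre-post estimate and a design that uses a comparison group can be shown with simple arithmetic. The sketch below uses invented prevalence figures to contrast a pre-post change with a difference-in-differences estimate that subtracts the comparison community's background trend; all numbers are hypothetical.

```python
# Hypothetical smoking prevalence (percent) before and after an intervention.

program_pre, program_post = 24.0, 18.0        # community with the program
comparison_pre, comparison_post = 25.0, 22.0  # similar community without it

# Naive pre-post: credits the program with the entire change
pre_post_change = program_post - program_pre          # -6.0 points

# Difference-in-differences: removes the trend seen without the program
background_trend = comparison_post - comparison_pre   # -3.0 points
did_estimate = pre_post_change - background_trend     # -3.0 points

print(f"Pre-post change:  {pre_post_change:+.1f} points")   # -6.0
print(f"Background trend: {background_trend:+.1f} points")  # -3.0
print(f"DiD estimate:     {did_estimate:+.1f} points")      # -3.0
```

Here the pre-post design would credit the program with twice its plausible effect, since half of the observed decline also occurred in the comparison community.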

Alternative Evaluation Approaches

Mixed-methods approaches combine quantitative data (numbers, rates, survey scores) with qualitative data (interviews, open-ended responses). This pairing helps you measure what happened and understand why. The downside is that they require diverse expertise and more resources.

Participatory evaluation involves community members and stakeholders in the evaluation process itself. This increases the relevance and use of findings, but it can introduce bias and take more time to coordinate.

Natural experiments occur when a policy change, disaster, or other event creates conditions that mimic an experiment without anyone designing it. They offer insights into real-world effectiveness, but researchers have no control over implementation or data collection, so the evidence is less tidy.

Longitudinal designs track participants over extended periods to assess long-term impacts. They're valuable for understanding whether effects last, but they face challenges with participant dropout (attrition) and require sustained funding to maintain.