3.4 Scientific Method and Experimentation


Written by the Fiveable Content Team • Last updated August 2025

Steps of the Scientific Method

Overview of the Scientific Method

The scientific method is an iterative process for investigating natural phenomena through empirical evidence. Rather than accepting claims on authority, it requires you to observe, hypothesize, test, and revise. This approach emerged most clearly during the Scientific Revolution of the 16th and 17th centuries, when thinkers began insisting that knowledge should be grounded in observation and experiment.

The process begins with identifying a question or problem based on observations of the natural world. That question needs to be empirically testable, meaning you can design an experiment or gather data to answer it.

Formulating a Hypothesis

A hypothesis is a tentative, testable explanation for what you've observed. It makes a specific prediction about what will happen under certain conditions, often stated in an "if-then" format.

  • Independent variable: the factor you manipulate
  • Dependent variable: the outcome you measure
  • Example: If plants receive fertilizer (independent variable), then they will grow taller (dependent variable) compared to plants not receiving fertilizer.

The hypothesis isn't a guess. It's an informed prediction based on prior observation or existing knowledge, and it must be falsifiable, meaning there has to be some possible outcome that would prove it wrong. If no conceivable result could disprove a claim, it isn't a scientific hypothesis.
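A falsifiable hypothesis can be expressed as a concrete prediction that data could contradict. Here is a minimal Python sketch of the fertilizer example (all heights are invented for illustration):

```python
from statistics import mean

# Hypothetical plant heights in cm after six weeks; values are invented.
fertilized = [24.1, 26.3, 25.0, 27.2, 24.8]
control    = [21.5, 22.0, 23.1, 20.9, 22.4]

def hypothesis_supported(treatment, comparison):
    """If plants receive fertilizer, then they grow taller (on average).

    Falsifiable: a treatment mean at or below the control mean
    would count as evidence against the hypothesis.
    """
    return mean(treatment) > mean(comparison)

print(hypothesis_supported(fertilized, control))  # True for this sample
```

The point is not the arithmetic but the structure: the prediction names a specific outcome that could have come out the other way.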

Designing and Conducting an Experiment

Once you have a hypothesis, you design an experiment to test it. The key steps are:

  1. Identify your variables: determine which factor you'll manipulate (independent variable), what outcome you'll measure (dependent variable), and what you'll hold constant (controlled variables) to prevent confounding.
  2. Write operational definitions: describe precisely how each variable will be manipulated and measured. These let other researchers replicate your experiment exactly.
  3. Collect data: conduct the experiment under controlled conditions, producing either quantitative data (numbers, measurements) or qualitative data (descriptions, observations).
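The three steps above can be sketched as a data-collection plan. Everything here (variable names, definitions, values) is hypothetical, chosen to continue the fertilizer example:

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentPlan:
    # Step 1: identify variables
    independent: str            # the factor manipulated
    dependent: str              # the outcome measured
    controlled: list            # factors held constant
    # Step 2: operational definitions, precise enough to replicate
    definitions: dict = field(default_factory=dict)
    # Step 3: collected data (quantitative here)
    data: list = field(default_factory=list)

plan = ExperimentPlan(
    independent="fertilizer (0 g vs 10 g per pot)",
    dependent="plant height in cm at day 42",
    controlled=["light hours", "water volume", "soil type"],
    definitions={"height": "soil line to tallest leaf tip, ruler to nearest mm"},
)
plan.data.append({"pot": 1, "fertilizer_g": 10, "height_cm": 24.1})
print(len(plan.data))  # 1 record collected so far
```

Writing the plan down this explicitly is exactly what lets another researcher repeat the experiment.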

Analyzing Data and Drawing Conclusions

After collecting data, you determine whether the results support or refute your hypothesis.

  • Statistical tests (such as t-tests or ANOVA) help determine whether results are significant or likely due to chance.
  • Conclusions assess the validity of the hypothesis and point toward next steps: revising the hypothesis, identifying new questions, or replicating the study.
  • Results are always interpreted in the context of existing knowledge. Researchers consider alternative explanations and acknowledge limitations of the study.
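As an illustration of the kind of statistical test mentioned above, a two-sample (Welch's) t statistic can be computed with the standard library alone. A full analysis would also convert t to a p-value, which in practice is done with a stats library such as SciPy; the group data below are invented:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

# Invented measurements for a treatment and a control group.
treatment = [24.1, 26.3, 25.0, 27.2, 24.8]
control   = [21.5, 22.0, 23.1, 20.9, 22.4]
print(round(welch_t(treatment, control), 2))  # ~5.2: a large separation
```

A t value this far from zero suggests the group difference is unlikely to be due to chance alone, though the formal judgment requires the p-value.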

Communicating Results

Science depends on sharing findings so others can evaluate and build on them.

  • Scientists publish results in scholarly journals and present at conferences.
  • Peer review is the process in which experts in the field evaluate a study's methods, results, and conclusions before publication, serving as a quality check.
  • Replication by independent researchers validates findings and reduces the chance that bias or error influenced the original conclusions.

Importance of Experimentation

Role of Experimentation in the Scientific Method

Experimentation is what separates scientific claims from speculation. By manipulating one variable while holding others constant, experiments establish cause-and-effect relationships. Without this controlled manipulation, you can only show that two things are correlated, not that one causes the other.

This distinction matters historically. Before the Scientific Revolution, natural philosophers often reasoned about causes from observation alone. Galileo's inclined plane experiments, for instance, were groundbreaking precisely because he didn't just watch falling objects; he systematically varied conditions and measured outcomes to test specific predictions about motion.

Characteristics of Well-Designed Experiments

  • Replicability: detailed protocols ensure other scientists can repeat the experiment and verify results.
  • Adequate sample size: larger samples better represent the population and reduce sampling error, making it easier to detect real effects.
  • Random assignment: participants are randomly placed into treatment groups, which minimizes pre-existing differences between groups and reduces bias.
  • Appropriate controls: control groups provide a baseline for comparison. In medical research, for instance, placebo controls help distinguish the effect of a treatment from the effect of simply receiving any treatment at all.
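Random assignment is simple to implement, which is part of its appeal. A minimal sketch (the fixed seed is only there so the example is reproducible, not something a real study would rely on):

```python
import random

def randomly_assign(participants, groups=("treatment", "placebo"), seed=0):
    """Shuffle participants, then deal them into groups round-robin,
    so pre-existing differences are spread by chance rather than choice."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    return {g: pool[i::len(groups)] for i, g in enumerate(groups)}

assignment = randomly_assign(range(1, 21))
print({g: len(members) for g, members in assignment.items()})
# Each group gets 10 of the 20 participants.
```

Because placement is decided by the shuffle, neither the experimenter's expectations nor the participants' traits can systematically favor one group.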

Advancing Scientific Knowledge through Experimentation

Experimental outcomes push science forward in both directions. When results support a hypothesis, confidence in it grows. When they don't, the hypothesis gets revised or discarded, and new questions emerge. Both outcomes are productive.

  • Experiments build on previous findings to refine theories and models.
  • Replication in different populations or settings tests whether findings generalize beyond the original study.
  • Meta-analyses synthesize results across many experiments, providing a broader and more reliable picture of a research question.

Contributions of the Scientific Revolution


Shift Toward Empiricism and Inductive Reasoning

Before the Scientific Revolution, European scholars largely relied on the authority of classical texts (especially Aristotle) and religious doctrine. The 16th and 17th centuries brought a fundamental shift: knowledge should come from sensory experience, not inherited authority.

Francis Bacon was a central figure in this shift. He championed inductive reasoning, which starts with specific observations and builds toward general conclusions. This contrasted sharply with the dominant Aristotelian approach of deductive reasoning, which started from accepted general principles and reasoned downward to specific cases.

Here's the difference in practice:

  • Deductive (Aristotelian): "Heavy objects fall faster than light ones" (accepted principle) → therefore this cannonball will fall faster than this musket ball.
  • Inductive (Baconian): Drop objects of different weights many times, record what happens → build a general conclusion from the pattern in your data.
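The inductive procedure can be mimicked in code: generate many drop records, then look for the pattern. The data here are simulated from idealized physics plus measurement noise (these are illustrative numbers, not Galileo's actual data):

```python
import random

random.seed(1)
g, height = 9.8, 10.0  # m/s^2, m

# Simulated measurements: fall time from 10 m for three different masses,
# each repeated 20 times with small random measurement error. Under
# idealized physics the true time, sqrt(2h/g), does not depend on mass.
records = [(mass, (2 * height / g) ** 0.5 + random.gauss(0, 0.02))
           for mass in (0.1, 1.0, 10.0) for _ in range(20)]

# Induction: average the observed times per mass and compare.
for mass in (0.1, 1.0, 10.0):
    times = [t for m, t in records if m == mass]
    print(mass, round(sum(times) / len(times), 2))
# The per-mass averages agree within noise, so the pattern supports the
# general conclusion: fall time is independent of mass.
```

Note the direction of inference: the rule emerges from the accumulated records, rather than being assumed at the outset.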

Bacon argued that you had to gather data through direct observation first, then draw conclusions. Empiricism, the philosophical position that knowledge comes from sensory experience, became the foundation of the new scientific approach.

Key Figures and Their Contributions

Galileo Galilei contributed several ideas that shaped the scientific method:

  • Using mathematics to describe physical phenomena quantitatively. He famously wrote that the book of nature "is written in the language of mathematics."
  • Emphasizing systematic experimentation to test hypotheses. His inclined plane experiments on motion are a classic example: by rolling balls down ramps of varying angles and timing their descent, he established that falling bodies accelerate uniformly regardless of weight.
  • Favoring parsimony (sometimes called Occam's Razor), the idea that the simplest adequate explanation is preferred.
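Galileo's uniform-acceleration result has a famous signature he verified on the inclined plane: distances covered in successive equal time intervals grow as the odd numbers 1, 3, 5, 7. That follows directly from d = ½at², as a few lines of Python confirm:

```python
a = 2.0  # any constant acceleration; the ratios don't depend on its value

def distance(t):
    return 0.5 * a * t * t   # d = 1/2 * a * t^2 for uniform acceleration

# Distance covered during each successive unit time interval.
increments = [distance(n) - distance(n - 1) for n in range(1, 5)]
ratios = [d / increments[0] for d in increments]
print(ratios)  # [1.0, 3.0, 5.0, 7.0] — the odd-number rule
```

This is the kind of specific quantitative prediction that made the experiments testable rather than merely suggestive.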

René Descartes introduced methodological skepticism, the practice of doubting all previous assumptions and rebuilding knowledge from what cannot be doubted. His famous starting point, "I think, therefore I am," was an attempt to find one indubitable foundation. This philosophical stance reinforced the need for empirical testing rather than accepting inherited claims.

Isaac Newton demonstrated the power of the scientific method at its fullest. His Principia Mathematica (1687) derived the laws of motion and universal gravitation, showing that a small set of mathematical principles could explain an enormous range of physical phenomena, from falling apples to planetary orbits. This work laid the foundations of classical mechanics and became a model for what rigorous science could achieve.

Emergence of Scientific Institutions and Norms

The Scientific Revolution also created new institutions for organizing and validating knowledge.

  • The Royal Society of London, founded in 1660, was among the first scientific societies. Its motto, Nullius in verba ("Take nobody's word for it"), captured the new emphasis on empirical verification. It became a model for similar groups across Europe.
  • Peer review emerged as societies invited members to present findings for critique and attempted replication.
  • Scientific journals appeared, most notably the Philosophical Transactions (first published in 1665), which allowed researchers to share experimental results and build on each other's work far more efficiently than private correspondence allowed.
  • Core norms of openness, organized skepticism, and empiricism became defining features of science, replacing the secrecy and deference to authority that had characterized earlier scholarship.

Applying the Scientific Method

Developing Testable Hypotheses

Every experiment starts with a research question and a hypothesis that predicts the effect of the independent variable on the dependent variable. Two requirements matter most:

  • The hypothesis must be specific and testable. Vague claims or unfalsifiable statements (such as purely metaphysical claims) fall outside the scope of scientific investigation.
  • Operational definitions must specify exactly how variables will be manipulated and measured. For example, if you're studying aggression in children, you might operationally define "aggression" as the number of times a child hits a Bobo doll in a 10-minute session. This precision is what makes replication possible.

Identifying and Controlling Variables

  • Independent variable (IV): the factor you manipulate. It may have multiple levels or conditions. For example, testing the effect of caffeine on reaction time might involve giving participants 0 mg, 100 mg, or 200 mg of caffeine.
  • Dependent variable (DV): the outcome you measure. It should be quantifiable using reliable, valid instruments. In the caffeine example, you might measure reaction time in milliseconds using a computerized test.
  • Controlled variables: extraneous factors held constant to prevent confounding. For instance, testing all participants at the same time of day controls for circadian rhythm effects on alertness.
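The caffeine example above maps onto code roughly as follows. The dose effect simulated here is an invented placeholder, not a real pharmacological result:

```python
import random

DOSES_MG = (0, 100, 200)  # independent variable: three levels
CONTROLLED = {"time_of_day": "9:00", "task": "computerized reaction test"}

def measure_reaction_ms(dose_mg, rng):
    """Dependent variable: reaction time in ms.
    Simulated with an assumed (invented) linear dose effect plus noise."""
    return rng.gauss(300 - 0.1 * dose_mg, 15)

rng = random.Random(42)
results = {dose: [measure_reaction_ms(dose, rng) for _ in range(30)]
           for dose in DOSES_MG}
for dose, times in results.items():
    print(dose, "mg ->", round(sum(times) / len(times)), "ms (mean of 30)")
```

Keeping `CONTROLLED` fixed across every session is what prevents time of day or task differences from confounding the dose comparison.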

Evaluating Experimental Designs and Results

Strong experiments share several features:

  • Random assignment of participants to groups reduces bias and ensures groups start out equivalent. Larger sample sizes reduce the likelihood that chance differences between groups will distort results.
  • Statistical analysis determines whether observed differences are likely real or just due to random variation. The choice of test (t-test, ANOVA, regression) depends on the research design and types of variables involved.

When evaluating an experiment, consider:

  • Soundness of the methodology and controls for confounds
  • Potential sources of bias (demand characteristics, experimenter bias) or error (measurement error)
  • Limitations to generalizability (sample characteristics, experimental setting)
  • Practical significance of the findings, not just statistical significance

Replication and meta-analysis are essential for building confidence in results. Direct replication tests whether the original finding is reliable. Conceptual replication tests whether it generalizes to new contexts or populations. Together, they're what turn a single experiment's results into established scientific knowledge.
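Meta-analysis, mentioned above, typically combines study effect sizes weighted by their precision. Here is a minimal fixed-effect (inverse-variance) sketch; the effect sizes and variances are invented:

```python
# Each study: (effect size, variance of that estimate). Numbers invented.
studies = [(0.40, 0.04), (0.25, 0.09), (0.55, 0.02)]

# Fixed-effect model: weight each study by 1/variance,
# so more precise studies count for more in the pooled estimate.
weights = [1 / v for _, v in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5
print(round(pooled, 3), round(pooled_se, 3))
```

The pooled estimate sits closest to the most precise study, and its standard error is smaller than any single study's, which is why synthesizing many experiments gives a more reliable picture than any one of them.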