AP Psychology AMSCO Guided Notes

AMSCO 0.2: Research Methods and Design

I. Types of Research Designs

1. What is the scientific method and how does it guide psychological research?

2. What is a hypothesis and why must it be stated clearly before conducting research?

A. The Scientific Method in Psychology

1. What are the key steps of the scientific method and how do they ensure systematic investigation?

2. What is empirical data and why is it essential to the scientific process?

B. Hypotheses

1. What is a hypothesis and what must it specify about variables and their relationships?

C. Experimental Methodology

1. Why is experimental methodology the only research type that can establish cause-and-effect relationships?

2. What is the difference between an independent variable and a dependent variable?

3. What is random assignment and why is it essential for eliminating group differences?

4. How do experimental and control groups differ, and what role does each play in testing hypotheses?

D. Non-Experimental Methodologies

1. What are the four main types of non-experimental methodologies and how do they differ from experimental research?

2. What is a case study and what types of data sources do researchers use in case studies?

3. How does correlational research examine relationships between variables?

4. What is meta-analysis and what advantage does it provide over individual studies?

5. How does naturalistic observation differ from experimental research in terms of researcher intervention?

II. Design Elements in Experimental Methodology

A. Falsifiable Hypotheses

1. What does it mean for a hypothesis to be falsifiable and why is this essential for scientific research?

2. What five qualities must a falsifiable hypothesis possess?

B. Experimental Hypotheses, Independent Variables, and Dependent Variables

1. How does the 'if-then' format help researchers identify variables and their relationships?

2. What are operational definitions and why are they critical for replicating studies?

C. Populations and Sampling

1. What is the difference between random sampling and random assignment?

2. What is sampling bias and how does it affect the generalizability of research findings?

3. When and why might researchers use convenience sampling instead of random sampling?

4. What is external validity and how does random sampling contribute to it?

D. Experimental and Control Groups, Placebos, and Limiting Experimenter Bias

1. What is the placebo effect and how can researchers control for it in studies?

2. How do single-blind and double-blind studies differ in their approach to eliminating bias?

3. What is experimenter bias and how can it interfere with research outcomes?

E. Qualitative and Quantitative Measures

1. What are the key differences between qualitative and quantitative measures in research?

2. What is a Likert scale and how does it allow researchers to quantify subjective variables?

3. How do generalizability and accuracy differ between qualitative and quantitative research?

F. Representation, Peer Review, and Replication

1. How does appropriate representation of participants affect both research conduct and outcomes?

2. What is peer review and why is it essential to maintaining scientific integrity?

3. What is replication and why is it vital to the scientific process?

III. Design Elements in Non-Experimental Methodologies

A. Case Studies

1. What is a case study and what types of data sources do researchers typically use?

2. What are the advantages and limitations of case studies in terms of generalizability and replication?

B. Correlational Studies

1. What is a positive correlation and what is a negative correlation?

2. Why are correlational studies unable to establish cause-and-effect relationships?

3. What is the directionality problem and how does it complicate interpretation of correlational data?

4. What is the third variable problem and how might it affect correlational findings?

C. Collecting Data Through Surveys

1. What are the advantages and challenges of using surveys for data collection?

2. How can wording and framing of survey questions influence participant responses?

3. What is self-report bias and what is social desirability bias in survey research?

D. Collecting Data Through Interviews

1. What is a structured interview and what are its key features?

2. How do structured interviews help reduce bias and ensure consistency in data collection?

E. Meta-Analyses

1. What is meta-analysis and what advantage does it provide in understanding research findings?

2. What complications can arise when conducting a meta-analysis across multiple studies?

F. Naturalistic Observation

1. What is naturalistic observation and how does it avoid the artificiality of laboratory research?

2. What are the advantages and disadvantages of naturalistic observation as a research method?

IV. Ethics Guidelines for Conducting Psychological Research

A. Informed Consent

1. What is informed consent and what information must participants receive before a study begins?

2. What is informed assent and when is it required in research with minors?

3. What rights do participants have regarding their participation in a study?

B. Minimizing Harm

1. What is the central ethical guideline regarding harm in psychological research?

2. How do researchers assess and justify risks in their studies?

C. Confidentiality and Anonymity

1. How do researchers protect participant confidentiality and anonymity during and after a study?

D. Deception

1. When is deception acceptable in psychological research and what are the limitations?

2. What is debriefing and why is it essential when deception has been used?

3. What are research confederates and what role do they play in deceptive studies?

E. Animal Research

1. What is the Animal Care and Use Committee and how does it protect animals in research?

2. How has technology changed the use of animals in psychological research?

Key Terms

qualitative research

quantitative research

experimental methodology

non-experimental methodology

variable

independent variable

dependent variable

random assignment

participant

falsifiable

hypothesis

operational definitions

replicate

confounding variable

case study

correlation

positive correlation

negative correlation

meta-analysis

naturalistic observation

population

representative sample

generalize

random sample

sampling bias

convenience sample

placebo

placebo effect

single-blind study

double-blind study

experimenter bias

control group

Likert scales

correlational studies

directionality problem

third variable problem

self-report bias

social desirability bias

structured interviews

institutional review board (IRB)

informed consent

informed assent

deception

research confederates

debriefing