📅 Curriculum Development

Assessment Types in Education

Why This Matters

Understanding assessment types isn't just about knowing definitions—it's about recognizing how each assessment serves a distinct purpose in the teaching-learning cycle. You're being tested on your ability to match assessment types to their appropriate uses, explain how they inform curriculum decisions, and evaluate their strengths and limitations in different educational contexts. The key concepts here include timing and purpose, measurement approaches, and authenticity of tasks.

When you encounter exam questions about assessment, think beyond "what is it?" to "when would I use it?" and "what does it tell me that other assessments can't?" Don't just memorize the names—know what instructional problem each assessment type solves and how it connects to curriculum alignment, student feedback, and educational accountability.


Assessments by Timing and Purpose

These assessments are distinguished by when they occur in the instructional sequence and what decisions they inform. The timing determines whether the data serves planning, adjustment, or evaluation purposes.

Diagnostic Assessment

  • Administered before instruction begins—reveals prior knowledge, misconceptions, and skill gaps that should shape lesson planning
  • Identifies individual learning needs including potential disabilities, language barriers, or advanced readiness requiring differentiation
  • Directly informs curriculum development by showing teachers where to start instruction and what prerequisite skills need reinforcement

Formative Assessment

  • Occurs during instruction to provide real-time data on student understanding while learning is still in progress
  • Enables immediate feedback loops—teachers adjust instruction and students correct misunderstandings before they solidify
  • Low-stakes by design, encouraging risk-taking and active engagement without the pressure of grades

Summative Assessment

  • Administered at the end of a unit, course, or program—measures cumulative learning against established objectives
  • High-stakes and evaluative, typically used for grades, promotion decisions, or certification of competency
  • Provides accountability data but offers limited opportunity for instructional adjustment since learning has concluded

Compare: Formative vs. Summative—both measure student learning, but formative is assessment for learning (ongoing adjustment) while summative is assessment of learning (final evaluation). If asked how assessment drives instruction, formative is your go-to example.


Assessments by Measurement Approach

These assessments differ in how they interpret scores—either by comparing students to each other or by measuring against fixed standards. This distinction affects what conclusions you can draw from results.

Norm-Referenced Assessment

  • Compares student performance to a representative peer group—results reported as percentiles, stanines, or standard scores
  • Designed to rank and sort students rather than determine mastery; questions are calibrated to spread scores across a bell curve
  • Useful for placement decisions and identifying relative standing, but tells you nothing about what specific content a student has mastered

Criterion-Referenced Assessment

  • Measures performance against fixed learning standards—the question is "can this student do X?" not "how does this student compare to others?"
  • Supports curriculum alignment by directly linking assessment items to specific instructional objectives
  • All students can theoretically achieve mastery, unlike norm-referenced tests where some must score below average by design

Compare: Norm-Referenced vs. Criterion-Referenced—norm-referenced tells you where a student ranks; criterion-referenced tells you what a student knows. State standards tests are criterion-referenced because they measure mastery of specific standards, not relative standing.
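
To make this concrete, here is a minimal Python sketch that interprets one raw score both ways. The peer-group scores, the 80-point cut score, and the stanine conversion are illustrative assumptions, not values from any real test.

```python
# Interpreting the same raw score under two measurement approaches.
# All numbers here are hypothetical, chosen only for illustration.
from statistics import mean, stdev

peer_scores = [52, 58, 61, 63, 65, 67, 70, 72, 75, 81]  # hypothetical norming group
student_score = 72
mastery_cut_score = 80  # hypothetical criterion: 80 of 100 points on standard-aligned items

# Norm-referenced view: where does the student stand relative to peers?
percentile_rank = 100 * sum(s < student_score for s in peer_scores) / len(peer_scores)
z_score = (student_score - mean(peer_scores)) / stdev(peer_scores)  # standard score
stanine = max(1, min(9, round(2 * z_score + 5)))                    # stanine scale: mean 5, SD 2

# Criterion-referenced view: did the student meet the fixed standard?
mastery_met = student_score >= mastery_cut_score

print(f"Norm-referenced: {percentile_rank:.0f}th percentile, z = {z_score:+.2f}, stanine {stanine}")
print(f"Criterion-referenced: mastery {'met' if mastery_met else 'not met'} (cut score {mastery_cut_score})")
```

Notice that the norm-referenced numbers change whenever the comparison group changes, while the criterion-referenced result depends only on the fixed cut score, which is exactly the distinction the exam expects you to articulate.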


Assessments Emphasizing Authentic Application

These assessments prioritize demonstration of skills in meaningful contexts over recall of isolated facts. They reflect constructivist principles and measure deeper understanding through real-world tasks.

Performance-Based Assessment

  • Requires demonstration through real-world tasks—students apply knowledge to solve problems, create products, or complete complex procedures
  • Emphasizes process and application rather than recognition or recall; examples include lab experiments, debates, and design challenges
  • Provides richer diagnostic information about student thinking but requires more time to administer and score

Authentic Assessment

  • Tasks mirror genuine challenges students will encounter outside school—writing for real audiences, solving community problems, or creating functional products
  • Encourages transfer of learning by embedding assessment in meaningful contexts that require critical thinking and problem-solving
  • Contrasts sharply with traditional testing, which often assesses decontextualized knowledge that students struggle to apply

Portfolio Assessment

  • Collects student work over time to document growth, effort, and achievement across multiple dimensions
  • Emphasizes reflection and metacognition—students analyze their own progress and select artifacts that demonstrate learning
  • Assesses both process and product, capturing development that single-point assessments miss

Compare: Performance-Based vs. Authentic—all authentic assessments are performance-based, but not all performance-based assessments are authentic. A chemistry lab is performance-based; testing water quality for a local stream is authentic. The distinction matters when discussing real-world relevance.


Assessments for Accountability and Standardization

These assessments prioritize consistency and comparability across large populations, often serving policy and accountability purposes beyond individual classrooms.

Standardized Testing

  • Administered and scored uniformly across all test-takers to ensure comparability of results
  • Serves large-scale accountability purposes—results influence school ratings, funding allocation, and educational policy decisions
  • Trade-off between reliability and validity: high consistency in measurement but may not capture the full range of student abilities or curricular goals

Compare: Standardized Testing vs. Authentic Assessment—standardized tests maximize reliability and efficiency; authentic assessments maximize validity and depth. Curriculum developers must balance these competing values based on assessment purpose.


Student-Centered Assessment Approaches

These assessments shift evaluation responsibility toward learners, developing metacognitive skills and promoting ownership of the learning process.

Self-Assessment

  • Students evaluate their own work against criteria, developing metacognitive awareness and self-regulation skills
  • Promotes learner autonomy by encouraging goal-setting, reflection, and identification of personal growth areas
  • Provides unique insight into student thinking and self-perception that external assessments cannot capture

Compare: Self-Assessment vs. Portfolio Assessment—both involve student reflection, but self-assessment focuses on evaluative judgment while portfolios emphasize evidence collection. Portfolios often include self-assessment as a component.


Quick Reference Table

Concept | Best Examples
Timing: Before instruction | Diagnostic Assessment
Timing: During instruction | Formative Assessment
Timing: After instruction | Summative Assessment
Measurement: Peer comparison | Norm-Referenced Assessment
Measurement: Standards-based | Criterion-Referenced Assessment
Task authenticity | Performance-Based, Authentic, Portfolio Assessment
Large-scale accountability | Standardized Testing
Student-centered | Self-Assessment, Portfolio Assessment

Self-Check Questions

  1. A teacher wants to know what students already understand about fractions before starting a new unit. Which assessment type is most appropriate, and why does timing matter here?

  2. Compare norm-referenced and criterion-referenced assessments: How would the same student's performance be interpreted differently under each approach?

  3. Which two assessment types both emphasize real-world application but differ in their degree of authenticity? Explain the distinction.

  4. If a curriculum developer wants to ensure assessments align directly with state learning standards, which measurement approach should guide test design? What are the implications for how scores are reported?

  5. A district is debating between standardized tests and portfolio assessments for accountability purposes. What trade-offs in reliability, validity, and practicality should inform this decision?