
💻 Design Strategy and Software I

Essential Software Testing Techniques


Why This Matters

In Design Strategy and Software I, you're not just learning to write code—you're learning to build software that actually works reliably for real users. Testing techniques are the bridge between "it works on my machine" and "it works everywhere, every time." When exams ask about software quality assurance, they're testing whether you understand why different testing approaches exist and when to apply each one in the development lifecycle.

These techniques demonstrate core principles: the V-model of development, verification versus validation, and the trade-offs between test coverage and development speed. You'll encounter questions about choosing appropriate testing strategies, identifying defects at different system levels, and understanding how testing fits into both waterfall and agile methodologies. Don't just memorize definitions—know what problem each technique solves and where it fits in the development process.


Testing by Scope Level

These techniques form a hierarchy from smallest to largest scope. Each level catches different types of defects, and skipping levels creates gaps in quality assurance.

Unit Testing

  • Tests individual functions or methods in isolation—the smallest testable pieces of your codebase
  • Automated and developer-written, typically using frameworks like JUnit, pytest, or Jest
  • Catches bugs earliest when they're cheapest to fix; forms the foundation of the testing pyramid
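A minimal pytest sketch of a unit test. The apply_discount function and its rules are hypothetical; in practice the function would live in its own module with the tests in a separate test file:

```python
# Hypothetical unit under test and its tests, shown in one file for brevity.
# Run with `pytest`.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price (hypothetical example)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_normal_case():
    # Tests exactly one function, with no database, network, or other modules involved.
    assert apply_discount(100.0, 25) == 75.0

def test_apply_discount_rejects_out_of_range_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```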

Integration Testing

  • Validates interactions between combined components—ensuring modules communicate correctly through their interfaces
  • Exposes data flow defects and API mismatches that unit tests can't catch
  • Two main approaches: incremental (adding components gradually) versus big bang (testing all at once)
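A sketch of an integration test under assumed components: OrderService and InMemoryInventory are hypothetical, and the test targets the seam between them—whether they exchange data correctly through their interface—rather than either one in isolation:

```python
# Hypothetical components; the test focuses on their interaction.
class InMemoryInventory:
    def __init__(self, stock):
        self._stock = dict(stock)

    def reserve(self, item: str, qty: int) -> bool:
        if self._stock.get(item, 0) >= qty:
            self._stock[item] -= qty
            return True
        return False

class OrderService:
    def __init__(self, inventory: InMemoryInventory):
        self._inventory = inventory

    def place_order(self, item: str, qty: int) -> str:
        # Defect-prone seam: does the service call the interface correctly
        # and interpret the inventory's answer the right way?
        return "confirmed" if self._inventory.reserve(item, qty) else "rejected"

def test_order_service_and_inventory_integrate():
    inventory = InMemoryInventory({"widget": 5})
    service = OrderService(inventory)
    assert service.place_order("widget", 3) == "confirmed"
    assert service.place_order("widget", 3) == "rejected"  # only 2 left in stock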

System Testing

  • Tests the complete, integrated application in an environment mimicking production
  • Validates end-to-end requirements—does the whole system do what it's supposed to do?
  • Black box approach where testers interact with the system as users would, not as developers

Acceptance Testing

  • Determines deployment readiness by validating against business requirements, not just technical specs
  • Involves stakeholders and end-users—the people who will actually use the software
  • Alpha testing happens in-house; beta testing releases to limited external users

Compare: Unit Testing vs. System Testing—both validate functionality, but unit testing isolates individual components while system testing evaluates the complete application. FRQs often ask you to identify which testing level would catch a specific type of defect.


Testing by Knowledge of Code

These approaches differ based on whether the tester can see inside the system. The distinction matters because each reveals different categories of defects.

Black Box Testing

  • No knowledge of internal code structure—testers only see inputs and outputs
  • Focuses on functional requirements and user-facing behavior
  • Ideal for acceptance and functional testing where user perspective matters most

White Box Testing

  • Requires full access to source code and understanding of internal logic
  • Enables code coverage analysis—testing every branch, path, and statement
  • Essential for security audits and performance optimization at the code level
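A white box sketch: the tests below are chosen by reading the internal branch structure of a hypothetical classify_triangle function so that every branch executes at least once (branch coverage can then be measured with a tool such as coverage.py via pytest-cov):

```python
def classify_triangle(a: int, b: int, c: int) -> str:
    # White box thinking: tests are derived from these branches, not from a spec.
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# One test per internal branch of the function above.
def test_invalid_side():
    assert classify_triangle(0, 1, 1) == "invalid"

def test_equilateral():
    assert classify_triangle(2, 2, 2) == "equilateral"

def test_isosceles():
    assert classify_triangle(2, 2, 3) == "isosceles"

def test_scalene():
    assert classify_triangle(3, 4, 5) == "scalene"
```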

Compare: Black Box vs. White Box—black box tests what the system does; white box tests how it does it. Use black box when validating requirements, white box when hunting for logic errors or security vulnerabilities.


Testing for Quality Attributes

Beyond functionality, software must meet non-functional requirements. These techniques validate the "-ilities": usability, reliability, security, and performance.

Performance Testing

  • Measures speed, scalability, and stability under various load conditions
  • Includes subtypes: load testing (expected traffic), stress testing (beyond capacity), endurance testing (sustained load)
  • Identifies bottlenecks before they impact real users in production
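A minimal load-test sketch. Real performance testing normally uses a dedicated tool; here handle_request is a hypothetical stand-in for the operation under load, and the script reports throughput-style metrics (median and tail latency) while many concurrent calls run:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> float:
    """Hypothetical operation under test; returns its own latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for real work, e.g. an HTTP call
    return time.perf_counter() - start

def run_load_test(concurrent_users: int = 50, requests_per_user: int = 20):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(lambda _: handle_request(),
                                  range(concurrent_users * requests_per_user)))
    latencies.sort()
    # The numbers performance testing cares about: how many requests, how slow the worst ones are.
    print(f"requests: {len(latencies)}")
    print(f"median latency: {latencies[len(latencies) // 2] * 1000:.1f} ms")
    print(f"p95 latency:    {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")

if __name__ == "__main__":
    run_load_test()
```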

Usability Testing

  • Real users perform realistic tasks while observers note friction points
  • Evaluates learnability, efficiency, and satisfaction—core UX metrics
  • Cannot be fully automated—requires human judgment about user experience quality

Security Testing

  • Identifies vulnerabilities before attackers do—including SQL injection, XSS, and authentication flaws
  • Penetration testing simulates attacks; vulnerability scanning automates detection
  • Supports compliance with guidance and regulations such as the OWASP Top 10, GDPR, and industry-specific rules

Compare: Performance Testing vs. Usability Testing—both affect user satisfaction, but performance testing measures system behavior under load while usability testing measures human behavior during interaction. A fast but confusing interface fails usability; a clear but slow interface fails performance.


Testing Approaches and Strategies

These define how testing gets executed rather than what gets tested. Choosing between them involves trade-offs in speed, coverage, and cost.

Automated Testing

  • Software executes tests without human intervention—using tools like Selenium, Cypress, or TestNG
  • Excels at regression testing where the same tests run repeatedly after each code change
  • High upfront investment but dramatically reduces long-term testing time and human error

Manual Testing

  • Human testers execute test cases using judgment and intuition
  • Essential for exploratory testing—discovering unexpected issues through creative investigation
  • Irreplaceable for UX evaluation and scenarios requiring subjective assessment

Regression Testing

  • Re-runs existing tests after code changes to ensure nothing broke
  • Critical after bug fixes and feature additions—new code often creates new problems
  • Prime candidate for automation due to repetitive nature and high frequency
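A sketch of how a regression test is typically captured: once a bug is fixed, a test reproducing it joins the automated suite so every future change re-runs it. The parse_price function and the original failing input are hypothetical:

```python
import pytest

def parse_price(text: str) -> float:
    """Hypothetical function that once crashed on inputs with a currency symbol."""
    return float(text.strip().lstrip("$").replace(",", ""))

# Regression test pinned to a fixed defect: it runs on every change,
# so the old bug cannot silently come back.
@pytest.mark.parametrize("raw, expected", [
    ("$1,299.00", 1299.00),   # the original failing input
    ("  42.50 ", 42.50),
    ("$0.99", 0.99),
])
def test_parse_price_handles_currency_symbols(raw, expected):
    assert parse_price(raw) == expected
```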

Compare: Automated vs. Manual Testing—automation wins for speed and consistency; manual wins for flexibility and judgment. The best strategy combines both: automate repetitive regression tests, manually explore new features and edge cases.


Specialized Testing Techniques

These techniques address specific testing challenges with targeted approaches. They complement broader testing strategies rather than replacing them.

Functional Testing

  • Validates software against documented requirements—does it do what the spec says?
  • Tests UI, APIs, databases, and business logic from a user perspective
  • Black box by nature—concerned with behavior, not implementation

Boundary Value Analysis

  • Targets values at the edges of input ranges—where bugs most commonly hide
  • Tests at, just below, and just above boundaries—for example, testing age validation at 17, 18, and 19
  • Highly efficient for finding edge case defects with minimal test cases
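A boundary value sketch matching the age example above. The can_vote function and its cutoff of 18 are hypothetical; the point is concentrating tests at, just below, and just above the boundary:

```python
import pytest

def can_vote(age: int) -> bool:
    """Hypothetical rule: voting is allowed at 18 and above."""
    return age >= 18

# Boundary value analysis: off-by-one errors hide at the edges of the range.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the boundary
    (18, True),   # at the boundary
    (19, True),   # just above the boundary
])
def test_can_vote_at_boundary(age, expected):
    assert can_vote(age) is expected
```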

Test-Driven Development (TDD)

  • Write tests before writing code—a development methodology, not just a testing technique
  • Red-Green-Refactor cycle: write failing test, write minimal code to pass, improve code quality
  • Ensures testability by design and produces comprehensive test suites as a byproduct
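A sketch of one red-green-refactor cycle with a hypothetical slugify function: the test is written first and fails because the function does not exist, then just enough code is written to make it pass, then the implementation is improved while the test keeps it honest:

```python
# Step 1 (red): write the test first -- it fails because slugify() doesn't exist yet.
def test_slugify_lowercases_and_joins_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write the minimal code that makes the test pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Step 3 (refactor): improve the implementation (e.g., strip punctuation),
# re-running the test after every change to confirm nothing broke.
```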

Compare: TDD vs. Traditional Testing—TDD integrates testing into development from the start; traditional testing happens after code is written. TDD catches design flaws earlier but requires discipline and initial slowdown.


Quick Reference Table

Concept | Best Examples
Testing by Scope (small to large) | Unit Testing, Integration Testing, System Testing, Acceptance Testing
Code Visibility Approach | Black Box Testing, White Box Testing
Non-Functional Quality | Performance Testing, Usability Testing, Security Testing
Execution Method | Automated Testing, Manual Testing
Change Management | Regression Testing
Requirements Validation | Functional Testing, Acceptance Testing
Development Methodology | Test-Driven Development (TDD)
Edge Case Detection | Boundary Value Analysis

Self-Check Questions

  1. A bug occurs when two modules exchange data incorrectly, but each module works perfectly in isolation. Which testing level would catch this defect, and why wouldn't unit testing find it?

  2. Compare black box and white box testing: which would you use to validate that a login form meets user requirements, and which would you use to ensure all code branches are executed?

  3. Your team releases weekly updates and needs to verify that new features don't break existing functionality. Which two testing techniques should you combine, and what are the trade-offs of each?

  4. A system passes all functional tests but users complain it's confusing and slow. Which two testing types were likely skipped, and what quality attributes do they measure?

  5. Explain how TDD differs from adding tests after development. If an FRQ asks about reducing defects early in the development lifecycle, which approach provides the strongest answer and why?