In Design Strategy And Software I, you're not just learning to write code—you're learning to build software that actually works reliably for real users. Testing techniques are the bridge between "it works on my machine" and "it works everywhere, every time." When exams ask about software quality assurance, they're testing whether you understand why different testing approaches exist and when to apply each one in the development lifecycle.
These techniques demonstrate core principles: the V-model of development, verification versus validation, and the trade-offs between test coverage and development speed. You'll encounter questions about choosing appropriate testing strategies, identifying defects at different system levels, and understanding how testing fits into both waterfall and agile methodologies. Don't just memorize definitions—know what problem each technique solves and where it fits in the development process.
Testing levels form a hierarchy from smallest to largest scope. Each level catches different types of defects, and skipping levels creates gaps in quality assurance.
Compare: Unit Testing vs. System Testing—both validate functionality, but unit testing isolates individual components while system testing evaluates the complete application. FRQs often ask you to identify which testing level would catch a specific type of defect.
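To see the scope difference in code, here's a minimal sketch using Python's standard unittest module. The apply_discount function and Checkout class are hypothetical examples, not taken from any specific codebase: the first test class isolates one function, while the second exercises two components together, one step up the hierarchy toward system scope.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical pricing helper: the 'unit' under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class Checkout:
    """Hypothetical component that composes the unit above."""
    def total(self, prices, percent):
        return sum(apply_discount(price, percent) for price in prices)

class UnitLevelTests(unittest.TestCase):
    # Unit test: exercises apply_discount in complete isolation.
    def test_discount_applied(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

class IntegrationLevelTests(unittest.TestCase):
    # Higher-level test: exercises Checkout and apply_discount together,
    # catching defects in how the pieces exchange data.
    def test_cart_total(self):
        self.assertEqual(Checkout().total([100.0, 50.0], 10), 135.0)

if __name__ == "__main__":
    unittest.main()
```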
Black box and white box testing differ based on whether the tester can see inside the system. The distinction matters because each approach reveals different categories of defects.
Compare: Black Box vs. White Box—black box tests what the system does; white box tests how it does it. Use black box when validating requirements, white box when hunting for logic errors or security vulnerabilities.
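Here's a minimal sketch of the same idea in Python, using a hypothetical classify_password function. The black-box assertions come straight from a stated requirement; the white-box assertions come from reading the code and targeting every branch.

```python
def classify_password(pw):
    """Hypothetical function under test."""
    if len(pw) < 8:
        return "weak"
    if any(c.isdigit() for c in pw) and any(c.isupper() for c in pw):
        return "strong"
    return "medium"

# Black box: derived from the requirement "passwords under 8 characters are
# weak; longer passwords with a digit and a capital letter are strong."
# The tester never looks at the implementation.
assert classify_password("abc") == "weak"
assert classify_password("Password1") == "strong"

# White box: derived from reading the code, aiming to execute every branch,
# including the fall-through case the requirements never mention.
assert classify_password("passwordxyz") == "medium"   # long, but no digit or capital
assert classify_password("PASSWORD12") == "strong"    # digit-and-capital branch
```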
Beyond functionality, software must meet non-functional requirements. Non-functional testing techniques validate the "-ilities": usability, reliability, security, and performance.
Compare: Performance Testing vs. Usability Testing—both affect user satisfaction, but performance testing measures system behavior under load while usability testing measures human behavior during interaction. A fast but confusing interface fails usability; a clear but slow interface fails performance.
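Performance testing turns "fast enough" into a measurable budget. The sketch below uses only the Python standard library; handle_request is a hypothetical placeholder for the operation being measured, and the 50 ms budget is an assumed example threshold, not a standard value.

```python
import time
import statistics

def handle_request(payload):
    """Hypothetical operation standing in for a real request handler."""
    return sum(i * i for i in range(10_000)) + len(payload)

def measure_latency(runs=200):
    """Time repeated calls and report latency percentiles in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        handle_request("example payload")
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return {
        "median_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * len(samples)) - 1],
    }

if __name__ == "__main__":
    results = measure_latency()
    # A performance test asserts against a budget, e.g. 95% of calls under 50 ms.
    assert results["p95_ms"] < 50, f"p95 latency over budget: {results}"
    print(results)
```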
Execution methods define how testing gets carried out rather than what gets tested. Choosing between them involves trade-offs in speed, coverage, and cost.
Compare: Automated vs. Manual Testing—automation wins for speed and consistency; manual wins for flexibility and judgment. The best strategy combines both: automate repetitive regression tests, manually explore new features and edge cases.
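Here's a minimal sketch of what an automated regression suite looks like with Python's unittest: each previously shipped behavior or fixed bug becomes a recorded case that reruns automatically on every release. The slugify function and its recorded cases are hypothetical.

```python
import re
import unittest

def slugify(title):
    """Hypothetical function that past releases already depend on."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Regression suite: every shipped behavior or past bug fix becomes a recorded
# case, so a weekly release can rerun all of them without manual effort.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  Spaces  everywhere ", "spaces-everywhere"),
    ("C++ & Rust!", "c-rust"),   # hypothetical case added after a past bug report
]

class SlugifyRegressionTests(unittest.TestCase):
    def test_recorded_cases(self):
        for title, expected in REGRESSION_CASES:
            with self.subTest(title=title):
                self.assertEqual(slugify(title), expected)

if __name__ == "__main__":
    unittest.main()
```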
Specialized techniques address specific testing challenges with targeted approaches. They complement broader testing strategies rather than replacing them.
Compare: TDD vs. Traditional Testing—TDD integrates testing into development from the start; traditional testing happens after code is written. TDD catches design flaws earlier but requires discipline and initial slowdown.
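The sketch below walks through one TDD cycle in Python. The ShoppingCart class is a hypothetical example; note the ordering: the test class appears before the implementation it describes, mirroring the red-green-refactor rhythm.

```python
import unittest

# Step 1 (red): the tests are written first, before any implementation exists.
# Running them at this point fails, which is the expected starting state in TDD.
# (Defining the tests above the class works because the name ShoppingCart is
# only looked up when the tests run, not when this module is imported.)
class ShoppingCartTests(unittest.TestCase):
    def test_new_cart_is_empty(self):
        self.assertEqual(ShoppingCart().total(), 0)

    def test_total_sums_added_prices(self):
        cart = ShoppingCart()
        cart.add(3)
        cart.add(7)
        self.assertEqual(cart.total(), 10)

# Step 2 (green): write just enough code to make the failing tests pass.
class ShoppingCart:
    def __init__(self):
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

# Step 3 (refactor): clean up with the passing tests as a safety net, then repeat.

if __name__ == "__main__":
    unittest.main()
```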
| Concept | Best Examples |
|---|---|
| Testing by Scope (small to large) | Unit Testing, Integration Testing, System Testing, Acceptance Testing |
| Code Visibility Approach | Black Box Testing, White Box Testing |
| Non-Functional Quality | Performance Testing, Usability Testing, Security Testing |
| Execution Method | Automated Testing, Manual Testing |
| Change Management | Regression Testing |
| Requirements Validation | Functional Testing, Acceptance Testing |
| Development Methodology | Test-Driven Development (TDD) |
| Edge Case Detection | Boundary Value Analysis (see the sketch after this table) |
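The last row of the table mentions boundary value analysis. Here's a minimal sketch assuming a hypothetical passing threshold of 60: instead of sampling arbitrary inputs, you test at and immediately around each boundary, where off-by-one defects cluster.

```python
def is_passing(score):
    """Hypothetical rule: scores from 60 to 100 inclusive pass."""
    return 60 <= score <= 100

# Boundary value analysis: pick values on each boundary and one step on
# either side, since that is where off-by-one defects tend to hide.
boundary_cases = {
    59: False,   # just below the lower boundary
    60: True,    # on the lower boundary
    61: True,    # just above the lower boundary
    99: True,    # just below the upper boundary
    100: True,   # on the upper boundary
    101: False,  # just above the upper boundary
}

for score, expected in boundary_cases.items():
    assert is_passing(score) == expected, f"boundary case failed at {score}"
print("all boundary cases passed")
```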
A bug occurs when two modules exchange data incorrectly, but each module works perfectly in isolation. Which testing level would catch this defect, and why wouldn't unit testing find it?
Compare black box and white box testing: which would you use to validate that a login form meets user requirements, and which would you use to ensure all code branches are executed?
Your team releases weekly updates and needs to verify that new features don't break existing functionality. Which two testing techniques should you combine, and what are the trade-offs of each?
A system passes all functional tests but users complain it's confusing and slow. Which two testing types were likely skipped, and what quality attributes do they measure?
Explain how TDD differs from adding tests after development. If an FRQ asks about reducing defects early in the development lifecycle, which approach provides the strongest answer and why?