Why This Matters
Testing isn't just about finding bugs—it's about building confidence in your code and understanding how and when different verification strategies apply. In CIS 1200, you're being tested on your ability to reason about program correctness, design testable code, and apply appropriate testing strategies to different scenarios. The methodologies here connect directly to concepts like abstraction, modularity, specification vs. implementation, and the software development lifecycle.
Don't just memorize definitions. Know which testing approach fits which situation, understand the trade-offs between approaches (automation vs. manual, black box vs. white box), and be ready to justify your testing strategy in code reviews or exam questions. When you see a testing question, ask yourself: "What am I trying to verify, and what information do I have access to?"
Code-Level Testing Strategies
These methodologies focus on verifying that individual pieces of code work correctly, from single functions to integrated modules. The key principle: catch bugs at the smallest possible scope before they propagate.
Unit Testing
- Tests individual functions or methods in isolation—the foundation of all testing strategies and your first line of defense against bugs
- Automated and repeatable, allowing you to run tests after every code change to catch regressions immediately
- Requires well-designed modules with clear interfaces; if a function is hard to unit test, that's often a sign of poor abstraction
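As a minimal sketch, here is a unit test for a hypothetical `count_vowels` function, using plain `assert` statements (the function and test names are illustrative, not from any particular library):

```python
def count_vowels(s: str) -> int:
    """Count the vowels in s (a hypothetical function under test)."""
    return sum(1 for ch in s.lower() if ch in "aeiou")

def test_count_vowels():
    # Each assertion checks one behavior of the function in isolation.
    assert count_vowels("") == 0        # edge case: empty input
    assert count_vowels("xyz") == 0     # no vowels at all
    assert count_vowels("AeIoU") == 5   # case-insensitive
    assert count_vowels("banana") == 3  # typical input

test_count_vowels()
```

Note that the test exercises edge cases (empty string, no vowels) alongside typical inputs, which is where unit-level bugs usually hide.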
Integration Testing
- Verifies that components work together correctly—catches interface mismatches and data flow errors that unit tests miss
- Can be incremental or "big bang"; incremental testing (adding one component at a time) makes it easier to isolate failures
- Exposes assumptions about how modules communicate, especially around shared state and method contracts
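A small sketch of the idea, with two hypothetical components (a line parser and a summarizer) tested in combination:

```python
def parse_line(line: str) -> tuple[str, int]:
    """Component 1: parse 'item,qty' into a (name, quantity) pair."""
    name, qty = line.split(",")
    return name.strip(), int(qty)

def total_quantity(lines: list[str]) -> int:
    """Component 2: sum quantities across parsed lines (calls parse_line)."""
    return sum(parse_line(ln)[1] for ln in lines)

def test_integration():
    # This test exercises both components together: a whitespace-handling
    # bug at the interface would surface here even if each component's
    # unit tests passed on clean inputs.
    assert total_quantity(["apple, 2", "pear,3"]) == 5

test_integration()
```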
Regression Testing
- Re-runs existing tests after code changes to ensure new code doesn't break old functionality
- Essential after refactoring, bug fixes, or feature additions—your safety net for maintaining code quality over time
- Automated test suites make regression testing practical; manual regression testing doesn't scale
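One common pattern is to pin each fixed bug with a test so it can never silently return. A sketch with a hypothetical leap-year function:

```python
def is_leap_year(year: int) -> bool:
    """Gregorian leap-year rule (hypothetical module under maintenance)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_leap_year_regressions():
    # Each assertion pins down behavior a past change once broke
    # (scenario is illustrative); re-running these after every edit
    # is what makes regression testing automatic.
    assert not is_leap_year(1900)  # divisible by 100 but not 400
    assert is_leap_year(2000)      # divisible by 400
    assert is_leap_year(2024)
    assert not is_leap_year(2023)

test_leap_year_regressions()
```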
Compare: Unit Testing vs. Integration Testing—both are automated and code-focused, but unit tests verify components in isolation while integration tests verify components in combination. If an exam asks about finding interface defects, integration testing is your answer.
Specification-Based Approaches
These methodologies test what the software should do based on requirements, without necessarily examining how it's implemented. The key principle: validate behavior against specifications.
Functional Testing
- Validates software against requirements—does the system do what it's supposed to do?
- Focuses on inputs and outputs, treating the system as a specification-to-behavior mapping
- Can be manual or automated, though automation is preferred for repeatable test cases
Black Box Testing
- Tests without knowledge of internal code structure—you only see inputs and outputs
- Derives test cases from specifications, making it ideal for validating that requirements are met
- Supports abstraction principles; if you've designed good interfaces, black box testing should be sufficient for functional verification
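A sketch of deriving black box test cases straight from a specification's boundaries. Assume a hypothetical spec: `grade(score)` returns `"pass"` for 60–100, `"fail"` for 0–59, and raises `ValueError` otherwise; the tester never reads the function body:

```python
def grade(score: int) -> str:
    """Implementation is opaque to the black box tester."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

def test_grade_black_box():
    # Every case below comes from the spec's boundary values,
    # not from inspecting the code.
    assert grade(60) == "pass"   # lowest passing score
    assert grade(59) == "fail"   # highest failing score
    assert grade(0) == "fail"
    assert grade(100) == "pass"
    try:
        grade(101)
        assert False, "expected ValueError for out-of-range score"
    except ValueError:
        pass

test_grade_black_box()
```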
Acceptance Testing
- Determines if software is ready for delivery—the final validation before release
- Involves stakeholders or end-users who verify the system meets their actual needs, not just documented requirements
- Alpha testing happens internally; beta testing happens with real users in real environments
Compare: Black Box vs. Functional Testing—black box describes a technique (no code visibility), while functional testing describes a goal (verify requirements). Black box testing is often used for functional testing, but they're not synonyms.
Implementation-Aware Testing
These methodologies require knowledge of the code's internal structure to design effective tests. The key principle: use code knowledge to achieve thorough coverage.
White Box Testing
- Tests internal code structure and logic paths—requires access to and understanding of the source code
- Enables coverage analysis, ensuring tests exercise specific branches, statements, or conditions
- Reveals logical errors that black box testing might miss, like dead code or incorrect conditionals
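A sketch of branch coverage: inspecting the source of a hypothetical three-branch function tells us exactly which inputs are needed to exercise every path:

```python
def classify(n: int) -> str:
    """Function with three branches; white box tests target each one."""
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

def test_classify_all_branches():
    # One input per branch gives full branch coverage; a black box
    # tester without the source couldn't know three inputs suffice.
    assert classify(-5) == "negative"
    assert classify(0) == "zero"
    assert classify(7) == "positive"

test_classify_all_branches()
```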
Test-Driven Development (TDD)
- Write tests before writing code—the test defines the expected behavior, then you implement to pass it
- Follows a red-green-refactor cycle: write a failing test, make it pass, then clean up the code
- Produces inherently testable designs because you're forced to think about interfaces and behavior upfront
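The cycle can be sketched in a few lines (the `fizzbuzz` example is illustrative):

```python
# RED: the test is written first, before fizzbuzz exists; at that point
# running it fails with a NameError, which is the "failing test" step.
def test_fizzbuzz():
    assert fizzbuzz(3) == "fizz"
    assert fizzbuzz(5) == "buzz"
    assert fizzbuzz(15) == "fizzbuzz"
    assert fizzbuzz(7) == "7"

# GREEN: the simplest implementation that makes the test pass.
def fizzbuzz(n: int) -> str:
    out = ("fizz" if n % 3 == 0 else "") + ("buzz" if n % 5 == 0 else "")
    return out or str(n)

# REFACTOR: clean up the implementation while keeping the test green.
test_fizzbuzz()
```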
Behavior-Driven Development (BDD)
- Extends TDD with natural language specifications—tests describe behavior in human-readable format
- Bridges communication gaps between developers and non-technical stakeholders using "Given-When-Then" syntax
- Focuses on user-visible behavior rather than implementation details, keeping tests aligned with requirements
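A plain-Python approximation of a BDD scenario; real BDD tools such as Cucumber or behave parse the Given/When/Then text itself, but here they appear as comments structuring the test (the `Cart` class is hypothetical):

```python
class Cart:
    """Hypothetical shopping cart."""
    def __init__(self):
        self.items = []
    def add(self, name: str, price: float):
        self.items.append((name, price))
    def total(self) -> float:
        return sum(price for _, price in self.items)

def test_scenario_adding_an_item_updates_the_total():
    # Given an empty cart
    cart = Cart()
    # When the user adds a book costing 10.00
    cart.add("book", 10.00)
    # Then the cart total is 10.00
    assert cart.total() == 10.00

test_scenario_adding_an_item_updates_the_total()
```

Notice the test name and structure describe user-visible behavior, not implementation details like how items are stored.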
Compare: TDD vs. BDD—both write tests first, but TDD tests are typically code-level assertions while BDD tests are written in domain language describing user scenarios. TDD is developer-facing; BDD is stakeholder-facing.
Execution Approaches
These methodologies describe how tests are executed rather than what they test. The key principle: choose the right execution strategy for your context.
Automated Testing
- Uses tools to execute tests without human intervention—essential for CI/CD pipelines and large codebases
- Enables frequent testing that would be impractical manually; run your entire test suite on every commit
- Requires upfront investment in test infrastructure but pays off through consistency and speed
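A toy version of what tools like pytest and CI pipelines automate: discover every function whose name starts with `test_`, run it, and report results without human intervention (the discovery convention shown is a simplification):

```python
def test_addition():
    assert 1 + 1 == 2

def test_string_upper():
    assert "ok".upper() == "OK"

def run_all_tests(namespace: dict) -> tuple[int, int]:
    """Run every test_* function found; return (passed, failed) counts."""
    passed = failed = 0
    for name, fn in list(namespace.items()):
        if name.startswith("test_") and callable(fn):
            try:
                fn()
                passed += 1
            except AssertionError:
                failed += 1
    return passed, failed

# A CI pipeline would fail the build whenever the failed count is nonzero.
print(run_all_tests(globals()))
```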
Manual Testing
- Human testers execute tests directly—irreplaceable for exploratory testing and UX evaluation
- Provides flexibility and judgment that automated tests can't replicate; humans notice unexpected issues
- Doesn't scale for repetitive tests but excels at finding edge cases automation might miss
Compare: Automated vs. Manual Testing—automation excels at repetitive, well-defined tests while manual testing excels at exploratory, judgment-based evaluation. Most projects need both; the question is finding the right balance.
Non-Functional Testing
These methodologies verify system qualities beyond correctness—performance, security, and usability. The key principle: correct code that's slow, insecure, or unusable still fails users.
Performance Testing
- Measures speed, scalability, and stability under various conditions—does your O(n²) algorithm actually matter?
- Includes load testing (expected traffic), stress testing (extreme conditions), and endurance testing (sustained load)
- Identifies bottlenecks before they affect real users; essential for any system with performance requirements
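A micro-benchmark sketch of why complexity matters under load: two deduplication functions produce identical output, but their timings diverge sharply as input grows (sizes and function names are illustrative):

```python
import time

def dedup_quadratic(xs):
    """O(n^2): each membership test scans a growing list."""
    out = []
    for x in xs:
        if x not in out:
            out.append(x)
    return out

def dedup_linear(xs):
    """O(n): membership tests use a set."""
    out, seen = [], set()
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

data = list(range(2000)) * 2
for fn in (dedup_quadratic, dedup_linear):
    start = time.perf_counter()
    result = fn(data)
    elapsed = time.perf_counter() - start
    assert result == list(range(2000))  # same answer, very different cost
    print(f"{fn.__name__}: {elapsed:.4f}s")
```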
Stress Testing
- Pushes the system beyond normal limits to find breaking points and recovery behavior
- Reveals failure modes—does your application crash gracefully or corrupt data under extreme load?
- Critical for high-availability systems where understanding failure behavior is as important as preventing it
Security Testing
- Identifies vulnerabilities like SQL injection, XSS, and authentication flaws before attackers do
- Tests data protection and integrity—especially critical for applications handling sensitive information
- Requires adversarial thinking; you're testing what happens when users don't follow the rules
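A sketch of adversarial testing for SQL injection using Python's built-in `sqlite3` and an in-memory database (the table and data are illustrative): the same malicious input leaks a row through string concatenation but is neutralized by a parameterized query.

```python
import sqlite3

# In-memory database with one user row (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input is spliced into the SQL string.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats input as data, never as SQL.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

malicious = "nobody' OR '1'='1"
assert find_user_unsafe(malicious) == [("alice",)]  # injection leaks the row
assert find_user_safe(malicious) == []              # parameterization blocks it
```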
Usability Testing
- Evaluates user experience through observation of real users interacting with the system
- Identifies pain points and confusion that functional testing can't detect—the feature works but users can't find it
- Requires human participants and qualitative analysis; can't be fully automated
Compare: Performance Testing vs. Stress Testing—performance testing asks "how well does it work under expected conditions?" while stress testing asks "what happens when conditions exceed expectations?" Both inform capacity planning but answer different questions.
Quick Reference Table
| Category | Methodologies |
| --- | --- |
| Code-level verification | Unit Testing, Integration Testing, Regression Testing |
| Specification-based | Functional Testing, Black Box Testing, Acceptance Testing |
| Implementation-aware | White Box Testing, TDD, BDD |
| Execution strategy | Automated Testing, Manual Testing |
| Non-functional qualities | Performance Testing, Stress Testing, Security Testing, Usability Testing |
| Test-first development | TDD, BDD |
| Requires code knowledge | White Box Testing, Unit Testing, TDD |
| No code knowledge needed | Black Box Testing, Acceptance Testing, Usability Testing |
Self-Check Questions
- You've just refactored a module's internal implementation without changing its public interface. Which testing methodology is most critical to run afterward, and why?
- Compare and contrast black box testing and white box testing. For each, give one scenario where that approach is clearly preferable.
- A teammate argues that TDD is "just unit testing with extra steps." How would you explain the key difference in when tests are written and why that matters for code design?
- You're building an e-commerce checkout system. Which three testing methodologies would you prioritize, and what specific concerns would each address?
- An FRQ asks you to design a testing strategy for a new feature. What questions should you ask yourself to decide between automated and manual testing approaches?