The System Development Life Cycle (SDLC) isn't just a checklist—it's the conceptual framework that explains how and why information systems succeed or fail. When you're tested on SDLC, you're really being asked to demonstrate understanding of project management principles, requirements engineering, quality assurance, and continuous improvement. These phases show up in every real-world IT project, from building a simple database to deploying enterprise-wide systems.
Here's what matters for your exam: you need to understand the logical flow between phases and recognize that each phase addresses a specific type of risk. Planning mitigates scope creep; analysis prevents building the wrong system; testing catches defects before users do. Don't just memorize the phase names—know what problem each phase solves and how skipping or rushing a phase creates downstream failures.
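The phase-to-risk pairing above can be sketched in code. This is a study aid, not part of any SDLC standard; the phase names follow this guide, and the risk wording for design, implementation, and maintenance is an assumption extrapolated from the same framing.

```python
# Each SDLC phase paired with the primary risk it mitigates.
# Planning/analysis/testing risks come from the text above;
# the rest are illustrative assumptions.
SDLC_PHASES = [
    ("Planning", "scope creep and infeasible projects"),
    ("Analysis", "building the wrong system"),
    ("Design", "an unworkable architecture"),
    ("Implementation", "defective or unmaintainable code"),
    ("Testing", "defects reaching users"),
    ("Maintenance", "degradation after launch"),
    ("Evaluation", "continued investment in a system with no business value"),
]

def risk_mitigated(phase: str) -> str:
    """Return the primary risk a phase addresses (KeyError if unknown)."""
    return dict(SDLC_PHASES)[phase]

print(risk_mitigated("Testing"))  # defects reaching users
```

Keeping the phases in an ordered list (rather than a plain dict) preserves the logical flow the exam expects you to know.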
Before any code gets written or any system gets designed, organizations must establish what they're building and why. These phases prevent the most expensive mistake in systems development: building something nobody needs.
Compare: Planning vs. Analysis—both gather requirements, but planning focuses on feasibility and scope while analysis dives into detailed specifications. If an FRQ asks about "determining whether to proceed with a project," that's planning; if it asks about "documenting user needs," that's analysis.
Once requirements are locked, the focus shifts to creating the system. These phases transform abstract requirements into concrete, functional technology through systematic design and coding practices.
Compare: Design vs. Implementation—design answers "what will we build?" while implementation answers "how do we build it?" Exam questions often test whether you can identify which phase a specific activity belongs to (creating wireframes = design; writing code = implementation).
Testing isn't just "checking if it works"—it's a systematic process of verifying that the system meets specifications and validating that it solves the original business problem.
Compare: System Testing vs. User Acceptance Testing—system testing is performed by IT teams against technical specifications, while UAT is performed by business users against real-world scenarios. Both must pass before go-live, but they catch different types of defects.
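The contrast can be made concrete with a toy example. The discount function and the code `SAVE10` below are entirely hypothetical; the point is that the system test checks the technical specification while the UAT checks a realistic business scenario.

```python
# Hypothetical checkout function used to contrast the two test levels.
def apply_discount(total: float, code: str) -> float:
    """Apply a 10% discount for the (made-up) promo code 'SAVE10'."""
    return round(total * 0.9, 2) if code == "SAVE10" else total

# System testing: the IT team verifies the technical specification
# ("SAVE10 reduces the total by exactly 10%; other codes do nothing").
def system_test():
    assert apply_discount(100.0, "SAVE10") == 90.0
    assert apply_discount(100.0, "BADCODE") == 100.0

# User acceptance testing: a business user validates a real-world scenario
# ("a customer's two-item order rings up at the advertised sale price").
def user_acceptance_test():
    cart_total = 59.98  # two items at $29.99
    assert apply_discount(cart_total, "SAVE10") == 53.98

system_test()
user_acceptance_test()
print("all tests passed")
```

Notice that both test levels exercise the same code but answer different questions—verification against the spec versus validation against user expectations—which is why both must pass before go-live.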
A system's launch is just the beginning. These phases ensure the investment continues delivering value and adapts to changing business needs over time.
Compare: Maintenance vs. Evaluation—maintenance is operational (keeping the system running), while evaluation is strategic (determining if the system delivers value). Both happen post-implementation, but they serve different purposes and involve different stakeholders.
| Concept | Best Examples |
|---|---|
| Risk Mitigation | Planning (risk assessment), Testing (defect detection) |
| Requirements Engineering | Analysis (detailed requirements), Planning (initial scope) |
| Technical Construction | Design (architecture), Implementation (development) |
| Quality Assurance | Testing (all levels), Evaluation (performance review) |
| Change Management | Implementation (training), Maintenance (user support) |
| Continuous Improvement | Evaluation (lessons learned), Maintenance (updates) |
| Stakeholder Engagement | Planning (identification), Analysis (validation), Evaluation (feedback) |
1. Which two phases both involve gathering requirements, and how do they differ in scope and depth?
2. A company discovers after launch that its new system can't handle peak holiday traffic. Which phase failed, and what specific activity should have caught this problem?
3. Compare and contrast system testing and user acceptance testing—who performs each, what do they validate, and why are both necessary?
4. If a post-implementation review reveals that the system doesn't align with business goals, which earlier phase likely had deficiencies, and what activities should have prevented this?
5. An FRQ describes a scenario where users refuse to adopt a new system despite it being technically functional. Which phase activities address this problem, and what should have been done differently?